Note: this lemmy post was originally titled MIT Study Finds AI Use Reprograms the Brain, Leading to Cognitive Decline and linked to this article, which I cross-posted from this post in !fuck_ai@lemmy.world.

Someone pointed out that the “Science, Public Health Policy and the Law” website which published this click-bait summary of the MIT study is not a reputable publication deserving of traffic, so, 16 hours after posting it, I am editing this post (as well as the two other cross-posts I made of it) to link to MIT’s page about the study instead.

The actual paper is here and was previously posted on !fuck_ai@lemmy.world and other lemmy communities here.

Note that the study with its original title got far fewer upvotes than the click-bait summary did 🤡

  • DownToClown@lemmy.world
    7 months ago

    The obvious AI-generated image and the generic name of the journal made me think that there was something off about this website/article and sure enough the writer of this article is on X claiming that covid 19 vaccines are not fit for humans and that there’s a clear link between vaccines and autism.

    Neat.

    • Arthur Besse@lemmy.mlOP
      7 months ago

      Thanks for pointing this out. Looking closer I see that that “journal” was definitely not something I want to be sending traffic to, for a whole bunch of reasons - besides anti-vax they’re also anti-trans, and they’re gold bugs… and they’re asking tough questions like “do viruses exist” 🤡

      I edited the post to link to MIT instead, and added a note in the post body explaining why.

    • Tad Lispy@europe.pub
      7 months ago

Thanks for the warning. Here’s the link to the original study, so we don’t have to drive traffic to that guy’s website.

      https://arxiv.org/abs/2506.08872

I haven’t had time to read it yet, and now I wonder whether the article represented it accurately.

  • Wojwo@lemmy.ml
    7 months ago

Does this also explain what happens with middle and upper management? As people move up the ranks over the course of their careers, I swear they get dumber.

    • ALoafOfBread@lemmy.ml
      7 months ago

      That was my first reaction. Using LLMs is a lot like being a manager. You have to describe goals/tasks and delegate them, while usually not doing any of the tasks yourself.

  • Tracaine@lemmy.world
    7 months ago

    I don’t refute the findings but I would like to mention: without AI, I wasn’t going to be writing anything at all. I’d have let it go and dealt with the consequences. This way at least I’m doing something rather than nothing.

I’m not advocating for academic dishonesty, of course; I’m only saying it doesn’t look like they bothered to look at the issue from this angle:

“What if the subject was planning on doing nothing at all, and the AI enabled them to expend the bare minimum of effort they otherwise would have avoided?”