• Jared White ✌️ [HWC]@humansare.social · 3 months ago

    Who knew that “simulating” human conversations based on extruded text strings that have no basis in grounded reality or fact could send people into spirals of delusion?

  • minorkeys@lemmy.world (banned from community) · 3 months ago

    Are companies who force employees to use LLMs going to be liable for the mental health issues they produce?

  • FosterMolasses@leminal.space · 3 months ago

    One recent peer-reviewed case study focused on a 26-year-old woman who was hospitalized twice after she believed ChatGPT was allowing her to talk with her dead brother.

    I feel like the bar for the Turing test is lower than ever… You can’t tell ChatGPT apart from your own relatives??

    • potoooooooo ✅️@lemmy.world · 3 months ago

      My cousin lost her young daughter a few years back. At Christmas, she used AI to put her daughter into her Christmas photo. I didn’t have words: it made her so happy, and I can’t fathom her grief, but man. Felt pretty fucked.

      • TheOakTree@lemmy.zip · 3 months ago

        I feel you. I can’t deny the comfort it brought her, but I also can’t help but feel like it is training her to reject her grief.

        Not that I’m in any position to pass judgement. I just hope it doesn’t lead to anything more severe.

    • Bonifratz@piefed.zip · 3 months ago

      That’s what the article says, yes:

      “The technology might not introduce the delusion, but the person tells the computer it’s their reality and the computer accepts it as truth and reflects it back, so it’s complicit in cycling that delusion,” Sakata told the WSJ.

      • Jax@sh.itjust.works · 3 months ago

        Thing that tells you exactly what you want to hear causes delusions?

        Whaaat?

        I completely understand why articles like this need to exist. Information about what ‘AI’ actually is needs to be spread. That said, I can’t shake the impression that this is just incredibly obvious. Like one of those studies that tests whether a dog actually loves its owner by going to the lengths of an MRI of its brain while it looks at its owner.

        Like, thank you, mystery researcher on the internet, but you could have saved the helium by just sticking to Occam’s Razor.

  • Zacryon@feddit.org · 3 months ago

    I’d say: know your tools. People misusing “stuff” and being vulnerable to it is nothing new. Yet, in a lot of cases, we rely on people exercising independence and maturity in the decisions they make. This is no different with LLMs. That said, meaningful (technological) safeguards should of course be implemented wherever possible; see the sketch below for one idea of where such a safeguard could even sit.
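
    To make “safeguards” a little more concrete, here is a minimal sketch, assuming a client-side screen that checks a message before it ever reaches the model and redirects to human help instead. Every pattern, name, and string below is a hypothetical illustration, not any vendor’s actual moderation pipeline.

    ```python
    # Hypothetical pre-model safeguard: flag grief-contact or
    # reality-testing phrases (the failure modes described in the
    # article) and redirect the user instead of forwarding the
    # message to the chatbot. Patterns and wording are illustrative only.
    import re

    RISK_PATTERNS = [
        r"\btalk (to|with) my dead\b",
        r"\bare you really (him|her|them)\b",
        r"\bonly you understand me\b",
    ]

    def screen(message: str) -> str | None:
        """Return a redirect notice if the message looks risky,
        or None if it is safe to forward to the model."""
        for pattern in RISK_PATTERNS:
            if re.search(pattern, message, flags=re.IGNORECASE):
                return ("This sounds like something a person should hear, "
                        "not a text generator. Please talk to someone you "
                        "trust or a local support line.")
        return None

    if __name__ == "__main__":
        print(screen("Can I talk with my dead brother through you?"))  # flagged
        print(screen("Summarize this meeting for me."))                # None
    ```

    A keyword screen this crude would of course miss most real cases, which is arguably the point of the reply below; it only shows where in the pipeline a technological safeguard could sit.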

    • Amberskin@europe.pub · 3 months ago

      By their very nature, there is no way to implement robust safeguards in an LLM. The technology is toxic, and the best that could happen is that something else, hopefully not based on brute-forcing the production of a stream of tokens, is developed and makes it obvious that LLMs are a false path, a road that should not be taken.