• ragebutt@lemmy.dbzer0.com
    7 days ago

    I work as a therapist and if you work in a field like mine you can generally see the pattern of engagement that most AI chatbots follow. It’s a more simplified version of Socratic questioning wrapped in bullshit enthusiastic HR speak with a lot of em dashes

    There are basically six broad response types from ChatGPT, for example: tell me more, reflect what was said, summarize key points, ask for elaboration, and shut down. The last is a failsafe for when you say something naughty or not in line with OpenAI's mission (e.g. something that might generate a response you could screenshot and that would look bad), or when it appears you're getting fatigued and need a moment to reflect.

    The first five always come with encouragers for engagement: do you want me to generate a PDF, or make suggestions about how to do this? They also have dozens, if not hundreds, of variations so the conversation feels "fresh", but once you recognize the structural pattern it will feel very stupid and mechanical every time.
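    To make the pattern concrete, here's a toy sketch of that loop in plain Python. It's only an illustration of the structure described above, not anything resembling OpenAI's actual implementation; the moves, encouragers, and blocked-word filter are all made-up placeholders.

    ```python
    import random

    # Toy sketch of the engagement pattern described above (not OpenAI's code):
    # a handful of structural "moves", an encourager appended to each one,
    # and a shutdown branch as the failsafe.

    MOVES = [
        "Tell me more about that.",
        "It sounds like that was really difficult for you.",    # reflect
        "So far you've mentioned work stress and poor sleep.",  # summarize
        "What do you mean when you say it felt 'off'?",         # ask for elaboration
    ]

    ENCOURAGERS = [
        "Do you want me to generate a PDF of this?",
        "Want me to suggest some concrete next steps?",
        "Should I turn this into a checklist?",
    ]

    BLOCKED_WORDS = {"screenshot-bait"}  # stand-in for a real moderation filter


    def respond(user_message: str) -> str:
        """Pick a canned structural move, or shut down if the filter trips."""
        if any(word in user_message.lower() for word in BLOCKED_WORDS):
            return "I'm not able to help with that."  # the shutdown / failsafe branch

        move = random.choice(MOVES)
        encourager = random.choice(ENCOURAGERS)
        # Lots of surface variation on the same few moves keeps it feeling "fresh".
        return f"{move} {encourager}"


    if __name__ == "__main__":
        print(respond("I had a rough week at work."))
        print(respond("Give me some screenshot-bait."))
    ```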

    Every other one I've tried works more or less the same. It makes sense; this is a good way to gather information and keep a conversation going. It's also not the first time big tech has read old psychology journals and used the information for evil (see: operant conditioning influencing algorithm design and gacha/mobile gaming to get people addicted more efficiently).

    • JacksonLamb@lemmy.world
      7 days ago

      That may explain why people who use LLMs for utility/work tasks actually tend to develop stronger parasocial attachments to them than people who deliberately set out to converse with them.

      On some level the brain probably recognises the pattern when one's full attention is on the interaction.

    • Smee@poeng.link
      7 days ago

      shut down. The last is a fail safe for if you say something naughty/not in line with OpenAI’s mission

      Play around with self-hosting some uncensored/retrained AIs for proper crazy times.
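      For anyone who wants to try, one common route is llama.cpp via the llama-cpp-python bindings with a locally downloaded GGUF checkpoint. The model path below is just a placeholder; point it at whatever retrained/uncensored model you grab.

      ```python
      # Minimal local-inference sketch using llama-cpp-python (one of several
      # self-hosting options). The model path is a placeholder for whatever
      # GGUF checkpoint you've downloaded.
      from llama_cpp import Llama

      llm = Llama(model_path="models/your-model.Q4_K_M.gguf", n_ctx=2048)

      out = llm(
          "Say something a hosted chatbot would refuse to.",
          max_tokens=128,
      )
      print(out["choices"][0]["text"])
      ```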