I have many conversations with people about Large Language Models like ChatGPT and Copilot. The idea that “it makes convincing sentences, but it doesn’t know what it’s talking about” is a difficult concept to convey or wrap your head around, precisely because the sentences are so convincing.

Any good examples on how to explain this in simple terms?

Edit: some good answers already! I find especially that the emotional barrier is difficult to break. If an AI says something malicious, our brain immediately jumps to “it has intent”. How can we explain this away?

  • LainTrain@lemmy.dbzer0.com · 13 points · 7 months ago

    That analogy is hard to come up with, because asking whether it even comprehends meaning first requires answering the unanswerable question of what meaning actually is, and whether or not humans are also just spicy pattern predictors / autocompletes. Predicting patterns is, after all, the whole point of evolving intelligence: being able to connect cause and effect and anticipate the future helps with not starving. The line is far blurrier than most are willing to admit, and it ultimately hinges on our experience of sapience rather than on being able to strictly define knowledge and meaning.

    Instead it’s far better to say that ML models are not sentient; they are like a very big brain that’s switched off, but which we can access by stimulating it with a prompt.
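    The “spicy autocomplete” framing above can be made concrete with a toy sketch (my own illustration, not anything from the thread): a bigram model that learns only which word tends to follow which, then chains statistically plausible continuations with no notion of meaning at all.

    ```python
    import random
    from collections import defaultdict

    # Toy "autocomplete": learn which word follows which in a tiny corpus.
    # There is no understanding here, only co-occurrence statistics.
    corpus = "the cat sat on the mat and the dog sat on the rug".split()

    following = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev].append(nxt)

    def generate(start, length=8, seed=0):
        """Chain plausible next words from `start`; stop at a dead end."""
        random.seed(seed)
        word, out = start, [start]
        for _ in range(length):
            options = following.get(word)
            if not options:
                break
            word = random.choice(options)  # pick a frequent continuation
            out.append(word)
        return " ".join(out)

    print(generate("the"))
    ```

    The output reads like grammatical English because every adjacent pair was seen in training, yet the model “knows” nothing; an LLM is this idea scaled up enormously, predicting tokens instead of words.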

    • Hucklebee@lemmy.world (OP) · 6 points · 7 months ago

      Interesting thoughts! Now that I think about this, we as humans have a huge advantage by having not only language, but also sight, smell, hearing and taste. An LLM basically only has “language.” We might not realize how much meaning we create through those other senses.

      • CodeInvasion@sh.itjust.works · 5 points · 7 months ago

        To add to this insight: there are many recent publications showing the dramatic improvements that come from adding another modality, such as vision, to language models.

        While this is conjecture, loosely supported by existing research, I personally believe that multimodality is the secret to understanding human intelligence.