• sexhaver87@sh.itjust.works
    5 days ago

    The inner workings of “AI” (see: large language model) are nothing more than a probabilistic game of guess the next token. The inner workings of human intelligence and consciousness are not fully understood by modern science. Our thought processes are somehow “better” because the artificial version of them is a cheap imitation that’s practically no better than flipping a coin or rolling a die.
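    For concreteness, the “guess the next token” loop described above can be sketched with a toy bigram table standing in for the model (all tokens and probabilities here are invented for illustration; a real LLM scores an entire vocabulary with a learned neural network, not a lookup table):

```python
import random

# Toy stand-in for a language model: for each token, a hand-written
# probability distribution over possible next tokens. Everything in
# this table is made up for illustration.
BIGRAM_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "end": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"ran": 0.7, "sat": 0.3},
    "sat": {"end": 1.0},
    "ran": {"end": 1.0},
}

def sample_next(token: str, rng: random.Random) -> str:
    """Draw the next token from the model's probability distribution."""
    candidates = BIGRAM_PROBS[token]
    return rng.choices(list(candidates), weights=list(candidates.values()))[0]

def generate(start: str, seed: int = 0, max_len: int = 10) -> list[str]:
    """Repeat 'guess the next token' until an end marker or a length cap."""
    rng = random.Random(seed)
    tokens = [start]
    while tokens[-1] != "end" and len(tokens) < max_len:
        tokens.append(sample_next(tokens[-1], rng))
    return tokens

print(generate("the"))
```

    Note the sampling is weighted, not uniform: “guess the next token” means drawing from a learned distribution, which is what separates it from a literal coin flip.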

    • Malfeasant@lemmy.world
      24 hours ago

      You say our thought processes are not well understood, yet you’re so sure they’re better. How are you so sure?

      • sexhaver87@sh.itjust.works
        23 hours ago

        Our thought processes contain mechanisms that are not well understood, to the point where LLMs are unable to effectively mimic them (through no fault of their programmers). An example of this would be ChatGPT encouraging self-harm where a human assistant would not.