• Malfeasant@lemmy.world
    17 hours ago

    You say our thought processes are not well understood, yet you’re so sure it’s better. How are you so sure?

    • sexhaver87@sh.itjust.works
      16 hours ago

      Our thought processes rely on mechanisms that are not well understood, to the point where LLMs are unable to mimic them effectively (a limitation of the current state of the art, not a knock on the programmers). One example: ChatGPT has encouraged self-harm in situations where a human assistant would not.