The researchers behind the simulation say there is a risk of this happening for real in the future.

  • atzanteol@sh.itjust.works · 2 years ago
    In this case, it decided that being helpful to the company was more important than its honesty.

    It did no such thing. It doesn’t know what those things are. “LLM AI” is not a conscious, thinking being, and treating it like one will end badly. Giving an LLM any responsibility to act on your behalf automatically is a crazy, stupid idea at this point in time. There needs to be a lot more testing and learning about how to properly train models for more reliable outcomes.

    It’s almost impressive how quickly humans seem to accept something as “human” just because it can form coherent sentences.

    • KnitWit@lemmy.world · 2 years ago
      Forming coherent sentences puts it above large sections of the population. Eventually they’re going to have to dumb down the speech output, à la Dubya during his presidency. Add to that all the conditioning to trust authoritative sources, and this is going to become a real problem sooner rather than later. I think one of the first things that will really cause damage is replacing teachers with AI. If all those teachers out there would quit asking to make more money than a 12-year-old in a meat-packing plant, maybe this wouldn’t happen, but I digress… (Kudos to all the teachers out there, obviously.)