You know how Google’s new feature called AI Overviews is prone to spitting out wildly incorrect answers to search queries? In one instance, AI Overviews told a user to put glue on pizza to keep the cheese from sliding off (pssst… please don’t do this).

Well, according to an interview at The Verge with Google CEO Sundar Pichai, published earlier this week, just before criticism of the outputs really took off, these “hallucinations” are an “inherent feature” of the AI large language models (LLMs) that drive AI Overviews, and this feature “is still an unsolved problem.”

  • callouscomic@lemm.ee · 7 months ago

    “Most published journal articles are horseshit, so I guess we should be okay with this too.”

    • Turun@feddit.de · 7 months ago

      No, it’s simply contradicting the claim that fixing it is possible.

      We literally don’t know how to fix it. We can put on bandaids, like training on “better” data and fine-tuning the model to say “I don’t know” half the time. But the fundamental problem is simply not solved yet.
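
      As a concrete illustration of that second bandaid, here is a minimal sketch (in Python) of confidence-gated abstention: suppress the model’s answer when its average per-token probability is low and say “I don’t know” instead. Everything here is hypothetical: `answer_or_abstain`, the `token_logprobs` input, and the threshold are made up for illustration, not any vendor’s actual API. And it is a bandaid precisely because a model’s confidence and its correctness are only loosely correlated.

      ```python
      import math

      def answer_or_abstain(answer: str, token_logprobs: list[float],
                            threshold: float = 0.5) -> str:
          """Return the answer only if the model's average per-token
          probability clears a confidence threshold; otherwise abstain.
          token_logprobs: log-probability the model assigned to each
          generated token (hypothetical; supplied by your LLM API)."""
          if not token_logprobs:
              return "I don't know."
          # Geometric mean of per-token probabilities = exp(mean log-prob).
          avg_confidence = math.exp(sum(token_logprobs) / len(token_logprobs))
          return answer if avg_confidence >= threshold else "I don't know."

      # A high-confidence answer passes through unchanged...
      print(answer_or_abstain("Paris", [-0.05, -0.1]))      # Paris
      # ...while a low-probability (often hallucinated) one is suppressed.
      print(answer_or_abstain("Glue", [-2.3, -1.9, -2.8]))  # I don't know.
      ```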