• scarabic@lemmy.world · 23 points · 6 days ago

    Unfortunately, AI is moving at such a pace that this IS the usual Apple delayed-follow. They had to feed the public hype for something like 9 months. And it doesn’t seem like a true fix for hallucination is coming, so they made their choice to move ahead. Frankly I blame Wall Street because at this point they will eviscerate anyone who doesn’t have a demonstrated AI plan and shipped products around it. If anyone is at the core of this craze, it’s investors, because they are still in the “we don’t know how big this thing is going to get” phase with AI. We’re all dealing with the consequences.

    Interestingly though, I’m reminded of the early days of the Internet. People did raise the flag that the Internet wouldn’t have the same reliability as traditional media, because anyone could post anything. And that’s remained true. We have mass disinformation campaigns galore, and also specific incidents of false viral stories like “the Pope has died” which are much like this case, just driven by malicious humans instead of hallucinating software.

    It makes me wonder if the problems with AI will never be truly solved and we will just digest AI and learn to live with it, as we have with the internet in general. There is also a comparison in my mind between AI and self-driving cars: every time one of those has a big fuck-up, we all shout and point and cry that the tech will never be trustworthy. Meanwhile, human drivers are out there killing by the hundreds of thousands annually and we don’t even blink at that anymore.

    • vrighter@discuss.tchncs.de · 3 points · 5 days ago

      the problems with (the current forms of generative) AI will not be solved, because they cannot be solved. They are intrinsic to the whole framework.

      • jj4211@lemmy.world · 3 points · 4 days ago

        Well, the problems to be solved aren’t necessarily the technical ones. Another way of “solving” the problems is to stop trying to use it in contexts where its limitations are more trouble than they are worth.

        Here it is being tasked with, and failing to, accurately summarize news, which is ridiculous because those news articles already come with summaries: headlines.

        So a fix may not mean fixing the summary, but just skipping the attempt as superfluous.

        There are uses for LLMs in their current state, but they’re hard to appreciate when they’re being crammed down our throats relentlessly for things we never needed them for, and we watch them screw things up.

      • scarabic@lemmy.world · 4 points · 5 days ago

        Error correction is also intrinsic to all of computing and telecommunications, though. That’s a loose comparison but I hope we can make progress on this and get it to a manageable state, even if zero is impossible in principle. A lot of things in life only asymptotically approach zero and yet we live.

        • vrighter@discuss.tchncs.de · 7 points · edited · 4 days ago

          This is not an error correction issue, though. Error correction means taking known data and adding redundancy to it so that damaged pieces can be repaired. This makes the message longer.
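
          To illustrate what “adding redundancy” means here, a minimal sketch of a toy 3x repetition code (the simplest error-correcting scheme, used purely as an illustration; not anything an LLM involves): every bit is sent three times, the message gets longer, and a single damaged copy can be repaired by majority vote.

          ```python
          # Toy 3x repetition code: redundancy makes the message longer,
          # but lets a single damaged copy per group be repaired.

          def encode(bits: list[int]) -> list[int]:
              """Repeat every bit three times (adds redundancy, lengthens the message)."""
              return [b for bit in bits for b in (bit, bit, bit)]

          def decode(coded: list[int]) -> list[int]:
              """Majority-vote each group of three to repair one damaged copy."""
              return [1 if sum(coded[i:i + 3]) >= 2 else 0 for i in range(0, len(coded), 3)]

          message = [1, 0, 1, 1]
          sent = encode(message)          # 12 bits on the wire instead of 4
          sent[4] ^= 1                    # damage one copy in transit
          assert decode(sent) == message  # the known redundancy lets us repair it
          ```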

          An llm’s output does not contain error correction. It’s just the output, and it doesn’t contain any errors, mathematically speaking: the hallucination is the correct output. It is what the statistics it gathered from its training set determined is most likely. A “correct” llm output is indistinguishable from a “hallucination”, mathematically, and always will be. A hallucination is simply “some output that some human, somewhere, doesn’t like”, and that’s uncomputable. Outputs that people subjectively consider “hallucinations” cannot be eliminated, because an llm is, fundamentally, a probabilistic algorithm. If you added error correction to an llm’s output, all you’d be able to recover is the llm’s original output, “hallucinations” and all.

          Tldr: “hallucinations” are a subjective thing. A “hallucination” is not an error that can be corrected after the fact, because it is not an error in the first place.
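
          To make the “most likely output” point concrete, here is a toy sketch with made-up numbers (hypothetical tokens and probabilities, nothing like a real model’s vocabulary or distribution): generation just samples from a learned distribution over next tokens, and a factually wrong continuation can simply be the most probable one, with nothing in the math marking it as an error.

          ```python
          # Toy next-token sampling. The numbers are invented for illustration:
          # a false continuation can be the single most probable token, and the
          # sampling step treats it exactly like any "correct" token.

          import random

          # Hypothetical learned distribution for a prompt like "The Pope has ..."
          next_token_probs = {
              "died": 0.45,       # false, but the most likely continuation here
              "spoken": 0.30,
              "recovered": 0.25,
          }

          def sample(probs: dict[str, float]) -> str:
              """Draw one token according to the distribution, as generation does."""
              tokens, weights = zip(*probs.items())
              return random.choices(tokens, weights=weights, k=1)[0]

          print(sample(next_token_probs))  # "died" comes out most often; the math is doing its job
          ```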