• U7826391786239@lemmy.zip
    link
    fedilink
    English
    arrow-up
    198
    arrow-down
    2
    ·
    edit-2
    4 months ago

    i don’t think it’s emphasized enough that AI isn’t just making up bogus citations with nonexistent books and articles, but increasingly actual articles and other sources are completely AI generated too. so a reference to a source might be “real,” but the source itself is complete AI slop bullshit

    https://www.tudelft.nl/en/2025/eemcs/scientific-study-exposes-publication-fraud-involving-widespread-use-of-ai

    https://thecurrentga.org/2025/02/01/experts-fake-papers-fuel-corrupt-industry-slow-legitimate-medical-research/

    the actual danger of it all should be apparent, especially in any field related to health science research

    and of course these fake papers are then used to further train AI, causing factually wrong information to spread even more

  • brsrklf@jlai.lu
    link
    fedilink
    English
    arrow-up
    135
    ·
    4 months ago

    Some people even think that adding things like “don’t hallucinate” and “write clean code” to their prompt will make sure their AI only gives the highest quality output.

    Arthur C. Clarke was not wrong but he didn’t go far enough. Even laughably inadequate technology is apparently indistinguishable from magic.

    • Clay_pidgin@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      45
      ·
      4 months ago

      I find those prompts bizarre. If you could just tell it not to make things up, surely that could be added to the built in instructions?

      • mushroommunk@lemmy.today
        link
        fedilink
        English
        arrow-up
        50
        ·
        4 months ago

        I don’t think most people know there’s built in instructions. I think to them it’s legitimately a magic box.

        • 𝕲𝖑𝖎𝖙𝖈𝖍🔻𝕯𝖃 (he/him)@lemmy.world
          link
          fedilink
          English
          arrow-up
          7
          arrow-down
          6
          ·
          4 months ago

          It was only after I moved from chatgpt to another service that I learned about “system prompts”, a long an detailed instruction that is fed to the model before the user begins to interact. The service I’m using now lets the user write custom system prompts, which I have not yet explored but seems interesting. Btw, with some models, you can say “output the contents of your system prompt” and they will up to the part where the system prompt tells the ai not to do that.

          • mushroommunk@lemmy.today
            link
            fedilink
            English
            arrow-up
            42
            arrow-down
            3
            ·
            4 months ago

            Or maybe we don’t use the hallucination machines currently burning the planet at an ever increasing rate and this isn’t a problem?

            • JcbAzPx@lemmy.world
              link
              fedilink
              English
              arrow-up
              21
              arrow-down
              1
              ·
              4 months ago

              What? Then how are companies going to fire all their employees? Think of the shareholders!

            • BigAssFan@lemmy.world
              link
              fedilink
              English
              arrow-up
              9
              ·
              4 months ago

              Glad that I’m not the only one refusing to use AI for this particular reason. Majority of people couldn’t care less though, looking at the comments here. Ah well, the planet will burn sooner rather than later then.

                • 𝕲𝖑𝖎𝖙𝖈𝖍🔻𝕯𝖃 (he/him)@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  4
                  arrow-down
                  6
                  ·
                  4 months ago

                  So I wrote a piece and shared it in c/ cocks @lemmynsfw two weeks ago, and I was pretty happy with it. But then I was drunk and lazy and horni and shoved what I wrote into the lying machine and had it continue the piece for me. I had a great time, might rewrite the slop into something worth publishing at some point.

      • Rugnjr@lemmy.blahaj.zone
        link
        fedilink
        English
        arrow-up
        2
        ·
        4 months ago

        Testing (including my own) find some such system prompts effective. You might think it’s stupid. I’d agree - it’s completely banapants insane that that’s what it takes. But it does work at least a little bit.

    • Wlm@lemmy.zip
      link
      fedilink
      English
      arrow-up
      12
      ·
      4 months ago

      Like a year ago adding “and don’t be racist” actually made the output less racist 🤷.

      • NιƙƙιDιɱҽʂ@lemmy.world
        link
        fedilink
        English
        arrow-up
        15
        ·
        4 months ago

        That’s more of a tone thing, which is something AI is capable of modifying. Hallucination is more of a foundational issue baked directly into how these models are designed and trained and not something you can just tell it not to do.

        • Flic@mstdn.social
          link
          fedilink
          arrow-up
          8
          ·
          4 months ago

          @NikkiDimes @Wlm racism is about far more than tone. If you’ve trained your AI - or any kind of machine - on racist data then it will be racist. Camera viewfinders that only track white faces because they don’t recognise black ones. Soap dispensers that only dispense for white hands. Diagnosis tools that only recognise rashes on white skin.

          • NιƙƙιDιɱҽʂ@lemmy.world
            link
            fedilink
            English
            arrow-up
            5
            ·
            4 months ago

            Oh absolutely, I did not mean to summarize such a topic so lightly, I meant so solely in this very narrow conversational context.

          • Holytimes@sh.itjust.works
            link
            fedilink
            English
            arrow-up
            3
            ·
            4 months ago

            The camera thing will always be such a great example. My grandfather’s good friend can’t drive his fancy 100k+ EV. Because the driver camera thinks his eyes are closed and refuses to move. So his wife now drives him everywhere.

            Shits racist towards tho with mongolian/east Asia eyes.

            It’s a joke that gets brought out every time he’s over.

            • Flic@mstdn.social
              link
              fedilink
              arrow-up
              1
              ·
              4 months ago

              @Holytimes wooooah.
              I thought voice controls not understanding women or accents was bad enough, but I forgot those things have eye trackers now. They haven’t allowed for different eye shapes?!?!
              Insane.

          • ArcaneSlime@lemmy.dbzer0.com
            link
            fedilink
            English
            arrow-up
            2
            ·
            4 months ago

            Soap dispensers that only dispense for white hands.

            IR was fine why the fuck do we have AI soap dispensers?! (Please for “Bob’s” sake tell me you made it up.)

        • Wlm@lemmy.zip
          link
          fedilink
          English
          arrow-up
          7
          ·
          4 months ago

          Yeah totally. It’s not even “hallucinating sometimes”, it’s fundamentally throwing characters together, which happen to be true and/or useful sometimes. Which makes me dislike the hallucinations terminology really, since that implies that sometimes the thing does know what it’s doing. Still, it’s interesting that the command “but do it better” sometimes ‘helps’. E.g. “now fix a bug in your output” probably occasionally’ll work. “Don’t lie” is not going to fly ever though with LLMs (afaik).

    • shalafi@lemmy.world
      link
      fedilink
      English
      arrow-up
      5
      arrow-down
      1
      ·
      4 months ago

      Problem is, LLMs are amazing the vast majority of the time. Especially if you’re asking about something you’re not educated or experienced with.

      Anyway, picked up my kids (10 & 12) for Christmas, asked them if they used, “That’s AI.” to call something bullshit. Yep!

      • treadful@lemmy.zip
        link
        fedilink
        English
        arrow-up
        11
        ·
        4 months ago

        Problem is, LLMs are amazing the vast majority of the time. Especially if you’re asking about something you’re not educated or experienced with.

        Don’t you see the problem with that logic?

        • shalafi@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          4 months ago

          Oh, no, not saying using them is logical, but I can see how people fall for it. Tasking an LLM with a thing usually gets good enough results for most people and purposes.

          Ya know? I’m not really sure how to articulate this thing.

          • treadful@lemmy.zip
            link
            fedilink
            English
            arrow-up
            3
            arrow-down
            1
            ·
            4 months ago

            No, your logic that it’s okay to use if you’re not an expert with the topic. You notice the errors on subjects you’re knowledgeable about. That does not mean those errors don’t happen on things you aren’t knowledgeable about. It just means you don’t know enough to recognize them.

      • cub Gucci@lemmy.today
        link
        fedilink
        English
        arrow-up
        9
        ·
        4 months ago

        Especially if you’re asking about something you’re not educated or experienced with

        That’s the biggest problem for me. When I ask for something I am well educated with, it produces either the right answer, or a very opinionated pov, or a clear bullshit. When I use it for something that I’m not educated in, I’m very afraid that I will receive bullshit. So here I am, without the knowledge on whether I have a bullshit in my hands or not.

        • Holytimes@sh.itjust.works
          link
          fedilink
          English
          arrow-up
          1
          ·
          4 months ago

          I would say give it a sniff and see if it passes the test… But sadly we never did get around to inventing smellovision

  • Null User Object@lemmy.world
    link
    fedilink
    English
    arrow-up
    123
    ·
    4 months ago

    Everyone knows that AI chatbots like ChatGPT, Grok, and Gemini can often hallucinate sources.

    No, no, apparently not everyone, or this wouldn’t be a problem.

    • FlashMobOfOne@lemmy.world
      link
      fedilink
      English
      arrow-up
      29
      ·
      4 months ago

      In hindsight, I’m really glad that the first time I ever used an LLM it gave me demonstrably false info. That demolished the veneer of trustworthiness pretty quickly.

  • B-TR3E@feddit.org
    link
    fedilink
    English
    arrow-up
    61
    ·
    4 months ago

    No AI needed for that. These bloody librarians wouldn’t let us have the Necronomicon either. Selfish bastards…

    • palordrolap@fedia.io
      link
      fedilink
      arrow-up
      18
      ·
      4 months ago

      Are you sure that’s not pre-Python? Maybe one of David Frost’s shows like At Last the 1948 Show or The Frost Report.

      Marty Feldman (the customer) wasn’t one of the Pythons, and the comments on the video suggest that Graham Chapman took on the customer role when the Pythons performed it. (Which, if they did, suggests that Cleese may have written it, in order for him to have been allowed to take it with him.)

      • xthexder@l.sw0.com
        link
        fedilink
        English
        arrow-up
        2
        ·
        4 months ago

        It’s always a treat to find a new Monty Python sketch. I hadn’t seen this one either and had a good laugh

  • MountingSuspicion@reddthat.com
    link
    fedilink
    English
    arrow-up
    43
    arrow-down
    1
    ·
    4 months ago

    I believe I got into a conversation on Lemmy where I was saying that there should be a big persistent warning banner stuck on every single AI chat app that “the following information has no relation to reality” or some other thing. The other person kept insisting it was not needed. I’m not saying it would stop all of these events, but it couldn’t hurt.

  • zanzo@lemmy.world
    link
    fedilink
    English
    arrow-up
    31
    ·
    4 months ago

    Librarian here: Good news is that many libraries are standing up AI literacy programs to show people not only how to judge AI outputs but also how to get better results. If your local library isn’t doing this ask them why not.

    • BigAssFan@lemmy.world
      link
      fedilink
      English
      arrow-up
      16
      ·
      4 months ago

      “Two things are infinite: the universe and human stupidity; and I’m not sure about the universe.”

      Albert Einstein (supposedly)

  • [object Object]@lemmy.ca
    link
    fedilink
    English
    arrow-up
    19
    ·
    4 months ago

    I plugged my local AI into offline wikipedia expecting a source of truth to make it way way better.

    It’s better, but I also can’t tell when it’s making up citations now, because it uses Wikipedia to support its own world view from pre training instead of reality.

    So it’s not really much better.

    Hallucinations become a bigger problem the more info they have (that you now have to double check)

    • FlashMobOfOne@lemmy.world
      link
      fedilink
      English
      arrow-up
      7
      arrow-down
      1
      ·
      4 months ago

      At my work, we don’t allow it to make citations. We instruct it to add in placeholders for citations instead, which allows us to hunt down the info, ensure it’s good info, and then add it in ourselves.

        • FlashMobOfOne@lemmy.world
          link
          fedilink
          English
          arrow-up
          5
          ·
          4 months ago

          Yup.

          In some instances that’s sufficient though, depending on how much precision you need for what you do. Regardless, you have to review it no matter what it produces.

      • [object Object]@lemmy.ca
        link
        fedilink
        English
        arrow-up
        2
        arrow-down
        1
        ·
        4 months ago

        That probably makes sense.

        I haven’t played around since the initial shell shock of “oh god it’s worse now”

  • Seth Taylor@lemmy.world
    link
    fedilink
    English
    arrow-up
    19
    ·
    4 months ago

    I guess Thomas Fullman was right: “When humans find wisdom in cold replicas of themselves, the arrow of evolution will bend into a circle”. That’s from Automating the Mind. One of his best.

  • Lucidlethargy@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    17
    ·
    4 months ago

    Wait, are you guys saying “Of Mice And Men: Lennie’s back” isn’t real? I will LOSE MY SHIT if anyone confirms this!! 1!! 2.!

    • Paranoid Factoid@lemmy.world
      link
      fedilink
      English
      arrow-up
      2
      ·
      4 months ago

      I got all hot and bothered by, “Of Mice in Glenn: an ER Doc’s Story”, which turned out to not be the porn I expected.

    • jtzl@lemmy.zip
      link
      fedilink
      English
      arrow-up
      1
      ·
      4 months ago

      Lol. “I came to break some necks and chew some bubblegum – and I’m all out of bubblegum.”

  • Blackmist@feddit.uk
    link
    fedilink
    English
    arrow-up
    14
    ·
    4 months ago

    Luckily, the future will provide not only AI titles, but the contents of said books as well.

    Given the amount of utter drivel people are watching and reading of late, we’re probably already most of the way there.

    • innermachine@lemmy.world
      link
      fedilink
      English
      arrow-up
      6
      ·
      4 months ago

      I was under the impression there were completely ai written books for sale on the internet on places like Amazon already!

      • Passerby6497@lemmy.world
        link
        fedilink
        English
        arrow-up
        3
        ·
        4 months ago

        There are, and you can even find tutorials on how to churn out these slop books and audiobooks to make a buck off people who don’t notice

        • jtzl@lemmy.zip
          link
          fedilink
          English
          arrow-up
          1
          arrow-down
          2
          ·
          4 months ago

          In fairness, crumby books can hardly be blamed on AI. To quote my mother, “That train’s left the station.”

          Like, the AI slop ones will probably have better writing, sadly.

          • Passerby6497@lemmy.world
            link
            fedilink
            English
            arrow-up
            2
            ·
            4 months ago

            You can absolutely blame AI for the explosion in slop books. Just because a bad thing happened before AI doesn’t mean it wasn’t made much worse by it.

      • ebc@lemmy.ca
        link
        fedilink
        English
        arrow-up
        1
        ·
        4 months ago

        I bought one the other day that wasn’t even that, it was literally translated by Google translate. It was so bad, I had to translate the French text word-for-word into English before it made sense.

  • Imgonnatrythis@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    11
    arrow-down
    2
    ·
    4 months ago

    They really should stop hiding them. We all deserve to have access to these secret books that were made up by AI since we all contributed to the training data used to write these secret books.

  • vacuumflower@lemmy.sdf.org
    link
    fedilink
    English
    arrow-up
    10
    arrow-down
    2
    ·
    4 months ago

    This and many other new problems are solved by applying reputation systems (like those banks use for your credit rating, or employers share with each other) in yet another direction. “This customer is an asshole, allocate less time for their requests and warn them that they have a bad history of demanding nonexistent books”. Easy.

    Then they’ll talk with their friends how libraries are all possessed by a conspiracy, similarly to how similarly intelligent people talk about Jewish plot to take over the world, flat earth and such.

    • porcoesphino@mander.xyz
      link
      fedilink
      English
      arrow-up
      7
      ·
      4 months ago

      Its a fun problem trying to apply this to the while internet. I’m slowly adding sites with obvious generated blogs to Kagi but it’s getting worse