Just your daily reminder not to trust, or at the very least to fact-check, whatever ChatGPT spews out, because not only does it blatantly lie, it also makes stuff up way more often than you’d want to believe.

(btw Batrapeton doesn’t exist; it’s a fictional genus of Jurassic amphibians that I made up for a story I’m writing. They never existed in any way, shape or form, and there’s no trace of info about them online, yet here we are with ChatGPT going “trust me bro” about them lol)

  • Hawk@lemmy.dbzer0.com · ↑2 ↓1 · 6 hours ago

    ChatGPT learns from your previous threads.

    If you’re using ChatGPT for your writing, it probably used that as information to answer the question.

    When I asked it a similar question, it answered in a similar way.

    When I asked for sources, it spat out information about a very similar name, which it also seems to have used to describe the fictional species.

    When pressed a little more, it even linked this very post.

  • CheesyFox@lemmy.sdf.org · ↑9 ↓4 · 14 hours ago

    you just asked it to imagine what the nonexistent word would mean, then complained that it did its job?

    lmao

    like, I thought this community was for people sharing the hate for cheap corpo hype over AI, not trying to hype up the hate for an otherwise useful tool. You’re swaying from one extreme to another.

  • ɔiƚoxɘup@infosec.pub · ↑6 ↓1 · 14 hours ago

    I’m no AI proponent, but phrasing is important. “Would” should be replaced with “does”. “Would” specifically implies a request for speculation, or even for actively creative output.

    As in, if it existed, what would…

  • pedz@lemmy.ca · ↑18 · 1 day ago

    LLMs can’t say they don’t know. It’s better for the business to make up some bullshit than to just say “I don’t know”, because that would show how useless they can be.

    • DanVctr@sh.itjust.works · ↑7 · 1 day ago

      You’re right, but for a different reason as well. The way these models are trained is by “taking tests” over and over. Wrong answers, as well as saying “I don’t know”, both score a 0. Only the right answer is a 1.

      So it might get the question right by making stuff up or guessing, but admitting a gap in knowledge scores exactly the same as being wrong, so it never learns to say “I don’t know” (rough sketch of that scoring below).
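
      A toy illustration of that scoring scheme, in case it helps. This is just an assumption about how a simple binary grader might work, not any lab’s actual training pipeline, and the names and numbers are made up:

      ```python
      # Binary grading: a wrong answer and "I don't know" both score 0;
      # only a right answer scores 1, so there is no reward for admitting
      # uncertainty.
      def grade(answer: str, correct: str) -> int:
          return 1 if answer.strip().lower() == correct.strip().lower() else 0

      # With, say, 4 plausible options, blind guessing has an expected score
      # of 0.25 per question, while always answering "I don't know" earns 0.
      expected_if_guessing = 1 / 4
      expected_if_abstaining = grade("I don't know", "Temnospondyli")  # -> 0

      print(expected_if_guessing, expected_if_abstaining)
      ```

      Under that kind of rule, a model that bluffs simply looks better on the scoreboard than one that admits it doesn’t know.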

  • cronenthal@discuss.tchncs.de · ↑41 · 1 day ago

    Whenever someone confidently states “I asked ChatGPT…” in a conversation, I die a little inside. I’m tired of explaining this shit to people.

  • Prontomomo@lemmy.world · ↑23 · 1 day ago

    All LLMs act like improv artists: they almost never stop riffing, because they always say “yes, and”.

  • Tarquinn2049@lemmy.world · ↑15 ↓3 · edited · 1 day ago

    Your specific wording is telling it to make up an answer.

    “What would this word mean?” implies it doesn’t currently mean anything, so guess a meaning for it.

    But yes, in general always assume they don’t know what they are saying, as they aren’t really capable of knowing. They do a really good job of mimicking knowledge, but they don’t actually know.

    • Squirliss@piefed.social (OP) · ↑6 ↓1 · 1 day ago

      Yes, that’s true, and thanks for pointing it out. If I’m being honest, I wasn’t even sure whether Batrapeton was a valid name. The reason I was searching it up was to find a blatantly amphibian-coded name that wasn’t already a real creature someone had named and described, otherwise I’d have to go look for a different one. Every name I could come up with seemed to already be taken and described by someone or other, so I decided to google this one just in case, saw that there was nothing on it, and realised ChatGPT had just made it up. I wish AI had a way to tell the user “this is what it would possibly be, but it doesn’t actually exist” instead of just guessing like that.

  • ZDL@lazysoci.al · ↑18 · 1 day ago

    I’ve had LLMbeciles make up an entire discography, track list, and even lyrics of “obscure black metal bands” that don’t exist. It doesn’t take much to have them start to spew non-stop grammatically correct gibberish.

    I’ve also had them make up lyrics for bands and songs that actually exist; specifically, completely made-up lyrics for the song “One Chord Wonders” by The Adverts. And then, when I quoted the actual lyrics to correct them, they incorporated that into their never-ending hallucinations by claiming mine were from a special release for a television special, but that the album had their version.

    Despite their version and the real version having entirely different scansion.

    These things really are just hallucination machines.

  • James R Kirk@startrek.website · ↑16 · edited · 1 day ago

    Someone here said that LLM chatbots are always “hallucinating”, and it stuck with me. They happen to be correct a lot of the time, but they are always making stuff up. That’s what they do; that’s how they work.

    • Darkard@lemmy.world · ↑7 · 1 day ago

      They pin values to data and use a bit of magical stats to decide whether two values are related in any way and are relevant to what was asked. Then they fluff the data up with a bit of natural language, and there you go (there’s a toy sketch of that “relatedness” step at the end of this comment).

      It’s the same kind of algorithm that decides whether you want to see an advert about dog food influencers or catalytic converters in your area.

      Algorithmic Interpolation
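
      A toy sketch of that “are these two things related?” step, assuming simple vector embeddings and cosine similarity. The vectors and names below are invented purely for illustration; real systems are far more elaborate:

      ```python
      import math

      def cosine_similarity(a: list[float], b: list[float]) -> float:
          # Cosine similarity: close to 1.0 means the vectors point the same
          # way ("related"), close to 0.0 means they are unrelated.
          dot = sum(x * y for x, y in zip(a, b))
          norm_a = math.sqrt(sum(x * x for x in a))
          norm_b = math.sqrt(sum(x * x for x in b))
          return dot / (norm_a * norm_b)

      # Hypothetical 3-dimensional "embeddings", made up for this example.
      frog = [0.9, 0.1, 0.3]
      amphibian = [0.8, 0.2, 0.4]
      catalytic_converter = [0.1, 0.9, 0.0]

      print(cosine_similarity(frog, amphibian))            # high -> treated as related
      print(cosine_similarity(frog, catalytic_converter))  # low  -> treated as unrelated
      ```

      The same sort of similarity score drives both “what text comes next” and “which advert to show”, which is the point being made above.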

  • stinerman@midwest.social · ↑7 · 1 day ago

    Yes. One of the original examples of this was to make up a saying and ask it what it means, like “you can’t shave a cat until it has had its dinner.” It’ll make up what it means.

  • TrickDacy@lemmy.world (mod) · ↑2 · 1 day ago

    For a while I was thinking I might eventually use AI for more than code completion. But that looks less likely every day.

    • Squirliss@piefed.social (OP) · ↑2 · 1 day ago

      Yup. I didn’t expect it either. It’s like it searched up to a certain point to gather info, couldn’t find anything conclusive, so it made up the closest thing to what it found and called it a day. It does bullshit, but it does so very well.