Just your daily reminder not to trust, or at the very least to fact-check, whatever ChatGPT spews out, because not only does it blatantly lie, it also makes stuff up way more often than you’d want to believe.
(btw Batrapeton doesn’t exist; it’s a fictional genus of Jurassic amphibians that I made up for a story I’m writing. They never existed in any way, shape, or form, and there’s no trace of info about them online, yet here we are with ChatGPT going “trust me bro” about them lol)
ChatGPT learns from your previous threads.
If you’re using ChatGPT for your writing, it probably used that as information to answer the question.
When I asked it a similar question, it answered in a similar way.
When I asked for sources, it spat out information about a very similar name, which it also seems to have used to describe the fictional species.
When pressed a little more, it even linked this very post.
you just asked it to imagine what the nonexistent word would mean, then complained that it did its job?
lmao
like, I thought this community was for people sharing the hate for cheap corpo hype over AI, not trying to hype up the hate for an otherwise useful instrument. You’re swinging from one extreme to another.
Works as intended (if not as advertised)
just as always
I’m no AI proponent, but phrasing is important. “Would” should be replaced with “does”. “Would” implies a request for speculation specifically, or even for actively creative output.
As in, if it existed, what would…
Because AI is a predictive transformer/generator, not an infinite knowledge machine.
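To make that concrete, here’s a toy sketch of what “predictive generator” means in practice (every word and probability below is invented for illustration, not taken from any real model):

    import random

    # Toy sketch -- everything below is made up for illustration. A language
    # model only holds a probability distribution over plausible next words;
    # there is no fact store to consult, just likely-sounding continuations.
    next_words = {
        "a genus of temnospondyl amphibians": 0.5,
        "an extinct frog relative": 0.3,
        "a small Jurassic amphibian": 0.2,
    }

    def continue_text(probs):
        # Sample one continuation in proportion to its probability.
        return random.choices(list(probs), weights=list(probs.values()))[0]

    print("Batrapeton would be", continue_text(next_words))
    # Always fluent output, whether or not any of it is true.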
LLMs can’t say they don’t know. It’s better for the business to make up some bullshit than to just say “I don’t know”, because that would show how useless they can be.
You’re right, but there’s another reason too. The way these models are trained is by “taking tests” over and over: a wrong answer and “I don’t know” both score a 0, and only the right answer scores a 1.
So the model can score points by making stuff up and guessing right, but it will always be punished for admitting a gap in its knowledge.
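Rough toy version of that incentive, with a number invented just to show the arithmetic:

    # Toy illustration -- the probability is invented. Under a grader that
    # gives 1 for a right answer and 0 for everything else, guessing beats
    # abstaining in expectation, even when the guess is usually wrong.
    p_lucky_guess = 0.2  # chance a confident made-up answer happens to be right

    expected_score_guess = p_lucky_guess * 1 + (1 - p_lucky_guess) * 0  # = 0.2
    expected_score_idk = 0.0  # "I don't know" is graded exactly like a wrong answer

    print(expected_score_guess > expected_score_idk)  # True: guessing always wins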
Whenever someone confidently states “I asked ChatGPT…” in a conversation, I die a little inside. I’m tired of explaining this shit to people.
Same. I just quit trying to correct them after a point.
First time? This is indeed how LLMs work.
Bro, read the text under the GPT chat box. Lmao
I read it. I don’t think a community called “Fuck AI” needs a daily reminder. We all know it sucks!
I’m here for exactly these memes.
It’s ’cause you said “would”, not “does”.
All LLMs act like improv artists: they almost never stop riffing, because they always say “yes, and…”
But they’re not funny :(
Your specific wording is telling it to make up an answer.
What “would” this word mean? Implying it doesn’t mean anything currently, so guess a meaning for it.
But yes, in general always assume they don’t know what they are saying, as they aren’t really capable of knowing. They do a really good job of mimicking knowledge, but they don’t actually know.
Yes, that’s true, and thanks for pointing it out. If I’m being honest, I wasn’t even sure Batrapeton was a valid name. The reason I was searching it up was to find a blatantly amphibian-coded name that wasn’t already a real creature someone had named and described; otherwise I’d have to go look for a different name, and every name I could come up with seemed to already be taken and described by someone or other. So I decided to google it just in case, saw that there was nothing on them, and realized ChatGPT had just made that up. I wish AI had a thing where it could tell the user “this is what it would possibly be, but it doesn’t actually exist” instead of just guessing like that.
They always return an answer.
I’ve had LLMbeciles make up an entire discography, track list, and even lyrics of “obscure black metal bands” that don’t exist. It doesn’t take much to have them start to spew non-stop grammatically correct gibberish.
I’ve also had them make up lyrics for bands and songs that actually exist. Specifically, completely made-up lyrics for the song “One Chord Wonders” by The Adverts. And then, when I quoted the actual lyrics to correct them, they incorporated that into their never-ending hallucinations by claiming that was a special release for a television special, but that the album had their version.
Despite their version and the real version having entirely different scansion.
These things really are just hallucination machines.
Someone here said that LLM chatbots are always “hallucinating” and it stuck with me. They happen to be correct a lot of the time, but they are always making stuff up. That’s what they do; that’s how they work.
They pin values to data and use a bit of magical stats to decide whether two values are related in any way and relevant to what was asked. Then they fluff the data up with a bit of natural language, and there you go.
It’s the same algorithms that decide if you want to see an advert about dog food influencers or catalytic converters in your area.
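If anyone’s curious, the “magical stats” is roughly similarity scoring between lists of numbers. A toy sketch with invented vectors (not any real model’s internals):

    import math

    # Toy sketch -- the vectors below are invented. "Related" typically boils
    # down to a similarity score between numeric representations of two things.
    def cosine_similarity(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(y * y for y in b))
        return dot / (norm_a * norm_b)

    query = [0.9, 0.1, 0.3]  # pretend embedding of what you asked about
    item = [0.8, 0.2, 0.4]   # pretend embedding of some stored data

    print(round(cosine_similarity(query, item), 3))  # ~0.98: "related enough"
    # The same kind of scoring decides dog-food ads vs. catalytic converters.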
Algorithmic Interpolation
Artificial Imagination
Yes. One of the classic demonstrations of this is to make up a saying and ask it what it means, like “you can’t shave a cat until it has had its dinner.” It’ll make up a meaning for it.
For a while I was thinking I might eventually use AI as more than a code completer. But that looks less likely every day.
I have to give it props for dropping in “dissorophoid temnospondyl” which I figured even odds on also being made up, but it is not!
Yup. I didn’t expect it either. It’s like it searched up to a certain point to gather info, couldn’t find anything conclusive, so it made up the closest thing to what it found and called it a day. It does bullshit, but it does so very well.