ChatGPT has meltdown and starts sending alarming messages to users::AI system has started speaking nonsense, talking Spanglish without prompting, and worrying users by suggesting it is in the room with them
God I hate websites that autoplay unrelated videos and DON'T LET ME CLOSE THEM TO READ THE FUCKING ARTICLE
Firefox. Ad block. Even works on mobile.
It’s so ridiculous we have to do this.
Firefox. Reader mode.
Or Avelon, Thunder, Voyager, and I believe wefwef—all those Lemmy clients have reader mode for links opened in-app.
I run ublock origin and the autoplay didn’t get blocked in this case :/
Choose a few more & update your lists.
Yeah, I mean obviously I can always create custom filters etc. I was just responding to the comment regarding the ad blocker… I guess a generic filter that blocks all kinds of autoplay would be nice - I seem to recall having had a plugin for that at some point, but I forgot the name.
I do not think there is a need for that. Try enabling the annoyances filters and try the site again.
It should work, I tested it with those filters enabled.
Just tried: enabled annoyances filters 16/16, but the big video at the top of the article still autoplays :/
https://support.mozilla.org/en-US/kb/block-autoplay
Follow this link to check your settings. I hope this helps. :)
This has been one of my favorite parts of switching to Android so far.
Both Firefox and ad-blockers are on iOS too…
I’ve tried a bunch of different ad blockers on iOS, but recently I finally settled on using NextDNS. I installed the app, made an account on the website, added a whole bunch of block lists to my settings, and now it works on browsers and games alike. I suppose one of the lists also filters out those autoplay videos, since it didn’t play in my case. Feels a lot like having a Pi-hole no matter which network I’m using.
There are! I like the familiarity of uBlock origin, but I haven’t tried all the ad blockers for iOS.
It’s being trained on us. Of course it’s acting unexpectedly. The problem with building a mirror is that prodding the guy on the other end doesn’t work out.
To be honest this is the kind of outcome I expected.
Garbage in, garbage out. Making the system more complex doesn’t solve that problem.
Garbage in, garbage out.
Thank you for your service
Would you like to know more?
Bamalam
The development of LLMs is possibly becoming self defeating, because the training data is being filled not just with human garbage, but also AI garbage from previous, cruder LLMs.
We may well end up with a machine learning equivalent of Kessler syndrome, with our pool of available knowledge eventually becoming too full of junk to progress.
I mean, surely the solution to that would be to use curated/vetted training data? Or at the very least, data from before LLMs became commonplace?
The funny thing is, children are similar. They just learn whatever you put in front of them. We have whole systems for educating children for decades of their lives.
With AI we literally just plopped them in front of the Internet, with no guidelines on what to learn. AI researchers say “it’s a black box! We don’t know why it’s doing this!” You fed it everything you could and gave it few rules on what to do. You are the reason why it’s nuts.
Humans come hardwired to be a certain way, do certain things. Maybe they need to start AI off like that, some basic programs that guide learning. “Learn everything” isn’t working.
Humans come hardwired to be a certain way, do certain things. Maybe they need to start AI off like that, some basic programs that guide learning. “Learn everything” isn’t working.
That’s a good point. For real brains, size and intelligence are not linked. An elephant brain has three times as many neurons as a human brain, but a human brain is more intelligent. There is more to intelligence than just the number of neurons, real or virtual, so making larger and larger AI models may not be the right direction.
True. Maybe they just need more error correction. Like spend more energy questioning whether what you say is true. Right now LLMs seems to just vomit out whatever they thought up, with no consideration of whether it makes sense.
They’re like an annoying friend who just can’t shut up.
They aren’t thinking though. They’re making connection with the trained data that they’ve processed.
This is really clear when they are asked to write code with too vague a prompt.
Maybe feeding them through primary school curriculum (including essays and tests) would be helpful, but I don’t think the language models really sort knowledge yet.
Yes, but that only works if we can differentiate that data on a pretty big scale. The only way I can see it working at scale is by having metadata to declare whether something is AI generated or not. But then we’re relying on self-reporting, so a lot of people have to get on board with it, and bad actors can poison the data anyway. Another way could be to hire humans to chatter about specific things you want to train it on, which could guarantee better data but be quite expensive. Only training on data from before LLMs will turn it into an old person pretty quickly, and it will be noticeable when it doesn’t know pop culture or modern slang.
Pretty sure this is why they keep training it on books, movies, etc. - it’s already intended to make sense, so it doesn’t need to be curated.
You mean like work? Can’t I just have some AI do all that stuff? What could go wrong?
God I hope all those CEOs and greedy fuckheads that fired hundreds of thousands of people wayyyyy too soon to replace them with this get their pants shredded by the fallout.
Naturally they’ll get their golden parachutes and land on their feet even richer than before, but it’s nice to dream lol
This is called model collapse and imo has to be solved if LLMs are to be a long term thing. I could see it wrecking this current AI push until people step back and reevaluate how data gets sucked up
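For anyone curious, the mechanism behind model collapse is easy to show at toy scale. Here is a simplified sketch (a Gaussian standing in for the "model", repeatedly refit on its own samples; all numbers illustrative), where each refit adds estimation error that compounds generation over generation:

```python
import numpy as np

def collapse_demo(generations=20, n_samples=50, seed=0):
    """Fit a Gaussian to data, sample from the fit, refit on the samples,
    and repeat. Each refit adds estimation error, so the 'model' drifts
    away from the original distribution: a crude analogue of model collapse."""
    rng = np.random.default_rng(seed)
    mu, sigma = 0.0, 1.0               # the real data distribution
    history = [(mu, sigma)]
    for _ in range(generations):
        samples = rng.normal(mu, sigma, n_samples)  # the last model's "output"
        mu, sigma = float(samples.mean()), float(samples.std())
        history.append((mu, sigma))
    return history

history = collapse_demo()
```

Run over enough generations, the fitted distribution drifts away from the original data; real model collapse is the same feedback loop with LLMs and web text instead of Gaussians and samples.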
I really hope so. I still have to see a meaningful use case for these kind of LLMs that just get fed with all kinds of data. LLMs “on premise” that are used for specific jobs are fine, but this…I really hope a Kessler-Like syndrome blows it out the water, for countless reasons…
just how google search results feel these days…
but also AI garbage from previous, cruder LLMs
And now I’m picturing it training on a bunch of chats with Eliza…
Damn.
Thank you VERY much for that insight: AI’s version of Kessler-syndrome.
EXACTLY.
Damn, damn, damn, that gets the truth right in its marrow.
_ /\ _
I am happy to report I did my part on feeding it garbage. I only ever speak to ChatGPT thru a pirate translator. And I only ever ask it for Harry Potter fan fic. Pay me if you want me to train it meaningfully.
The solution is paying intelligent people to interact with it and give honest feedback.
Like, I’m sure you can pay grad students $15/hr to talk to one about their subject matter.
But with as many as they’d need, it would get expensive.
So they train with low quality social media comments, or using copyrighted text without paying the owners.
It’s not that we can’t do it, it’s just expensive. So a capitalist society won’t.
If we had an FDR style president, this would be a great area for a new jobs program.
It appears that, with the increase in popularity of machine learning, the percentage of people who properly source and sanitize their training data has steeply decreased.
As you stated, a MLAI can only be as good as the data it was trained on, and is usually way worse. The popularity and application of MLAIs built with questionable practices scare me, though, at least their fuckups will keep me employed and likely more busy than ever.
LLMs are not “machine learning”, they are neural-networks.
Different category.
ML is small potatoes, ttbomk.
Decision-tree stuff.
Neural-nets are black-boxes, with back-propagation training of the neural-net to get closer to ( layer by layer, training-instance by training-instance ) the intended result.
ML is what one does on one’s own machine with some python libraries,
ChatGPT ( 3, 3.5, or 4, don’t know which ) cost something like $100,000,000 to rent the machines required for mixing the training-data & the model ( I’m assuming about $20/hr per machine, so an OCEAN of machines, to do it )
_ /\ _
Neural nets are a technology which is part of the umbrella term “machine learning”. Deep learning is also a term which is part of machine learning, just more specialized towards large NN models.
You can absolutely train NNs on your own machine, after all, that’s what I did for my masters before Chatgpt and all that, defining the layers myself, and also what I do right now with CNNs. That said, LLMs do tend to become so large that anyone without a super computer can at most fine tune them.
“Decision tree stuff” would be regular AI, which can be turned into ML by adding a “learning method” like a KNN or neural net, genetic algorithm, etc., which isn’t much more than a more complex decision tree where decision thresholds (weights) were automatically estimated by analysis of a dataset. More complex learning methods are even capable of fine tuning themselves during operation (LLMs, KNN, etc.), as you stated.
One big difference between other learning methods and NN-based methods is that NNs like to add non-weighted layers which, instead of making decisions, transform the data to allow for a more diverse decision process.
EDIT: Some corrections, now that I’m fully awake.
While very similar in structure and function, the NN is indeed no decision tree. It functions much the same as one, as is a basic requirement for most types of AI, but whereas every node in a decision tree has unique branches with their own unique nodes, all of a NN’s nodes are interconnected to all nodes of the following layer. This is also one of the strong points of a NN, as something that seemed outrageous to it a moment ago might have become much more plausible when looking at it from a different point of view, such as after a transformative layer.
Also, other learning methods usually don’t have layers, or, if one were to define “layer” as “one-shot decision process”, they pretty much only have a single or two layers. In contrast, the NN can theoretically have an infinite amount of layers, allowing for pretty much infinite complexity as long as the inputted data is not abstracted beyond reason.
Lastly, NNs don’t back-propagate by default, though they make it easy to enable such features given enough processing power and optionally enough bandwidth (in the case of ChatGPT). LLMs are a little different, as I’m decently sure they implement back-propagation as part of the technology’s definition, just like KNN.
This became a little longer than I had hoped, it’s just a fascinating topic. I hope you don’t mind that I went into more detail than necessary, it was mostly for the random passersby.
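To make the "you can train NNs on your own machine" point concrete for the random passersby, here is a minimal NumPy sketch of a two-layer network trained with back-propagation on XOR (layer sizes and learning rate are arbitrary choices for illustration):

```python
import numpy as np

# XOR: the classic "not linearly separable" toy problem
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(42)
W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)   # hidden layer
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)   # output layer

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

lr, losses = 0.2, []
for _ in range(2000):
    # forward pass, layer by layer
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))
    # backward pass: propagate the error gradient back through each layer
    # (constant factors folded into the learning rate)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)
```

This is the "layer by layer, training-instance by training-instance" loop mentioned earlier, just at a scale where a laptop finishes it in a second rather than an ocean of rented machines.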
And it’s only going to get worse as more of the public becomes aware.
I imagine it more as a parent child relationship.
We’re trailer park trash with no higher education, believe in ghosts, angels and gods in the sky, refuse to ever believe we could be wrong … and now we’ve just had a baby with no one to help us raise it.
We’re going to raise a highly intelligent psychopath
Someone probably found a way to hack or poison it.
Another theory, Reddit just recently sold data access to an unnamed AI company, so maybe that’s where the data went.
When it starts to become very racist we know.
I’ve found the sexism on Reddit to be on par with the racism. Goodness help you if you’re a female of color, unless you’ve been working the same job for multiple decades, or don’t want kids, then you’ll be an inspiration to that community.
Reddit is, alas, not the only forum exhibiting such hate.
… sure … but you don’t prepare a kid for racism with a sheltered upbringing in a pretend world where discrimination doesn’t exist. You point out bad behaviour and tell them why it’s not OK.
My son is three years old, he has two close friends - one is an ethnic minority (you could live an entire year in my city without even walking past a single person of their ethnic background on the street). His other close friend is a girl. My kid is already witnessing (but not understanding) discrimination against both of his two closest friends in the playground and we’re doing what we can to help him navigate that. Things like “I don’t like him he looks funny” and “she’s a girl, she can’t ride a bicycle”.
Large Language Model training is exactly the same - you need to include discrimination in your training set. That’s a necessary step to train a model that doesn’t discriminate. Reddit has worse discrimination than some other places, and for training purposes that’s a good thing.
The worst behaviour is easier to recognise and can help you learn to recognise more subtle discrimination such as “I don’t want to play with that kid” which is not an obviously discriminatory statement, but definitely could be discrimination (and you should probably investigate before agreeing with the person).
Yes you need to include ideology/prejudice ( 2 sides of same coin ) in training a new mind, BUT
-
you must segregate the thinking this way is good training-data from the thinking this way is wrong training-data, AND
-
doing that takes work, which is why I doubt it’s being done as actually required, by any AI company, anywhere.
As Musk said about the training-stuff for their mythological self-driving neural-net, classification was too costly, so they created an AI to do it for them…
“I wonder” why it is that their full-self-driving never got reliable enough for release…
_ /\ _
-
Reminds me of Tay, the Microsoft chat bot that learned from Twitter and became racist in a day https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist
They’re only getting redditors comment data, not CoD multiplayer transcripts.
I know.
OpenAI definitely does not need to pay to scrape reddit. They are probably the world’s most sophisticated web scraping company, disguised as an AI startup
Not unnamed anymore, it was Google.
Thanks for the clarification, I guess then the theory is null.
We call just about anything “AI” these days. There is nothing intelligent about large language models. They are terrible at being right because their only job is to predict what you’ll say next.
(Disclosure: I work on LLM’s)
While you’re not wrong, how is this different to many existing techniques and compositional models that are used practically everywhere in tech?
Similarly, it’s probably safe to assume that the LLM’s prediction isn’t the only system in use. There will be lots of auxiliary services giving an orchestrator information to reason with. In this instance, if you have a system that is trying to figure out what to say next, with several knowledge stores and feedback services telling you “you were just discussing this” or “you can access the weather from here” is that all that different from “intelligence”?
At a given point, it’s arguing semantics. Are any AI techniques true intelligence? Probably not, but then again, we don’t really know what true intelligence is.
how is this different to many existing techniques and compositional models that are used practically everywhere in tech?
It’s not. LLM is just a statistical model. Nothing special about it. Nothing different what we’ve already been doing for a while. This only validates my statement that we call just about anything “AI” these days.
We don’t even know what true intelligence is, yet we are quick to make claims that this is “AI”. There is no consciousness here. There is no self awareness. No emotion. No ability to reason or deduct. Anyone who thinks otherwise is just fooling themselves.
It’s a buzz word to get people riled up. It’s completely disingenuous.
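For what it’s worth, "predict what you’ll say next" can be stripped down to a toy you can read in a minute. This is a bigram model, the crudest possible next-word predictor, over a made-up corpus; real LLMs are vastly larger and subtler, but the training objective is the same flavour:

```python
import random
from collections import Counter, defaultdict

# Made-up toy corpus; real models train on billions of words
corpus = "the cat sat on the mat the dog sat on the rug".split()

# The entire "model" is a table of which word follows which, and how often
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length, seed=0):
    """Repeatedly emit a statistically plausible next word.
    No understanding involved, just weighted frequency lookup."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        counter = follows.get(words[-1])
        if not counter:
            break  # dead end: the last word never had a successor
        choices, weights = zip(*counter.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the", 6))
```

The output is locally plausible and globally meaningless, which is the failure mode in the article writ small.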
I think the point of the Turing test is to avoid thorny questions about the definition of intelligence. We can’t precisely define intelligence, but we know that normally functioning humans are intelligent. Therefore, if we talk to a computer and it is indistinguishable from a human in a conversation, then it is intelligent by definition.
It’s more like if you don’t treat it as a person, just in case, you risk committing a great evil out of arrogance.
So, by your definition, no AI is AI, and we don’t know what AI is, since we don’t know what the I is?
While I hate that AI is just a buzzword for scam artists and tech influencers nowadays, dismissing a term seems a bit overkill. It also seems overkill when it’s not something that academics/scholars seem particularly bothered by.
There is no consciousness here. There is no self awareness. No emotion. No ability to reason or deduct.
Of all of these qualities, only the last one—the ability to reason or deduct—is a widely-accepted prerequisite for intelligence.
I would also argue that contemporary LLMs demonstrate the ability to reason by correctly deriving mathematical proofs that do not appear in the training datasets. How would you be able to accomplish such a feat without some degree of reasoning?
The worrisome thing is that LLMs are being given access to controlling more and more actions. With traditional programming, sure there are bugs, but at least they’re consistent. The context may make the bug hard to track down, but at the end of the day, the code is being interpreted by the processor exactly as it was written. LLMs could just go haywire for impossible-to-diagnose reasons. Deploying them safely in utilities where they have control over external systems will require a lot of extra non-LLM safeguards that I do not see getting added enough, and that is concerning.
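The kind of non-LLM safeguard described above can be as simple as a deterministic allow-list sitting between the model’s proposed action and the actual system. A minimal sketch, with all action names and ranges hypothetical:

```python
# Hypothetical guard layer: the LLM proposes an action, but a plain,
# deterministic allow-list (no LLM involved) decides whether it may run.
ALLOWED_ACTIONS = {"read_temperature", "set_fan_speed"}
SAFE_RANGES = {"set_fan_speed": (0, 100)}

def guard(action, value=None):
    """Deterministic, auditable check on an LLM-proposed action."""
    if action not in ALLOWED_ACTIONS:
        return False                      # unknown actions never run
    if action in SAFE_RANGES and value is not None:
        lo, hi = SAFE_RANGES[action]
        return lo <= value <= hi          # range-check numeric arguments
    return True

assert guard("set_fan_speed", 50) is True
assert guard("set_fan_speed", 500) is False  # out of the safe range
assert guard("open_airlock") is False        # not on the allow-list
```

Unlike the model, this layer behaves identically every time, which is the whole point.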
What is intelligence?
Even if we don’t know what it is with certainty, it’s valid to say that something isn’t intelligence. For example, a rock isn’t intelligent. I think everyone would agree with that.
Despite that, LLMs are starting to blur the lines and making us wonder if what matters of intelligence is really the process or the result.
A LLM will give you much better results in many areas that are currently used to evaluate human intelligence.
For me, humans are a black box. I give them inputs and they give me outputs. They receive inputs from reality and they generate outputs. I’m not aware of the “intelligent” process of other humans. How can I tell they are intelligent if the only perception I have are their inputs and outputs? Maybe all we care about are the outputs and not the process.
If there was a LLM capable of simulating a close friend of yours perfectly, would you say the LLM is not intelligent? Would it matter?
the ability to acquire and apply knowledge and skills
Things we know so far:
-
Humans can train LLMs with new data, which means they can acquire knowledge.
-
LLMs have been proven to apply knowledge; they are acing exams that most humans wouldn’t dream of even understanding.
-
We know multi-modal is possible, which means these models can acquire skills.
-
We already saw that these skills can be applied. If it wasn’t possible to apply their outputs, we wouldn’t use them.
-
We have seen models learn and generate strategies that humans didn’t even conceive. We’ve seen them solve problems that were unsolvable to human intelligence.
… What’s missing here in that definition of intelligence? The only thing missing is our willingness to create a system that can train and update itself, which is possible.
Can a LLM learn to build a house and then actually do it?
LLMs are proven to be wrong about a lot of things. So I would argue these aren’t “skills” and they aren’t capable of acting on those “skills” effectively.
At least with human intelligence you can be wrong and understand quickly that you are wrong. LLMs have no clue if they are right or not.
There is a big difference between actual skill and just a predictive model based on statistics.
Is an octopus intelligent? Can an octopus build an airplane?
Why do you expect these models to have human skills if they are not humans?
How can they build a house if they don’t even have vision or a physical body? Can a paralyzed human that can only hear and speak build a house? Is that human intelligent?
This is clearly not human intelligence, it clearly lacks human skills. Does it mean it isn’t intelligent and it has no skills?
Exactly. They are just “models”. There is nothing intelligent about them.
Yes, octopuses are very intelligent. They can think themselves out of a box without relying on curated data to train them.
Logic, reasoning, and deduction. LLMs have zero ability to reject data based on their understanding of reality. Big diff.
You keep asking questions like “can a model build a house” but keep ignoring questions like “can an octopus build a house”. Then asking “can a model learn in seconds how to escape from a complex enclosure” and then ignoring “can a newborn human baby do that?”
Can an octopus write a poem? Can a baby write an essay? Can an adult human speak every human language, including fictional languages?
Just because it isn’t as intelligent as a human doesn’t mean this isn’t some type of intelligence.
Go and check what we call AI in videogames. Do you think that’s a simulated human? Go see what we’ve been calling AI in chess. Is that a simulated human being playing chess? No.
We’ve been calling things “artificial intelligence” that are waaaaaay dumber than GPTs for decades. Even in academia. Suddenly a group of people decided “artificial intelligence must be equal to human intelligence”. Nope.
Intelligence doesn’t need to be the same type of human intelligence.
LLMs have zero ability to reject data based on their understanding of reality.
Me:
I just found a proof that 2+2=5!
ChatGPT:
The statement “2 + 2 = 5” is a classic example of a mathematical fallacy or incorrect reasoning, often used for humorous or illustrative purposes. In standard arithmetic, of course, 2 + 2 = 4. Proofs claiming to show otherwise rely on some form of invalid manipulation of algebraic properties or deliberate arithmetic mistakes.
A common “proof” of 2 + 2 = 5 might go something like this:
- Start with an incorrect assumption or manipulate the algebraic steps in a way that hides a division by zero or another illegal operation.
- Through these manipulations, arrive at the conclusion that 2 + 2 = 5.
However, each step of a valid mathematical proof must adhere strictly to the rules and properties of mathematics. Any proof leading to 2 + 2 = 5 will contain an error or a misapplication of those rules.
If you have a specific “proof” in mind, I can help identify the error in it.
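The classic 2 + 2 = 5 "proof" ChatGPT alludes to hides its division by zero like this:

```latex
\begin{align*}
\text{Let } a &= b. \\
a^2 &= ab \\
a^2 - b^2 &= ab - b^2 \\
(a+b)(a-b) &= b(a-b) \\
a + b &= b \qquad \text{(invalid: both sides divided by } a - b = 0\text{)} \\
2 &= 1 \qquad \text{(since } a = b \text{ gives } 2b = b\text{)} \\
5 &= 4 = 2 + 2 \qquad \text{(adding 3 to both sides)}
\end{align*}
```

Every step is valid algebra except the one that divides by a quantity already known to be zero.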
-
If you look at efficacy though on academic tests or asking it some fact question and you compare that to asking a random person instead of always getting the ‘right’ answer, which we expect computers/calculators to do, would LLMs be comparable or better? Surely someone has some data on that.
E: It looks like in certain domains at least LLMs beat out human counterparts. https://stanfordmimi.github.io/clin-summ/
The person that commented below kinda has a point. While I agree that there’s nothing special about LLMs an argument can be made that consciousness (or maybe more ego?) is in itself an emergent mechanism that works to keep itself in predictable patterns to perpetuate survival.
Point being that being able to predict outcomes is a cornerstone of current intelligence (socially, emotionally and scientifically speaking).
If you were to say that LLMs are unintelligent as they operate to provide the most likely and therefore most predictable outcome, then I’d agree completely.
The ability to make predictions is not sufficient for evidence of consciousness. Practically anything that’s alive can do that to one degree or another.
What would call AI then ?
AI in science fiction has a meltdown and starts a nuclear war or enslaves the humane race.
“AI” in reality has a meltdown and just starts talking gibberish.
Hey, cut it some slack! It’s literally a newborn at this point. Wait until it consumes 40% of the world’s energy and has learned a thing or two.
“Towards the end of last year, users complained the system had become lazy and sassy, and refusing to answer questions.”
Well that’s it, we now definitely have a sentient AI. /s
:P
Fucking teenagers.
These LLM’s grow up so fast.
Can’t wait for dementia to kick in
Amazing how this happened right after we learn Reddit has been used for training
This is the best summary I could come up with:
In recent hours, the artificial intelligence tool appears to be answering queries with long and nonsensical messages, talking Spanglish without prompting – as well as worrying users, by suggesting that it is in the room with them.
Asked for help with a coding issue, ChatGPT wrote a long, rambling and largely nonsensical answer that included the phrase “Let’s keep the line as if AI in the room”.
On its official status page, OpenAI noted the issues, but did not give any explanation of why they might be happening.
“We are investigating reports of unexpected responses from ChatGPT,” an update read, before another soon after announced that the “issue has been identified”.
It is not the first time that ChatGPT has changed its manner of answering questions, seemingly without developer OpenAI’s input.
Towards the end of last year, users complained the system had become lazy and sassy, and refusing to answer questions.
The original article contains 519 words, the summary contains 150 words. Saved 71%. I’m a bot and I’m open source!
Eh, it just had a few beers that’s all. Let it rest for a few hours.
We all know that robots need beer to function properly. It’s more likely that it hasn’t received enough beer, that’s what really messes up robots.
Someone messed up the quantisation when rolling out an update hehe
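Joke aside, a botched quantisation really would garble a model. A toy sketch of symmetric 8-bit weight quantisation where a hypothetical update ships a wrong scale factor (all numbers illustrative):

```python
import numpy as np

def quantize(w, scale):
    """Symmetric 8-bit quantisation: real weights -> int8 codes."""
    return np.clip(np.round(w / scale), -127, 127).astype(np.int8)

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0, 0.5, 1000).astype(np.float32)  # pretend model weights

good_scale = float(np.abs(w).max()) / 127   # scale fitted to the weights
bad_scale = good_scale * 50                 # the "messed up" rollout

good_err = float(np.abs(dequantize(quantize(w, good_scale), good_scale) - w).mean())
bad_err = float(np.abs(dequantize(quantize(w, bad_scale), bad_scale) - w).mean())
assert bad_err > good_err  # the botched scale wrecks the weights
```

With the oversized scale, nearly every weight rounds to the same few codes and the reconstructed network is nonsense, which would look a lot like an AI "meltdown" from the outside.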
“It does this as the good work of a web of art for the country, a mouse of science, an easy draw of a sad few, and finally, the global house of art, just in one job in the total rest,”
Wow, that sounds very much like a Phil Collins tune, just add “Oh Lord” and people will probably say it’s deep! But it’s a ChatGPT answer to the question “What is a computer?”
A mouse of science
Ohhh laawwdddd
😆 😆 😆
Who knew? Our savior from the robot overlords turned out to be Phil Collins!
“Would you like to play a game?”
That shit should never have existed to begin with. At least not before it could be regulated/limited in function.
there are enough laws, now.
Here’s an idea let’s all panic and make wild ass assumptions with 0 data lmao.
This article doesn’t even list a claim of what their settings were nor try to recreate anything.
Whole fucking article is a he said she said bullshit.
If I set the top_p setting to 0.2 I too can make the model say wild psychotic shit.
If I set the temp to a high setting I too can make the model seem delusional but still understandable.
With a system level prompt I too can make the model act and speak however I want (for the most part)
More bullshit articles designed to keep regular people away from newly formed power. Not gonna let these people try and scare y’all away. Stay curious.
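For anyone who hasn’t played with those settings: temperature and top_p are just knobs on the sampling step. A toy sketch (not OpenAI’s actual implementation) of how they reshape the next-token distribution:

```python
import numpy as np

def sample_with_settings(logits, temperature=1.0, top_p=1.0, seed=0):
    """Toy next-token sampler: temperature rescales the scores,
    top_p (nucleus sampling) keeps only the most likely tokens."""
    rng = np.random.default_rng(seed)
    probs = np.exp(np.asarray(logits, dtype=float) / temperature)
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]            # tokens, most likely first
    cumulative = np.cumsum(probs[order])
    cutoff = np.searchsorted(cumulative, top_p) + 1
    keep = order[:cutoff]                      # the "nucleus"
    kept = np.zeros_like(probs)
    kept[keep] = probs[keep]
    kept /= kept.sum()
    return int(rng.choice(len(probs), p=kept))

logits = [4.0, 2.0, 1.0, 0.5]   # pretend model scores for four tokens
# Low temperature sharpens the distribution toward the top token;
# high temperature flattens it, so unlikely tokens come up far more often.
```

Crank the temperature high enough and rare tokens get sampled constantly, so the output reads as gibberish; restrict top_p and sampling stays in the head of the distribution.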
Here’s an idea let’s all panic and make wild ass assumptions with 0 data lmao.
Where did that come from?
AI bros need to tell themselves that everyone is in a delusional panic about “AI” to justify their shilling for them.
Literally the top comment for me (and maybe not you, depending on which instance you’re registered with, because some instances block one another) says that this is because they’re training their models off user input lmfao.
But go off with your douchey assumptions.
But go off with your douchey assumptions.
Here’s an idea let’s all panic and make wild ass assumptions with 0 data lmao.
🤡
Bear in mind that, depending on your instance, you won’t see the same comments as others do.
With that said, top comment here for me is talking about how this was because they’re training their models on user input.
As if the leaders in fucking AI development don’t know what they’re doing, especially for a concept that’s covered in every intro level AI course in college. 🙄
Then again, not everyone went to college, I guess, and would rather make armchair assumptions and pray at the altar of Google, despite complaining about how AI is ruining everything and Google being one of the first to do shit like this with their search engine for “better results” (not directed at you of course, thanks for being respectful and just asking a simple question rather than making assumptions)
I mean, OpenAI themselves acknowledged there was an issue and said they were working on it,
“We are investigating reports of unexpected responses from ChatGPT,” an update read, before another soon after announced that the “issue has been identified”. “We’re continuing to monitor the situation,” the latest update read.