Bad humans are prompting these AI engines. Still gotta fix that. You know, root of the problem. I can tell you as an older human, misinformation has been supercharged in every election. But yeah, let’s blame AI this time around so we don’t have to figure out the tough problem.
Correct. AI is simply a tool. People need to get their heads around this and stop perceiving it as some sentient magical entity with rogue prerogatives and uncontested liberties.
Whenever AI does something whack, that was a human. Everything it knows and does comes from the knowledge and instructions of humans. It’s us. If AI produces misinformation, it’s simply doing what it was taught and instructed by someone, and there lies the source of bullshit.
The problem isn’t the misinformation itself, it’s the rate at which misinformation is produced. Generative models lower the barrier to entry so anyone in their living room somewhere can make deepfakes of your favourite politician. The blame isn’t on AI for creating misinformation, it’s for making the situation worse.
Fallible humans are building them in the first place.
No LLM - masquerading as AI - is free of biases.
That’s not to say that ‘bad’ people prompting biased LLMs is not an issue, it very much is, but even ‘good’ people are not going to get objective results.
Sometimes I wonder if the clown music is just in my head or if it’s the theme music for the past few years.
The biggest misinformation comes from Fox or related ventures in other countries. No AI or deepfakes needed, just classical oligarchic propaganda. But yeah, let’s let the guys willing to watch the world burn for slightly higher profit margins tell us what the big problems in the world are today.
The issue is that I know Fox has a bias, but AI pulls on this misinformation and remixes it, which makes it harder to know whether it is true or not. Add to that the amount of AI-generated garbage on the internet, and good information is getting harder for your average person to find. So while you may have correctly identified a major source of the misinformation, AI masks the source and disseminates it far and wide.
Let’s say the New York Times publishes multiple articles and opinion pieces stating that a certain country in the Middle East has weapons of mass destruction, and in the following 20 years a million people die a violent death in said country. Would you blame this on the printing press, on the people delivering the newspapers, on the word processing software used to write these articles, or on the people willingly pushing lies? The same goes for climate change misinformation or smoking health effects misinformation (and many more examples).
Capital has interests and is willing to go quite far pushing them, the tools they use change but the methods and the culprits stay the same.
Misinformation has been an issue in the public consciousness for almost 10 years now: since Trump’s run for the presidency in the US and since Russian military aggression became impossible to ignore. The consensus was that it had much to do with social media and how easily it could be manipulated.
I always wonder if this focus on AI is a way to distract from and derail debates about social media regulation.
We live in a world where people think Biden banned abortion because it happened while he was president. What happens when those people start seeing and hearing AI recordings telling them the worst wacko shit you can possibly imagine?
This is the best summary I could come up with:
LONDON (AP) — False and misleading information supercharged with cutting-edge artificial intelligence that threatens to erode democracy and polarize society is the top immediate risk to the global economy, the World Economic Forum said in a report Wednesday.
The report listed misinformation and disinformation as the most severe risk over the next two years, highlighting how rapid advances in technology also are creating new problems or making existing ones worse.
The authors worry that the boom in generative AI chatbots like ChatGPT means that creating sophisticated synthetic content that can be used to manipulate groups of people won’t be limited any longer to those with specialized skills.
AI-powered misinformation and disinformation is emerging as a risk just as billions of people in a slew of countries, including large economies like the United States, Britain, Indonesia, India, Mexico, and Pakistan, are set to head to the polls this year and next, the report said.
Fake information also could be used to fuel questions about the legitimacy of elected governments, “which means that democratic processes could be eroded, and it would also drive societal polarization even further,” Klint said.
No. 1 threat, followed by four other environmental-related risks: critical change to Earth systems; biodiversity loss and ecosystem collapse; and natural resource shortages.
The original article contains 523 words, the summary contains 210 words. Saved 60%. I’m a bot and I’m open source!
What a fucking joke. Those monocle wearing cunts at Davos are the biggest threat humanity faces and they fucking know it.
Eat The motherfucking Rich
i remember when it was asbestos.
and then at some point it changed to the ozone.
are we at scary ai now? or is this one just nonsense?
Must be lovely to have cool breeze enter through one ear and leave through another.
what are you smokin. i pointed out actual success stories and asked if this was one of those, or bullshit.
your reply is oh-so-helpful. thanks.
i pointed out actual success stories
That’s not how it comes across. You sound like people who ask ‘why did we forget about the ozone layer after the 90s?’.
Asbestos and holes in the ozone layer were real issues though, and both have now been more or less resolved. It’s not like new problems mean the previous ones weren’t valid.
It’s climate.