Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
“AI is obviously gonna one-shot the human limbic system,” referring to the part of the brain responsible for human emotions. “That said, I predict — counter-intuitively — that it will increase the birth rate!” he continued without explanation. “Mark my words. Also, we’re gonna program it that way.”
Here’s my idea to increase the birth rate:
Make the world less of an all-consuming dystopian hellscape, so people can actually start and raise a family without ruining themselves, and can feel confident their children won’t have horrible lives.
@dgerard is never going to run out of content for pivot, is he:
BLOOMBERG BREAKING: Sam Altman promises that GPT-6 will generate Ghibli images with levels of piss yellow heretofore “unseen”
Ran across a viral post on Bluesky:
Unsurprisingly, the replies and quotes are universally outraged at the news.
altman is the waluigi to musk’s wario
gemini isn’t even trying now
Not a sneer, but there’s this yt’er called the Elephant Graveyard (who I know nothing about apart from these vids) who did a three-part series on Joe Rogan, the downfall of comedy, and hyperreality. It’s weirdly relevant, esp part 3, where suddenly there are some surprise visits.
Part 1: https://www.youtube.com/watch?v=7EuKibmlll4
New article on AI scraping just hit The Register, with some choice quotes from Anubis dev Xe Iaso. Xe herself has given some additional thoughts.
That’s weird, The Register’s versions of the quotes are different (not just pared down).
You don’t say, the Register, taking journalistic integrity less than seriously? (Hi, it’s me, a person who has been annoyed by their editorial choices for more than two decades now.)
Louisiana has to build three new natural gas power plants to accommodate the “AI” data center that Meta just crammed through because said center will use Three Times as much electricity (and, thus, attendant resources) as the Entire City Of New Orleans, every year.
https://bsky.app/profile/wolvendamien.bsky.social/post/3lwyxhchxos2g
Ultra-rare NIMBY W
Not a sneer in the classical sense. It seems that Extropic AI is about to finally ship something.
There is still no actual data about the hardware on their website…
From the people who brought you web3:
Furby3
In six months, they’ll be making a killing selling em to people who are still mourning their AI waifus and husbandos.
Meanwhile on /r/programmingcirclejerk sneering hn:
transcription
OP: We keep talking about “AI replacing coders,” but the real shift might be that coding itself stops looking like coding. If prompts become the de facto way to create applications/developing systems in the future, maybe programming languages will just be baggage we’ll need to unlearn.
Comment: The future of coding is jerking off while waiting for AI managers to do your project for you, then retrying the prompt when they get it wrong. If gooning becomes the de facto way to program, maybe expecting to cum will be baggage we’ll need to unlearn.
Promptfondlers are tragically close to the point. Like I was saying yesterday about translators, the future of programming in AI hell is going to be senior developers using their knowledge and experience to fix the bullshit that the LLM outputs. What’s going to happen when they retire and there’s nobody with that knowledge and experience to take their place? I’ll have sold off my shares by then, I’m sure.
In case you needed more evidence that the Atlantic is a shitty rag.
The implication that Soares / MIRI were doing serious research before is frankly journalist malpractice. Matteo Wong can go pound sand.
It immediately made me wonder about his background. He’s quite young and looks to be just out of college. If I had to guess, I’d say he was probably a member of the EA club at Harvard.
His group chats with Kevin Roose must be epic.
Just earlier this month, he was brushing off all the problems with GPT-5 and saying that “OpenAI is learning from its greatest success.” He wrapped up a whole story with the following:
At this stage of the AI boom, when every major chatbot is legitimately helpful in numerous ways, benchmarks, science, and rigor feel almost insignificant. What matters is how the chatbot feels—and, in the case of the Google integrations, that it can span your entire digital life. Before OpenAI builds artificial general intelligence—a model that can do basically any knowledge work as well as a human, and the first step, in the company’s narrative, toward overhauling the economy and curing all disease—it is aiming to build an artificial general assistant. This is a model that aims to do everything, fit for a company that wants to be everywhere.
Weaselly little promptfucker.
The Atlantic puts the “shit” in “shitlib”
The phrase “adorned with academic ornamentation” sounds like damning with faint praise, but apparently they just mean it as actual praise, because the rot has reached their brains.
also, they misspelled “Eliezer”, lol
I’ve created a new godlike AI model. It’s the Eliziest yet.
My copy of “the singularity is near” also does that btw.
(E: Still looking to confirm that this isn’t just my copy, or if it is common, but when I’m in a library I never think to look for the book, and I don’t think I have ever seen the book anywhere anyway. It is the ‘our sole responsibility…’ quote, no idea which page, but it was early on in the book. ‘Yudnowsky’.)
Image and transcript
Transcript: Our sole responsibility is to produce something smarter than we are; any problems beyond that are not ours to solve…[T]here are no hard problems, only problems that are hard to a certain level of intelligence. Move the smallest bit upwards [in level of intelligence], and some problems will suddenly move from “impossible” to “obvious.” Move a substantial degree upwards and all of them will become obvious.
—ELIEZER S. YUDNOWSKY, STARING INTO THE SINGULARITY, 1996
Transcript end.
How little has changed; he has always believed intelligence is magic. Also lol on the ‘smallest bit’. Not totally fair to sneer at something he wrote when he was 17, but oof, being quoted in a book like this will not have been good for Yudkowsky’s ego.
New edition of AI Killed My Job, focusing on how translators got fucked over by the AI bubble.
The thing that kills me about this is that, speaking as a tragically monolingual person, the MTPE work doesn’t sound like it’s actually less skilled than directly translating from scratch. Like, the skill was never in being able to type fast enough or read faster or whatever, it was in the difficult process of considering the meaning of what was being said and adapting it to another language and culture. If you’re editing chatbot output you’re still doing all of that skilled work, but being asked to accept half as much money for it because a robot made a first attempt.
In terms of that old joke about auto mechanics, AI is automating the part where you smack the engine in the right place, but you still need to know where to hit it in order to evaluate whether it did a good job.
It’s also a lot less pleasant a task; it’s like wearing a straitjacket, and compared to CAT tools (e.g. automatically applying glossaries for technical terms) it actually slows you down if the translation is quite far from how you would naturally phrase things.
Source: parents are professional translators. (They’ve certainly seen work dry up; they don’t do MTPE since it’s still not really worth their time; they still get $$$ for critically important stuff and live interpreting. [Live interpreting is definitely a skill that takes time to learn compared to translation.])
https://bsky.app/profile/robertdownen.bsky.social/post/3lwwntxygqc2w Thiel doing a neo-nazi thing. For people keeping score.