Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
altman is the waluigi to musk’s wario
gemini isn’t even trying now
Not a sneer, but there is this yt’er called the Elephant Graveyard (who I know nothing about apart from these vids) who did a three-part series on Joe Rogan, the downfall of comedy, and hyperreality, which is weirdly relevant, esp part 3 where suddenly there are some surprise visits.
Part 1: https://www.youtube.com/watch?v=7EuKibmlll4
New article on AI scraping just hit The Register, with some choice quotes from Anubis dev Xe Iaso. Xe herself has given some additional thoughts.
That’s weird, The Register’s versions of the quotes are different (not just pared down).
You don’t say, the Register, taking journalistic integrity less than seriously? (Hi, it’s me, a person who has been annoyed by their editorial choices for more than two decades now)
Louisiana has to build three new natural gas power plants to accommodate the “AI” data center that Meta just crammed through because said center will use Three Times as much electricity (and, thus, attendant resources) as the Entire City Of New Orleans, every year.
https://bsky.app/profile/wolvendamien.bsky.social/post/3lwyxhchxos2g
Ultra-rare NIMBY W
Not a sneer in the classical sense. It seems that Extropic AI is about to finally ship something.
There is still no actual data about the hardware on their website…
From the people who brought you web3:
Furby3
In six months, they’ll be making a killing selling em to people who are still mourning their AI waifus and husbandos.
Meanwhile on /r/programmingcirclejerk, sneering at hn:
transcription
OP: We keep talking about “AI replacing coders,” but the real shift might be that coding itself stops looking like coding. If prompts become the de facto way to create applications/developing systems in the future, maybe programming languages will just be baggage we’ll need to unlearn.
Comment: The future of coding is jerking off while waiting for AI managers to do your project for you, then retrying the prompt when they get it wrong. If gooning becomes the de facto way to program, maybe expecting to cum will be baggage we’ll need to unlearn.
Promptfondlers are tragically close to the point. Like I was saying yesterday about translators, the future of programming in AI hell is going to be senior developers using their knowledge and experience to fix the bullshit that the LLM outputs. What’s going to happen when they retire and there’s nobody with that knowledge and experience to take their place? I’ll have sold off my shares by then, I’m sure.
In case you needed more evidence that the Atlantic is a shitty rag.
The implication that Soares / MIRI were doing serious research before is frankly journalist malpractice. Matteo Wong can go pound sand.
It immediately made me wonder about his background. He’s quite young and looks to be just out of college. If I had to guess, I’d say he was probably a member of the EA club at Harvard.
His group chats with Kevin Roose must be epic.
Just earlier this month, he was brushing off all the problems with GPT-5 and saying that “OpenAI is learning from its greatest success.” He wrapped up a whole story with the following:
At this stage of the AI boom, when every major chatbot is legitimately helpful in numerous ways, benchmarks, science, and rigor feel almost insignificant. What matters is how the chatbot feels—and, in the case of the Google integrations, that it can span your entire digital life. Before OpenAI builds artificial general intelligence—a model that can do basically any knowledge work as well as a human, and the first step, in the company’s narrative, toward overhauling the economy and curing all disease—it is aiming to build an artificial general assistant. This is a model that aims to do everything, fit for a company that wants to be everywhere.
Weaselly little promptfucker.
The Atlantic puts the “shit” in “shitlib”
The phrase “adorned with academic ornamentation” sounds like damning with faint praise, but apparently they just mean it as actual praise, because the rot has reached their brains.
also, they misspelled “Eliezer”, lol
I’ve created a new godlike AI model. It’s the Eliziest yet.
My copy of “the singularity is near” also does that btw.
(E: Still looking to confirm that this isn’t just my copy, or if it is common, but when I’m in a library I never think to look for the book, and I don’t think I have ever seen the book anywhere anyway. It is the ‘our sole responsibility…’ quote, no idea which page, but it was early on in the book. ‘Yudnowsky’).
Image and transcript
Transcript: Our sole responsibility is to produce something smarter than we are; any problems beyond that are not ours to solve…[T]here are no hard problems, only problems that are hard to a certain level of intelligence. Move the smallest bit upwards [in level of intelligence], and some problems will suddenly move from “impossible” to “obvious.” Move a substantial degree upwards and all of them will become obvious.
—ELIEZER S. YUDNOWSKY, STARING INTO THE SINGULARITY, 1996
Transcript end.
How little has changed; he has always believed intelligence is magic. Also lol on the ‘smallest bit’. Not totally fair to sneer at this as he wrote it when he was 17, but oof, being quoted in a book like this will not have been good for Yudkowsky’s ego.
New edition of AI Killed My Job, focusing on how translators got fucked over by the AI bubble.
The thing that kills me about this is that, speaking as a tragically monolingual person, the MTPE work doesn’t sound like it’s actually less skilled than directly translating from scratch. Like, the skill was never in being able to type fast enough or read faster or whatever, it was in the difficult process of considering the meaning of what was being said and adapting it to another language and culture. If you’re editing chatbot output you’re still doing all of that skilled work, but being asked to accept half as much money for it because a robot made a first attempt.
In terms of that old joke about auto mechanics, AI is automating the part where you smack the engine in the right place, but you still need to know where to hit it in order to evaluate whether it did a good job.
It’s also a lot less pleasant of a task, it’s like wearing a straitjacket, and compared to CAT (e.g. automatically using glossaries for technical terms) it actually slows you down, if the translation is quite far from how you would naturally phrase things.
Source: Parents are professional translators. (They’ve certainly seen work dry up; they don’t do MTPE, it’s still not really worth their time; they still get $$$ for critically important stuff, and live interpreting [live interpreting is definitely a skill that takes time to learn compared to translation].)
https://bsky.app/profile/robertdownen.bsky.social/post/3lwwntxygqc2w Thiel doing a neo-nazi thing. For people keeping score.
Here’s a blog post I found via HN:
Physics Grifters: Eric Weinstein, Sabine Hossenfelder, and a Crisis of Credibility
Author works on ML for DeepMind but doesn’t seem to be an out and out promptfondler.
Oh, man, I have opinions about the people in this story. But for now I’ll just comment on this bit:
Note that before this incident, the Malaney-Weinstein work received little attention due to its limited significance and impact. Despite this, Weinstein has suggested that it is worthy of a Nobel prize and claimed (with the support of Brian Keating) that it is “the most deep insight in mathematical economics of the last 25-50 years”. In that same podcast episode, Weinstein also makes the incendiary claim that Juan Maldacena stole such ideas from him and his wife.
The thing is, you can go and look up what Maldacena said about gauge theory and economics. He very obviously saw an article in the widely-read American Journal of Physics, which points back to prior work by K. N. Ilinski and others. And this thread goes back at least to a 1994 paper by Lane Hughston, i.e., years before Pia Malaney’s PhD thesis. I’ve read both; Hughston’s is more detailed and more clear.
DRAMATIS PERSONAE
- Michael Shermer: dry and limp writer, horribly dull public speaker, sex pest
- Sabine Hossenfelder: transphobe, endorser of sex pest Lawrence Krauss, on the subject of physics either incompetent or maliciously deceptive
- Eric Weinstein: Thielboy, he totally invented a Theory of Everything, for realsies, honest, but the dog ate his equations
- Curt Jaimungal: podcast bro who doesn’t even rate a Wikipedia article, but in searching for one we learn that he has platformed a Bell Curve stan
- Scott Aaronson: author of a blog named for a sex fantasy, he has the superpower of making people sympathize with a cop
- Chris Langan: racist, egomaniacal kook
I once randomly found Hossenfelder’s YT channel, it had a video about climate change and someone linked it somewhere, I didn’t know who she was. That video seemed fine, it correctly pointed out the urgency of the matter, and while I don’t know enough climate science to say much about the veracity of all its content, nothing stuck out as particularly weird to me. So I looked at some other videos from the channel… and boooooy did I quickly discover some serious conspiracy-style nonsense stuff. Real “the cabal of physicists are suppressing the truth” vibes, including “I got this email which I will read to you but I can’t tell you who it’s from, but it’s the ultimate proof” (both not quotes, just how I’d summarize the content…)
he has the superpower of making people sympathize with a cop
He’s second only to the average sovereign citizen in that field.
has anyone worked out who Hossenfelder’s new backer is yet
DRAMATIS PERSONAE
Belligerents
Author works on ML for DeepMind but doesn’t seem to be an out and out promptfondler.
Quote from this post:
I found myself in a prolonged discussion with Mark Bishop, who was quite pessimistic about the capabilities of large language models. Drawing on his expertise in theory of mind, he adamantly claimed that LLMs do not understand anything – at least not according to a proper interpretation of the word “understand”. While Mark has clearly spent much more time thinking about this issue than I have, I found his remarks overly dismissive, and we did not see eye-to-eye.
Based on this I’d say the author is LLM-pilled at least.
However, a fruitful outcome of our discussion was his suggestion that I read John Searle’s original Chinese Room argument paper. Though I was familiar with the argument from its prominence in scientific and philosophical circles, I had never read the paper myself. I’m glad to have now done so, and I can report that it has profoundly influenced my thinking – but the details of that will be for another debate or blog post.
Best case scenario is that the author comes around to the stochastic parrot model of LLMs.
E: also from that post, rearranged slightly for readability here. (the […]* parts are swapped in the original)
My debate panel this year was a fiery one, a stark contrast to the tame one I had in 2023. I was joined by Jane Teller and Yanis Varoufakis to discuss the role of technology in autonomy and privacy. [[I was] the lone voice from a large tech company.]* I was interrupted by Yanis in my opening remarks, with claps from the audience raining down to reinforce his dissenting message. It was a largely tech-fearful gathering, with the other panelists and audience members concerned about the data harvesting performed by Big Tech and their ability to influence our decision-making. […]* I was perpetually in defense mode and received none of the applause that the others did.
So also author is tech-brained and not “tech-fearful”.
So state-owned power company Vattenfall here in Sweden are gonna investigate building “small modular reactors” as a response to government’s planned buildout of nuclear.
Either Rolls-Royce or GE Vernova are in the running.
Note that this is entirely dependent on the government guaranteeing a certain level of revenue (“risk sharing”), and of course that that level survives an eventual new government.
Rolls-Royce are looking at this as a big sack with a “£” on the side.
Interesting, wondering if they manage to get further in the process than our gov, which seems to restart the process every few years, and then either discovers nobody wants to do it (it being building bigger reactors, not the smaller ones, which iirc from a post here are not likely to work out) for a reasonable price, or the gov falls again over their lies about foreigners and we restart the whole voting cycle again. (It is getting really crazy: our fused green/labour party is now being called the dumbest stuff by the big rightwing liberal party (who are not openly far right, just courting it a lot)).
Our new elections are on 29 October. Let’s see what the ratio between formation and actually ruling is going to be this time. (Last time it took 223 days for a cabinet to form, and by my calculations they ruled for only 336 days.)
Nuclear has been a running sore in Swedish politics since the late 70s. Opposition to it represented the reaction to the classic employer-employee class detente in place since the 1930s where both the dominant Social Democrats and the opposition on the right were broadly in agreement that economic growth == good, and nuclear was a part of that. There was a referendum in the early 80s where the alternatives were classical Swedish: Yes, No, and “No, but we wait a few years”.
Decades have passed, and now being pro-nuclear is very right-coded, and while secretly the current Social Democrats are probably happy that we’re supposed to get more electrical power, there’s political hay to make opposing the racist shitheads. Add to that that financing this shit would actually mean more expensive electricity, and I doubt it will remain popular.
The Palladium/Bismarck Analysis e-magazine guys who push space colonization used to be known as Phalanx back in the day, just an fyi in case you guys didn’t know.
Gary asks the doomers, are you “feeling the agi” now kids?
To which Daniel K, our favorite guru, lets us know that he has officially
~~moved his goal posts~~ updated his timeline, so now the robogod doesn’t wipe us out until the year of our lorde 2029. It takes a big brain superforecaster to have to admit your four month old rapture prophecy was already off by at least 2 years omegalul
Also, love: updating towards my teammate (lmaou) who cowrote the manifesto but is now saying he never believed it. “The forecasts that don’t come true were just pranks bro, check my manifold score bro, im def capable of future sight, trust”
So, as I have been on a cult comparison kick lately: how did it go for those doomsday cults when the world didn’t end and they picked a new date? Did they become more radicalized or less? (I’m not sure myself; I’d assume the disappointed people leave, and the rest get worse.)
… prophecies, per se, almost never fail. They are instead component parts of a complex and interwoven belief system which tends to be very resilient to challenge from outsiders. While the rest of us might focus on the accuracy of an isolated claim as a test of a group’s legitimacy, those who are part of that group—and already accept its whole theology—may not be troubled by what seems to them like a minor mismatch. A few people might abandon the group, typically the newest or least-committed adherents, but the vast majority experience little cognitive dissonance and so make only minor adjustments to their beliefs. They carry on, often feeling more spiritually enriched as a result.
When Prophecy Fails is worth the read just for the narrative, he literally had his grad students join a UFO / Dianetics cult and take notes in the bathroom and kept it going for months. Really impressive amount of shoe leather compared to most modern psych research.
look at me, the thinking man, i update myself just like a computer beep boop beep boop
Clown world.
How many times will he need to revise his silly timeline before media figures like Kevin Roose stop treating him like some kind of respectable authority? Actually, I know the answer to that question. They’ll keep swallowing his garbage until the bubble finally bursts.
“Kevin Roose”? More like Kevin Rube, am I right? Holy shit, I actually am right.
And once it does they’ll quietly stop talking about it for a while to “focus on the human stories of those affected” or whatever until the nostalgic retrospectives can start along with the next thing.