

Nobody wants to join a cult founded on the Daria/Hellraiser crossover I wrote while emotionally processing chronic pain. I feel very mid-status.
I found this because Greg Egan shared it elsewhere on fedi:
I am now being required by my day job to use an AI assistant to write code. I have also been informed that my usage of AI assistants will be monitored and decisions about my career will be based on those metrics.
Hey, I haven’t seen this going around yet, but itch.io is also taking down books with no erotic content that are just labeled as LGBTQIA+
So that’s super cool and totally not what I thought they were going to do next 🙃
https://bsky.app/profile/marsadler.bsky.social/post/3luov7rkles2u
And a relevant petition from the ACLU:
https://action.aclu.org/petition/mastercard-sex-work-work-end-your-unjust-policy
It’s “general intelligence”, the eugenicist wet dream of a supposedly quantitative measure of how the better class of humans do brain good.
From Yud’s remarks on Xitter:
As much as people might like to joke about how little skill it takes to found a $2B investment fund, it isn’t actually true that you can just saunter in as a psychotic IQ 80 person and do that.
Well, not with that attitude.
You must be skilled at persuasion, at wearing masks, at fitting in, at knowing what is expected of you;
If “wearing masks” really is a skill they need, then they are all susceptible to going insane and hiding it from their coworkers. Really makes you think ™.
you must outperform other people also trying to do that, who’d like that $2B for themselves. Winning that competition requires g-factor and conscientious effort over a period.
zoom and enhance
g-factor
<Kill Bill sirens.gif>
“This is not good news about which sort of humans ChatGPT can eat,” mused Yudkowsky. “Yes yes, I’m sure the guy was atypically susceptible for a $2 billion fund manager,” he continued. “It is nonetheless a small iota of bad news about how good ChatGPT is at producing ChatGPT psychosis; it contradicts the narrative where this only happens to people sufficiently low-status that AI companies should be allowed to break them.”
Is this “narrative” in the room with us right now?
It’s reassuring to know that times change, but Yud will always be impressed by the virtues of the rich.
Here’s their page of instructions, written as usual by the children who really liked programming the family VCR:
It also gave Allie Brosh cancer:
https://bsky.app/profile/erinabanks.bsky.social/post/3ltxeyn4wtc23
Want to feel depressed? Over 2,000 Wikipedia articles, on topics from Morocco to Natalie Portman to Sinn Féin, are corrupted by ChatGPT. And that’s just the obvious ones.
https://xcancel.com/jasonlk/status/1946069562723897802
Vibe Coding Day 8,
I’m not even out of bed yet and I’m already planning my day on @Replit.
Today is AI Day, to really add AI to our algo.
[…]
If @Replit deleted my database between my last session and now there will be hell to pay
I had to attend a presentation from one of these guys, trying to tell a room full of journalists that LLMs could replace us & we needed to adapt by using it and I couldn’t stop thinking that an LLM could never be a trans journalist, but it could probably replace the guy giving the presentation.
After understanding a lot of things it’s clear that it didn’t. And it fooled me for two weeks.
I have learned my lesson and now I am using it to generate one page at a time.
qu1j0t3 replies:
that’s, uh, not really the ideal takeaway from this lesson
We—yes, even you—are using some version of AI, or some tools that have LLMs or machine learning in them in some way shape or form already
Fucking ghastly equivocation. Not just between “LLMs” and “machine learning”, but between opening a website that has a chatbot icon I never click and actually wasting my time asking questions to the slop machine.
I like how quoting Grimes lyrics makes the banality of these people thuddingly clear.
Yud:
ChatGPT has already broken marriages, and hot AI girls are on track to remove a lot of men from the mating pool.
And suddenly I realized that I never want to hear a Rationalist say the words “mating pool”.
(I fired up xcancel to see if any of the usual suspects were saying anything eminently sneerable. Yudkowsky is re-xitting Hanania and some random guy who believes in g. Maybe he should see if the Pioneer Fund will bankroll publicity for his new book…)
The context for this essay is serious, high-stakes communication: papers, technical blog posts, and tweet threads.
More recently, in the comments:
After reading your comments and @Jiro’s below, and discussing with LLMs on various settings, I think I was too strong in saying…
It’s like watching people volunteer for a lobotomy.
Kaneda just scooting to the side at the 14:05 mark like a Looney Tunes character caught with their pants down is comedy gold. I want to loop it with a MIDI rendition of Joplin’s “The Entertainer”.
The top comment begins thusly:
I think you make a reasonably compelling case, but when I think about the practicality of this in my own life it’s pretty hard to imagine not spending any time talking to chatbots. ChatGPT, Claude and others are extremely useful.
I didn’t think it was possible, but the perfection continues! Still, no notes!
https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/
When developers are allowed to use AI tools, they take 19% longer to complete issues—a significant slowdown that goes against developer beliefs and expert forecasts. This gap between perception and reality is striking: developers expected AI to speed them up by 24%, and even after experiencing the slowdown, they still believed AI had sped them up by 20%.
womp, hold on let me finish, womp
Some archive links here: https://mastodon.art/@indieDevCurator/114909188230349545