

Modern move-money-between-pockets-for-profit economics seems to give The Hitchhiker’s Guide’s bistromathics a run for its money.
It’s not always easy to distinguish between existentialism and a bad mood.
I wonder what this means for US GDP
Don’t worry, unchecked inflation and increasing housing costs will keep the GDP propped up at least for a while longer.
Zitron taking every opportunity to shit on Scott’s AI2027 is kind of cathartic, ngl
He has capital L Lawfulness concerns. About the parent and the child being asymmetrically skilled in context engineering. Which apparently is the main reason kids shouldn’t trust LLM output.
Him showing his ass with the memory comment is just a bonus.
I feel dumber for having read that, and not in the intellectually humbled way.
This hits differently over the recent news that ChatGPT encouraged and aided a teen suicide.
Kelsey Piper xhitted: Never thought I’d become a ‘take your relationship problems to ChatGPT’ person but when the 8yo and I have an argument it actually works really well to mutually agree on an account of events for Claude and then ask for its opinion
I think she considers the AIs far more knowledgeable than me about reasonable human behavior so if I say something that’s no reason to think it’s true but if Claude says it then it at least merits serious consideration
Not who you asked, but both python and javascript have code smell as a core language feature and we are stuck with them by accident of history, not because anyone in particular thought it would be such a great idea for them to overshoot their original purpose to such a comical degree.
Also there’s a long history of languages meant to be used as an introduction to coding being spun off into ridiculously verbose enterprise equivalents that then everyone had to deal with (see delphi and visual basic) so there’s certainly a case for refusing to cede any more ground to dollar store editions of useful stuff under the guise of education.
AI innovation in this space usually means automatically adding stuff to the model’s context.
It probably started meaning the (failed) build output got added in every iteration, but it’s entirely possible to feed the LLM debugger data from a runtime crash and hope something usable happens.
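The loop being described — rerun the build, dump whatever failed into the prompt, try again — can be sketched roughly like this; `llm_fix` and `build_cmd` are hypothetical stand-ins, not any real tool’s API:

```python
import subprocess

def build_fix_loop(llm_fix, build_cmd=("make",), max_iters=3):
    """Naive agent loop: run the build, and on failure append the
    failed build output to the context handed to a hypothetical
    llm_fix(context) callable, then retry."""
    context = []
    for _ in range(max_iters):
        result = subprocess.run(list(build_cmd), capture_output=True, text=True)
        if result.returncode == 0:
            return True  # build passed, declare victory
        # the "innovation": stuff the failure output into the context
        context.append(result.stderr)
        llm_fix("\n".join(context))  # and hope something usable happens
    return False
```

Feeding in crash or debugger output instead of compiler errors is the same loop with a different `context.append`.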
When I was at computer toucher school around the start of the century, what was taught under the moniker of AI was (I think) fuzzy logic, incremental optimization and graph algorithms, and neural networks.
AI is a sci-fi trope far more than it ever was a well-defined research topic.
Anyone who said this about their product would almost certainly be lying, but these guys are extra lying.
For sure, a blockchain-based agentic LLM that learns as it goes sounds like someone describing a flying elephant wearing an inflatable life jacket.
Nobody’s using datasets made of copyrighted literature and 4chan to teach robots how to move, what are you even on about.
@dgerard is never going to run out of content for pivot, is he:
“AI is obviously gonna one-shot the human limbic system,” referring to the part of the brain responsible for human emotions. “That said, I predict — counter-intuitively — that it will increase the birth rate!” he continued without explanation. “Mark my words. Also, we’re gonna program it that way.”
Risk checks for financial services: $1M saved annually on outsourced risk management
Since I doubt they had time to use the tools for a full year, this is probably just the ~$85K they saved in one month by firing/ending partnerships with the humans involved in risk assessment, multiplied by twelve.
In the long run I’m betting that exclusively doing risk assessment with software that not only can’t do basic math but actually treats numbers as words isn’t going to be a net positive for their bottom line, especially if their customers also get it into their heads that they could ditch the middleman and use a chatbot directly themselves.
Meanwhile on /r/programmingcirclejerk sneering hn:
OP: We keep talking about “AI replacing coders,” but the real shift might be that coding itself stops looking like coding. If prompts become the de facto way to create applications/develop systems in the future, maybe programming languages will just be baggage we’ll need to unlearn.
Comment: The future of coding is jerking off while waiting for AI managers to do your project for you, then retrying the prompt when they get it wrong. If gooning becomes the de facto way to program, maybe expecting to cum will be baggage we’ll need to unlearn.
That’s about what I was thinking, I’m completely ok with the weird rpg aspect.
Regarding the second and third point though I’ll admit I thought the whole thing was just yud indulging, I missed that it’s also explicitly meant as rationalist esoterica.
Is the scoop that besides being an EA mouthpiece KP is also into the weird stuff?
Overall more interesting than I expected. On the Leverage Research cult:
Routine tasks, such as deciding whose turn it was to pick up the groceries, required working around other people’s beliefs in demons, magic, and other paranormal phenomena. Eventually these beliefs collided with preexisting social conflict, and Leverage broke apart into factions that fought with each other internally through occult rituals.
The people at Zitron’s subreddit have thoughts
I mean, if anything, all he proved is that DeepSeek-R1 could be profitable
if any of this was true these companies would actually be making money