

Refusing to use AI tools or output. Sabotage!
Definitely guilty of this. Refused to use AI-generated output when it was clearly hallucinated BS from start to finish (repeatedly!).
I work in the field of law/accounting/compliance, btw.
I believe that promptfondlers and boosters are particularly good at “kissing up”, which may help their careers even during an AI winter. This is something we have to be prepared for, sadly. However, some of those people could still be in for a rude awakening if someone actually pays attention to the quality and usefulness of their work.
Aren’t most people ordering their fast food through apps nowadays anyway? Isn’t this slightly more customer-friendly than AI order bots because it is at least a deterministic system?
Oh, I forgot, these apps will probably be vibe-coded soon too. Never mind.
More than two decades ago, I dabbled a bit in PHP, MySQL etc. for hobbyist purposes. Even back then, I would have taken stronger precautions, even for some silly database on hosted webspace. Apparently, some of those techbros live in a different universe.
Nice! I could almost swear I heard some of these in real life.
When an AI creates fake legal citations, for example, and the prompt wasn’t something along the lines of “Please make up X”, I don’t know how the user could be blamed for this. Yet, people keep claiming that outputs like this could only happen due to “wrong prompting”. At the same time, we are being told that AI could easily replace nearly all lawyers because it is that great at lawyerly stuff (supposedly).
To put it more bluntly: Yes, I believe this is mainly used as an excuse by AI boosters to distract from the poor quality of their product. At the same time, as you mentioned, there are people who genuinely consider themselves “prompting wizards”, usually because they are either too lazy or too gullible to question the chatbot’s output.
I think this is more about plausible deniability: If people report getting wrong answers from a chatbot, this is surely only because of their insufficient “prompting skills”.
Oddly enough, the laziest and most gullible chatbot users tend to report the smallest number of hallucinations. There seems to be a correlation between laziness, gullibility and “great prompting skills”.
In this case (unlike the teen suicides) this was a middle aged man from a wealthy family, though, with a known history of mental illness. Quite likely, he would have had sufficient access to professional help. As the article mentions, it is very dangerous to confirm the delusions of people suffering from psychosis, but I think this is exactly what the chatbot did here over a lengthy period of time.
To me, in terms of the chatbot’s role, this seems possibly even more damning than the suicides. Apparently, the chatbot didn’t just support this man’s delusions about his mother and his ex-girlfriend being after him, but even made up additional delusions on its own, further “incriminating” various people including his mother, whom he eventually killed. In addition, the chatbot reportedly gave the man a “Delusional Risk Score” of “Near zero”.
On the other hand, I’m sure people are going to come up with excuses even for this by blaming the user, his mental illness, his mother or even society at large.
because made-up stats/sources will get their entire grift thrown out if they’re discovered
I believe it is not just that. Making up some of those references as a human (in a way that sounds credible) would require quite a lot of effort and creativity. I think this is a case where the AI actually performs “excellently” at a task that is less than useless in practice.
This is a theory I had put forward before: Made-up (but plausible-sounding) sources are probably one of the few reliable “AI detectors.” Lazy people would not normally bother to come up with something like this themselves.
The most useful thing would be if mid-level users had a system where they could just go “I want these cells to be filled with the second word of the info of the cell next to it”,
In such a case, it would also be very useful if the AI asked for clarification first, such as: “By ‘the cell next to it’, you mean the cells in column No. xxx, is that correct?”
Now I wonder whether AI chatbots typically do that. In my (limited) experience, they often don’t. They tend to hallucinate an answer rather than ask for clarification, and if the answer is wrong, I’m supposedly to blame because I prompted them wrong.
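For what it’s worth, that particular request doesn’t even need an AI. Here is a minimal sketch in Python of the deterministic version (the sample rows and the “second word” rule are made up purely for illustration); there is nothing left to guess once the input is named explicitly:

```python
# Deterministic "fill these cells with the second word of the cell next to it".
# Sample rows are invented purely for illustration.
rows = ["Invoice 2024-001 paid", "Invoice 2024-002 open", "Reminder"]

def second_word(text: str) -> str:
    """Return the second whitespace-separated word, or '' if there is none."""
    words = text.split()
    return words[1] if len(words) > 1 else ""

filled = [second_word(r) for r in rows]
print(filled)  # ['2024-001', '2024-002', '']
```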
Also, AI is super cheap, supposedly, because it is only $0.40 an hour (where did that number come from?). Unlike humans, AI doesn’t need any vacations and is never sick, either. Furthermore, it is never to blame for any mistakes. The user always is. So at the very least, we still need humans to shoulder all the blame, I guess.
This week I heard that supposedly, all of those failed AI initiatives did in fact deliver the promised 40% productivity gains, but the companies (supposedly) didn’t reap any returns “because they failed to make the necessary organizational changes” (which happens all the time, supposedly).
Is this the new “official” talking point?
Also, according to the university professor (!) who gave the talk, the blockchain and web3 are soon going to solve the problems related to AI-generated deepfakes. They were dead serious, apparently. And someone paid them to give that talk.
What happened to good old dice?
I’m not even sure I understand the point of this supposed “feature”. Isn’t their business model mainly targeted at people who want to sell merch to their fanbase or their followers? In this case, I would imagine that most creators would want strong control over the final product in order to protect their “brand”. This seems very different from stock photography / stock art, where creators knowingly relinquish (most) control over how their work is being used.
It’s a bit tangential, but using ChatGPT to write a press release and then being unable to answer any critical questions about it is a little bit like using an app to climb a mountain wearing shorts and flip-flops without checking the weather first and then being unable to climb back down once the inevitable thunderstorm has started.
A while ago, I uploaded a .json file to a chatbot (MS Copilot, I believe). It was a perfectly valid .json, with just one colon removed (by me). The chatbot was unable to identify the problem. Instead, it claimed to have found various other “errors” in the file. It would be interesting to know whether other models (such as GPT-5) would perform any better here, as to me (as a layperson) this sounds somewhat similar to the letter-counting problem.
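For comparison, a deterministic parser pinpoints exactly what the chatbot could not. A minimal sketch in Python (the file name is hypothetical):

```python
import json

# Deterministic JSON validation: a real parser reports the exact position
# of the broken syntax instead of inventing unrelated "errors".
try:
    with open("broken.json", encoding="utf-8") as f:
        json.load(f)
    print("Valid JSON")
except json.JSONDecodeError as e:
    # e.g. "Expecting ':' delimiter: line 7 column 18 (char 143)"
    print(f"{e.msg}: line {e.lineno} column {e.colno}")
```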
Maybe it’s also considered sabotage if people (like me) try prompting the AI with about 5 to 10 different questions they are knowledgeable about, get wrong (but smart-sounding) answers every time (despite clearly worded prompts) and then refuse to continue trying. I guess you’re expected to keep trying with different questions until one correct answer comes out, and then use that one to “evangelize” about the virtues of AI.