I know it’s the minority opinion around here. But, I think AI companies are maybe not quite so good.
The fact that this can even be a sentence someone thought to utter is such a triumph of wealth over reality.
When you have a product that you know can and will be used harmfully, you can’t just say “but if you use it harmfully, we’re not responsible”.
OpenAI is undeniably responsible for deaths they facilitated, like this one.
I am not disagreeing, but you could say the same thing about knives.
Knives aren’t “intelligent”.
Knives don’t know their own terms of service, nor do they have any means of preventing usage that breaks those terms.
Knives aren’t a service, but a product.
You could not say the same thing about knives.
Of course the company that acknowledges that its technology is used for emotional and psychological support is going to blame those who use it for such purposes. Plus, falling back on the ToS means either they don’t know how to prevent such outcomes or they don’t want to.
I think it’s a little bit of both. They benefit greatly from people being addicted to their product, and “fixing” a neural network is fucking hard.
So I can just sell bombs freely, as long as I state in the ToS that they can’t be used for exploding. Got it. You’ll get a free sample, Sam.
They… what?

I’ve seen this song-and-dance routine before. Big Tobacco. Big Pharma. Big Gun. It’s always victim-blaming with these companies. Always.
I thought my opinion of them could not get any lower, yet somehow, with these latest developments, it has.
Well… it keeps working, so why would they do anything else?
OK…and whose fault is that? 😂
… All of us? That’s a societal problem. In the most abstract sense, bad people do bad things for personal benefit and are rewarded for it. Are you proposing a solution?
Well, the first and most obvious answer is that LLMs need to fall under an extensive regulatory framework: one that makes quite a number of their use cases effectively illegal, and subjects still other use cases to science-backed harm mitigation. There also need to be systemic corrections to the financial markets and business law, such that a company like OpenAI in its recent or present form couldn’t exist at all.
But unfortunately, that’s not the world we live in (at least in America). Future generations will pay for our gross negligence, once again.
I’m not a native speaker, so sometimes I use AI to grammar-check me and make sure I’m not talking nonsense. Just the other day I wanted to make a joke about waterboarding and asked the AI to check it. It said it couldn’t because the joke involved torture; then I said it was for a fictional work, and it checked it. Basically what the boy did.
Honestly, the whole thing reads like shitty parents trying to find someone else to blame.

Well, that makes it all better.