- cross-posted to:
- technology@lemmy.world
“Training chatbots to engage with people and keep them coming back presented risks,” OpenAI’s former policy researcher Gretchen Krueger told the New York Times, adding that some harm to users “was not only foreseeable, it was foreseen.” Krueger left the company in the spring of 2024.
The concerns center mostly around a clash between OpenAI’s mission to increase daily chatbot users as an official for-profit, and its founding vision of a future where safe AI benefits humanity, one that it promised to follow as a former nonprofit.
emphasis mine lol smh.
Simply put, they don’t see the harm they’re causing as anything more than the cost of doing business. They’ll keep doing this because there are no meaningful consequences. Nobody has held them to account. I have little faith that the wrongful death suit penalties will be anything more than a rounding error to them, as per usual.
On the other hand, if that was a conversation a person was having with that 16yo kid, they’d already be sentenced to at least 20y.
If OpenAI is having a mental health crisis, maybe they should talk to ChatGPT about it.
“pAI” all the artists you stole from, fuckface.