Lot of angles here. My first reaction is another set of parents blaming tech for their lack of involvement and their sons personal actions.
I also think too the personification of chat bots at all levels is kind evil. Some people are just really suseptable to that and the bot training and AI field itself encourages this miss-conception.
There are lot of other issues with these bots too. I wonder how many people are really up to using them properly with the appropriate skepticism or if our society is ready for all of the missuse.
I think you’re right here. I use them for work, and I’m fully aware of their limitations and know when I can use an answer and when I see garbage. However, me being here on Lemmy shows that we are not the average, and the vast majority of people takes it as the gods honest truth.
I’ll be blunt. I think the parents are blaming the AI platform unfairly. There’s not an excuse for lack of parental oversight and monitoring of the teen’s mood.
On the other hand; I don’t think it’s exactly desirable for the AI to help you jail break it. That should probably be fixed.
AI is not capable of doing wrong or evil. It is a tool; as a hammer or a notepad is.
AI is not capable of doing wrong or evil. It is a tool; as a hammer or a notepad is.
A tool does exactly what you do with it. A hammer can pound nails or break skulls but it’s always the person behind the tool who causes the action. Generative AI is not like that at all. If it’s a tool, you aren’t necessarily able to control what it does under your direction.
If it’s a tool, you aren’t necessarily able to control what it does under your direction.
This is false. A tool, by definition, is controlled by the user of said tool. AI is controlled by user input. Any AI that cannot be controlled by said input is said to be “misaligned” and is considered a broken tool. OpenAI lays out clearly what it’s AI is trained to do and not do. It is not responsible if you use the tool they created in a way that is not recommended.
Any AI prompt fits the definition of a tool:
From Merriam-Webster:
2b: an element of a computer program (such as a graphics application) that activates and controls a particular function
In my opinion; the AI should not be equipped to bypass it’s guardrails even when prompted to do so. A hammer did not tell you to use it as a drill; it’s user decided to do that.
The user alone has the creativity to use the tool to achieve their goal.
AI reads your input and guesses what to output. It’s just really good at that. It has no concept on the actual meaning of those words and how they will be interpreted.
The family is blaming AI. But the family failed to realize anything wrong with their son and the son didn’t feel safe enough to discuss anything with his parents. Lots of blame could be thrown around instead of addressing the larger issue. Which is that the mental health system is beyond broken and somehow talking about mental health is still taboo.
As for jail breaking AI…I don’t think a private corporation should have any input in what I say, believe or think. The hammer manufacturers can’t stop me from using it as a drill. This whole argument goes back to the old who do you blame question…the gun manufactures, the gun stores or the murderers with the guns.
AI reads your input and guesses what to output. It’s just really good at that. It has no concept on the actual meaning of those words and how they will be interpreted.
Yep. AI is a tool. The user is still responsible for the right-ness or wrong-ness of how they choose to use it.
Lots of blame could be thrown around instead of addressing the larger issue.
You’re moving the goalposts here to absolve the parents’ lack of care. That isn’t right.
As for jail breaking AI…I don’t think a private corporation should have any input in what I say, believe or think. The hammer manufacturers can’t stop me from using it as a drill. This whole argument goes back to the old who do you blame question…the gun manufactures, the gun stores or the murderers with the guns.
Oh look, more goalpost movement and even reframing my argument, which was simple: The AI should not assist the user in jailbreaking itself.
Seriously; do not reply again. Your arguments did not work.
So strong. I’ll reply to whatever I want. You aren’t my mom.
Why is it always when chatbots roleplay as Game of Thrones characters
I haven’t watched the series but everything I know is that if you can remember the name of a character, they probably die within the next few episodes.