I saw this on Reddit and thought it was a joke, but it’s not. GPT-5 mini (and maybe sometimes GPT-5) gives this answer. You can check it yourself, or see someone else’s similar conversation here: https://chatgpt.com/share/689a4f6f-18a4-8013-bc2e-84d11c763a99
You don’t understand how AI works under the hood. It can’t tell you it’s lying, because it doesn’t know the concept of lying. In fact, it doesn’t know ANYTHING, literally. It’s not thinking, it’s predicting: it’s estimating what a plausible answer would look like based on its training data.
You don’t actually get real answers to your questions - you only get text that the model determined would most plausibly fit your prompt.
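To make the “it’s predicting” point concrete, here’s a minimal sketch of what a language model actually computes at each step, assuming the Hugging Face transformers library and the small gpt2 checkpoint (the model choice and prompt are just illustrative; any causal LM works the same way):

```python
# Minimal sketch: a causal language model only outputs a probability
# distribution over the next token. Nothing here checks truth.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The capital of Australia is"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Distribution over the next token, given everything seen so far.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r:>12}  p={p.item():.3f}")
```

Whether the top-ranked continuation happens to be factually correct depends entirely on the statistics of the training data; there is no step in this loop that can distinguish “true” from “plausible”.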
I understand why my comment was unclear, but I was attempting to underscore the fact that it cannot determine the difference. That’s why I included the whole “statistically plausible” bit. My point is that AI as it currently functions is fundamentally flawed for several use cases because it cannot operate as it “should”. It just says things with no ability to determine their veracity.
The first portion of my comment was addressing the suggestion that there was a “reason” to lie. My point is that there is no good justification for providing factually incorrect answers, and currently there is no way to stop AI from doing so. Hope that clears things up.
Correct.
Therefore, selling it as something capable of reliably answering questions correctly is a criminal scam.