As Large Language Models become more ubiquitous across domains, it becomes important to examine their inherent limitations critically. This work argues that hallucinations in language models are not merely occasional errors but an inevitable feature of these systems. We demonstrate that hallucinations stem from the fundamental mathematical and logical structure of LLMs, and that it is therefore impossible to eliminate them through architectural improvements, dataset enhancements, or fact-checking mechanisms. Our analysis draws on computational theory and Gödel's First Incompleteness Theorem, alongside the undecidability of problems such as the Halting, Emptiness, and Acceptance Problems. We show that every stage of the LLM pipeline, from training data compilation to fact retrieval, intent classification, and text generation, carries a non-zero probability of producing hallucinations. This work introduces the concept of Structural Hallucination as an intrinsic property of these systems. By establishing the mathematical certainty of hallucinations, we challenge the prevailing notion that they can be fully mitigated.
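For readers unfamiliar with the undecidability results the abstract leans on, here is a minimal sketch of the classic Halting Problem diagonalization argument in Python. The names `halts`, `paradox`, and `PARADOX_SOURCE` are illustrative assumptions for this sketch, not code or notation from the paper.

```python
# Hedged sketch (not from the paper): the standard diagonalization argument
# for the undecidability of the Halting Problem, one of the results the
# abstract appeals to. `halts` and `PARADOX_SOURCE` are illustrative names.

def halts(program_source: str, program_input: str) -> bool:
    """Hypothetical total decider: True iff the program described by
    `program_source` halts on `program_input`. The contradiction sketched
    below shows no such always-correct, always-terminating function exists."""
    raise NotImplementedError("no such decider can exist")


PARADOX_SOURCE = '''
def paradox(source: str) -> None:
    # Ask the supposed decider about ourselves, then do the opposite:
    # loop forever if it predicts we halt, halt if it predicts we loop.
    if halts(source, source):
        while True:
            pass
'''

# Running paradox on its own source text is contradictory either way:
# halts(PARADOX_SOURCE, PARADOX_SOURCE) can be neither True nor False,
# so the assumed decider cannot exist. The paper lifts this style of
# reasoning to stages of the LLM pipeline (retrieval, intent
# classification, generation), each framed as a task that cannot be
# decided correctly in all cases.
```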
Yeah we all know that calculators are completely useless. 🙄
By truthful, I meant generating truthful new knowledge, not just performing calculations that we implemented and understand well. I agree that I could have phrased this better…
It’s amazing that you managed to try to pretend this thing will do what it cannot do.
AI in general? Sure, maybe at some point.
LLMs? Nope. Sorry. They’re basically an echo of sorts.
(As, you know, the study you’re posting under is showing.)
What is it I'm pretending that LLMs can do? I think you may be misreading me.
Did I say LLMs generate truthful new knowledge? Of course they don't do that.
Dude, I quoted you.
If you’re trying to dance around and say that you were saying “AI would do that” instead of LLMs… 1) Why? The AI=LLMs train has left the station, and 2) are you lost? Because the article you’re replying under is talking about LLMs.