The Complexity of Theorem-Proving Procedures
1 Pith paper cites this work. Polarity classification is still indexing.
Fields: cs.CL (1)
Years: 2024 (1)
Verdicts: CONDITIONAL (1)
Representative citing paper:
Hallucination is Inevitable: An Innate Limitation of Large Language Models
Hallucinations are inevitable in LLMs because, by results from learning theory, no computable model can learn all computable functions.
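As a hedged sketch of why this holds (the enumeration and notation below are illustrative assumptions, not taken verbatim from the citing paper), the core argument is a diagonalization over computable models:

  % Diagonalization sketch: build a ground truth f that defeats every computable LLM.
  Let $h_1, h_2, \ldots$ be an enumeration of all computable LLMs and
  $s_1, s_2, \ldots$ an enumeration of all input strings. Define a
  ground-truth function $f$ by choosing, for each $i$, any value with
  \[
    f(s_i) \neq h_i(s_i) \qquad \text{for every } i \in \mathbb{N}.
  \]
  % Each h_i(s_i) is computable, so such an f exists and is computable;
  % yet every computable LLM h_i disagrees with f on input s_i, i.e. it
  % hallucinates in the world whose truth is described by f.

In words: since any candidate LLM appears somewhere in the enumeration, no single computable model can match every computable ground truth, so some hallucination is unavoidable.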