Hallucination

What is Hallucination?

Hallucination is a phenomenon in which a large language model generates text that is factually incorrect, nonsensical, or disconnected from the provided source material, yet presents it with a high degree of confidence. Hallucinations occur because these models are probabilistic systems designed to predict the most likely next token, not to retrieve facts from a verified knowledge base. They can be caused by biases or gaps in training data, a lack of real-world grounding, or misinterpretation of the user's prompt.
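The next-token mechanism described above can be sketched with a toy example. The probabilities below are invented for illustration and do not come from any real model; the point is that greedy decoding selects the most probable continuation with no notion of factual truth.

```python
# Toy illustration (not a real model): invented next-token probabilities
# for the prompt "The capital of Australia is". A fluent but wrong
# continuation can simply be the most probable one.
next_token_probs = {
    "Sydney": 0.60,     # plausible-sounding, but factually wrong
    "Canberra": 0.35,   # the correct answer
    "Melbourne": 0.05,
}

prompt = "The capital of Australia is"

# Greedy decoding: pick the single highest-probability token.
choice = max(next_token_probs, key=next_token_probs.get)
print(prompt, choice)  # the model confidently emits "Sydney"
```

Because the model optimizes for likely text rather than true text, the confident wrong answer and a correct answer are produced by exactly the same mechanism.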

Where did the term "Hallucination" come from?

The term was borrowed from neuropsychology to describe the seemingly confident and articulate but factually incorrect outputs of AI systems. It gained widespread use with the rise of large language models (LLMs) and their deployment in public-facing applications, which made the issue more prominent.

How is "Hallucination" used today?

Hallucinations are a fundamental challenge in AI and a major barrier to the reliable deployment of LLMs in critical applications. Significant research focuses on mitigation techniques, including Retrieval-Augmented Generation (RAG), which grounds model outputs in retrieved, verifiable documents; improved prompting strategies; and training models to express calibrated uncertainty.
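The RAG idea mentioned above can be sketched in a few lines: retrieve passages relevant to the question and prepend them to the prompt, instructing the model to answer only from that context. This is a minimal illustrative sketch; the corpus, the naive keyword-overlap scoring, and the prompt template are all assumptions for demonstration, not any specific library's API.

```python
def retrieve(query, corpus, k=2):
    """Rank documents by naive keyword overlap with the query.
    Real systems typically use embedding similarity instead."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query, corpus):
    """Prepend retrieved passages and instruct the model to admit
    uncertainty rather than guess beyond the given context."""
    context = "\n".join(f"- {p}" for p in retrieve(query, corpus))
    return (
        "Answer using ONLY the context below. "
        "If the context is insufficient, say 'I don't know.'\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )

# Hypothetical in-memory corpus standing in for a document store.
corpus = [
    "The Eiffel Tower is 330 metres tall.",
    "Mount Everest is Earth's highest mountain above sea level.",
    "Python was created by Guido van Rossum.",
]

prompt = build_grounded_prompt("How tall is the Eiffel Tower?", corpus)
print(prompt)
```

Grounding works because the answer is constrained to retrieved, checkable text, and the explicit "I don't know" instruction gives the model a sanctioned alternative to inventing an answer.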

Related Terms