Hallucination
A hallucination is a confident-sounding output from a generative AI model that is not grounded in its training data or any provided source and is factually incorrect or fabricated, for example a citation to a court case or research paper that does not exist.
What is Hallucination?
Hallucinations arise because language models generate the most plausible continuation, not the most truthful one. Common causes include missing context, leading prompts, training-data gaps, and over-extrapolation. Mitigations include retrieval-augmented generation, grounding requirements, citation enforcement, schema-constrained outputs, retrieval-then-validate patterns, and human review on high-stakes paths. Hallucinations cannot be fully eliminated; they can be substantially reduced and made detectable.
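As a concrete illustration of the retrieval-then-validate pattern mentioned above, the sketch below checks a generated answer against the documents that were retrieved for it. It is a minimal sketch, not a specific product's API: the document IDs, the answer structure, and the simple substring check for quoted spans are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class RetrievedDoc:
    doc_id: str
    text: str

@dataclass
class GeneratedAnswer:
    text: str
    cited_doc_ids: list[str]   # citations the model was required to emit
    quoted_spans: list[str]    # verbatim snippets the answer claims to quote

def validate_grounding(answer: GeneratedAnswer, docs: list[RetrievedDoc]) -> list[str]:
    """Return a list of validation failures; an empty list means the answer passed."""
    failures = []
    doc_index = {d.doc_id: d.text for d in docs}

    # Every citation must point at a document that was actually retrieved.
    for doc_id in answer.cited_doc_ids:
        if doc_id not in doc_index:
            failures.append(f"cites unknown document: {doc_id}")

    # Every quoted span must appear verbatim in at least one retrieved document.
    for span in answer.quoted_spans:
        if not any(span in text for text in doc_index.values()):
            failures.append(f"quoted span not found in sources: {span[:60]!r}")

    return failures

# Hypothetical usage: route failures to a fallback rather than returning the answer.
docs = [RetrievedDoc("policy-12", "Refunds are issued within 14 days of a valid claim.")]
answer = GeneratedAnswer(
    text="Refunds are issued within 30 days. [policy-12]",
    cited_doc_ids=["policy-12"],
    quoted_spans=["Refunds are issued within 30 days"],
)
if validate_grounding(answer, docs):
    print("Answer failed grounding checks; escalate or regenerate.")
```

Checks like this do not make the model truthful; they make ungrounded answers detectable so that downstream policy can decide what to do with them.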
How does Hallucination apply to enterprise AI?
In regulated domains such as legal, healthcare, and financial advice, an unmitigated hallucination is a compliance event, not just a quality issue. Enterprise AI systems must be designed assuming the model will sometimes be wrong.
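One minimal way to encode that assumption is a routing gate on high-stakes paths, sketched below under stated assumptions: the domain tiers, the confidence threshold, and the three routes are illustrative choices, not fixed requirements.

```python
from enum import Enum

class Route(Enum):
    AUTO_SEND = "auto_send"        # release the answer without review
    HUMAN_REVIEW = "human_review"  # queue for a human before release
    BLOCK = "block"                # do not release at all

# Assumed tiering of regulated, high-stakes domains.
HIGH_STAKES_DOMAINS = {"legal", "healthcare", "financial_advice"}

def route_output(domain: str, grounding_failures: int, model_confidence: float) -> Route:
    """Decide whether a generated answer can be released without review."""
    # Any grounding failure blocks automatic release outright.
    if grounding_failures > 0:
        return Route.BLOCK
    # Regulated domains always get a human in the loop, regardless of confidence.
    if domain in HIGH_STAKES_DOMAINS:
        return Route.HUMAN_REVIEW
    # Elsewhere, low self-reported confidence still falls back to review.
    return Route.AUTO_SEND if model_confidence >= 0.8 else Route.HUMAN_REVIEW

print(route_output("healthcare", 0, 0.95))  # Route.HUMAN_REVIEW
```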
Related terms
- Large Language Model - A Large Language Model (LLM) is a foundation model trained on text to predict the next token, capable of generating, summarising, and reasoning over natural language.
- RAG (Retrieval-Augmented Generation) - Retrieval-Augmented Generation (RAG) is an architecture pattern that grounds a language model's output in retrieved source documents rather than relying on the model's parametric memory alone.
- Guardrails - Guardrails are runtime checks placed around an AI system to constrain inputs, outputs, and tool calls within safety, compliance, and business policy.