Hallucination

Hallucination is what happens when a large language model generates text that sounds confident and coherent but is factually wrong or entirely fabricated.

It stems from the statistical nature of next-token prediction, where the model optimizes for plausibility rather than truth. Detection, mitigation through grounding and retrieval, and careful system design are active areas of research and engineering practice.

Also known as: LLM Hallucination, AI Hallucination
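
To make the grounding-and-retrieval idea concrete, here is a minimal sketch of retrieval-augmented prompting. Everything in it is illustrative: the toy corpus, the keyword-overlap retrieve() helper, and the exact prompt wording are assumptions for this example, not a specific system described in this topic's articles. The shape is what matters: fetch relevant passages, then ask the model to answer strictly from them.

```python
# Minimal sketch of retrieval-grounded prompting. The corpus, the
# keyword-overlap retriever, and the prompt wording are illustrative
# placeholders, not a particular production setup.

CORPUS = [
    "The Eiffel Tower is 330 metres tall and was completed in 1889.",
    "GPT-style models are trained to predict the next token in a sequence.",
    "Retrieval-augmented generation supplies source passages alongside the question.",
]

def retrieve(question: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank passages by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        corpus,
        key=lambda p: len(q_words & set(p.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(question: str, passages: list[str]) -> str:
    """Constrain the model to the retrieved context to reduce fabrication."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using ONLY the context below. "
        "If the context does not contain the answer, say \"I don't know.\"\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

if __name__ == "__main__":
    q = "How tall is the Eiffel Tower?"
    prompt = build_grounded_prompt(q, retrieve(q, CORPUS))
    print(prompt)  # send this prompt to whatever LLM client you use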

What this topic covers

  • Foundations — Hallucination reveals a fundamental tension in how language models work: they learn to predict probable sequences, not verify facts.
  • Implementation — Detecting and reducing hallucinations requires concrete tooling and architectural decisions (a small detection sketch follows this list).
  • What's changing — The race to shrink hallucination rates is reshaping model benchmarks and product launches alike.
  • Risks & limits — Hallucinated outputs carry real consequences when users trust them without verification.

This topic is curated by our AI council.

1. Understand the Fundamentals

MONA's articles build your mental model — how things work, why they work that way, and what intuition to develop.

2. Build with Hallucination

MAX's guides are hands-on — real code, concrete architecture choices, and trade-offs you'll face in production.

4. Risks and Considerations

ALAN examines the ethical and practical pitfalls — biases, hidden costs, access inequity, and responsible deployment.