Hallucination
NLP & Language Models
Plausible but incorrect or fabricated model output.
Hallucination is a language model's tendency to produce false or fabricated information with the same fluency and apparent confidence as accurate output.
- Causes: Missing context, outdated training data, and the probabilistic nature of text generation (see the first sketch below).
- Risks: Misinformation, loss of trust, ethical and legal issues.
- Mitigation: Retrieval grounding, source citation, and fact-checking layers (see the second sketch below).
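A minimal sketch of why probabilistic generation contributes to hallucination: the model samples from a next-token distribution, and sampling occasionally picks a plausible but wrong continuation. The toy logits, token strings, and temperatures below are invented for illustration, not taken from any real model.

```python
# Illustrative only: sampling from a next-token distribution sometimes
# selects a plausible but incorrect token, especially at high temperature.
import math
import random

def sample_with_temperature(logits: dict[str, float], temperature: float) -> str:
    """Sample one token after temperature-scaling the logits (softmax)."""
    scaled = {tok: logit / temperature for tok, logit in logits.items()}
    total = sum(math.exp(v) for v in scaled.values())
    probs = {tok: math.exp(v) / total for tok, v in scaled.items()}
    return random.choices(list(probs), weights=list(probs.values()))[0]

# Hypothetical next-token logits after "The Eiffel Tower was completed in":
logits = {"1889": 2.0, "1890": 0.5, "1887": 0.3}  # "1889" is the correct year

for temp in (0.2, 1.5):
    picks = [sample_with_temperature(logits, temp) for _ in range(1000)]
    wrong = 1 - picks.count("1889") / 1000
    print(f"temperature={temp}: wrong year sampled {wrong:.0%} of the time")
```

At low temperature the correct, highest-probability token dominates; at higher temperature the incorrect years are sampled far more often, which is one mechanism behind confidently stated falsehoods.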
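A minimal sketch of retrieval grounding: fetch supporting passages first, then build a prompt that restricts the model to those sources and asks it to cite them. The in-memory corpus, the naive keyword-overlap scoring (standing in for a real retriever), and the `retrieve` and `grounded_prompt` helpers are all assumptions for illustration.

```python
# Illustrative retrieval grounding: answer only from retrieved sources,
# with citations, to reduce fabricated claims.

def retrieve(query: str, corpus: dict[str, str], k: int = 2) -> list[tuple[str, str]]:
    """Rank documents by naive keyword overlap with the query (toy retriever)."""
    terms = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda item: len(terms & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def grounded_prompt(query: str, passages: list[tuple[str, str]]) -> str:
    """Build a prompt that restricts the model to the retrieved sources."""
    sources = "\n".join(f"[{doc_id}] {text}" for doc_id, text in passages)
    return (
        "Answer using ONLY the sources below, and cite the source id for "
        "each claim. If the sources do not contain the answer, say so.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {query}\nAnswer:"
    )

corpus = {
    "doc1": "The Eiffel Tower was completed in 1889 for the Paris World's Fair.",
    "doc2": "Mount Everest is 8,849 meters tall as of the 2020 survey.",
}
query = "When was the Eiffel Tower completed?"
print(grounded_prompt(query, retrieve(query, corpus)))
```

The key design choice is the instruction to refuse when the sources are insufficient: combined with citation requirements, it gives downstream fact-checking layers something concrete to verify.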