Hallucinations happen because LLMs predict statistically likely text rather than retrieve facts. Without grounding (RAG, search), they make up plausible-sounding answers — wrong dates, fake citations, invented policies.
Types: factual hallucinations (wrong info), source hallucinations (fake citations), capability hallucinations (claiming abilities they don't have), and contextual hallucinations (extrapolating from unrelated context).
Mitigation: RAG (ground in real data), prompt engineering ('only answer from provided context'), explicit uncertainty signals ('say I don't know if uncertain'), citations to sources, and human review for high-stakes outputs.
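To make the prompt-engineering and citation points concrete, here is a minimal sketch of a grounded prompt. The retrieved passages, their format, and the downstream model call are all assumptions for illustration, not any particular library's API:

```python
# Minimal sketch of a grounded prompt: restrict answers to retrieved passages,
# ask for citations, and instruct the model to admit uncertainty.
# The retrieval step and the downstream model call are hypothetical placeholders.

def build_grounded_prompt(question: str, passages: list[dict]) -> str:
    # Number each passage so the model can cite it as [1], [2], ...
    context = "\n\n".join(
        f"[{i}] ({p['source']}) {p['text']}"
        for i, p in enumerate(passages, start=1)
    )
    return (
        "Answer using ONLY the context below. Cite passages by number, e.g. [1].\n"
        "If the context does not contain the answer, reply exactly: I don't know.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

if __name__ == "__main__":
    passages = [
        {"source": "refund-policy.md",
         "text": "Refunds are issued within 30 days of purchase."},
    ]
    prompt = build_grounded_prompt("What is the refund window?", passages)
    print(prompt)  # send this to your model of choice; the answer should cite [1]
```

Numbering passages and requiring citations also makes source hallucinations easier to catch: a reviewer can check each cited passage, and a citation to a number that was never provided is an immediate red flag.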