When AI generates confident but false information
AI hallucination occurs when a large language model generates plausible-sounding but factually incorrect or entirely fabricated information, delivered with apparent confidence. The behavior stems from how these models work: they predict statistically likely next words based on patterns in their training data rather than retrieving verified facts.
A hallucinating AI might confidently cite a research paper that doesn't exist, complete with a fake author, journal, and publication date.
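A minimal sketch can make the mechanism concrete. The snippet below is a toy illustration, not any real model: the prompt, the candidate tokens, and their probabilities are all invented. It shows that generation is just sampling from a next-token distribution, with no step that checks the output against a store of verified facts, which is how a confident but fabricated citation can emerge.

import random

# Hypothetical next-token distribution after the prompt
# "The 2019 paper on transformer interpretability was written by".
# The surnames and probabilities are invented for illustration only.
next_token_probs = {
    "Smith": 0.31,   # plausible surname; may not correspond to any real author
    "Chen": 0.27,
    "Kumar": 0.22,
    "Garcia": 0.20,
}

def sample_next_token(probs):
    """Pick a token in proportion to its probability; no fact lookup involved."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

# Each choice below is "likely text", but nothing verifies that the resulting
# citation actually exists.
print("The 2019 paper on transformer interpretability was written by",
      sample_next_token(next_token_probs))

Every continuation the sketch produces reads naturally, yet none of them is grounded in a real reference, which mirrors how a fluent model can assemble a nonexistent paper, author, and journal out of statistically plausible pieces.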