Document Type
Article
Publication Date
2025
Abstract
Over the last millennium, defamation law has adapted to many new information technologies, including the printing press, the telegraph, and the internet. Now, defamation law must adapt to the challenges presented by generative artificial intelligence, and specifically the propensity of Large Language Models to produce defamatory hallucinations. In this article, we unite the lessons of legal history with cutting-edge computer science research to develop a legal framework for addressing defamatory hallucinations produced by AI reasoning models. This article breaks new ground by recognizing both the inevitability and, in some instances, the desirability of AI hallucinations. We argue that defamation law must carve out “breathing space” for hallucinations, just as it treats certain human failings as “inevitable errors” to achieve sound communications policy. We further contend that LLM producers should be subject to a duty to warn users of the prevalence and risk of hallucinations, as well as a duty to retain search records for a limited time to assist plaintiffs in proving reputational harm. Once AI producers comply with these obligations, the common law should treat them as information distributors. Users who negligently spread hallucinated and defamatory falsehoods should be treated like incompetent or unscrupulous journalists passing along defamation from an unreliable source.
Recommended Citation
Lyrissa Lidsky & Andrew Daves, Inevitable Errors: Defamation by Hallucination in AI Reasoning Models, 6 J. Free Speech L. 477 (2025).