Document Type
Article
Publication Date
2023
Abstract
Careless speech has always existed on a very large scale. When people talk, they often give bad advice or wrong information, and occasionally this leads the listener to act in a way that causes physical harm. The scale was made more visible by the public Internet as the musings and conversations of billions of participants became accessible and searchable to all. This dynamic produced a set of tort and free speech principles that we have debated and adjusted to over the last three decades.
AI speech systems bring a new dynamic. Unlike the disaggregated production of misinformation in the Internet era, much of the production will be centralized and supplied by a small number of deep-pocketed, attractive defendants (namely, OpenAI, Microsoft, Google, and other producers of sophisticated conversational AI programs). When should these companies be held liable for negligent speech produced by their programs? And how should the existence of these programs affect liability as between other individuals?
This essay begins to work out the options that courts or legislatures will have. I will explore a few hypotheticals that are likely to arise frequently, and then plot out the analogies that courts may make to existing liability rules. The essay focuses on duty, that is, on whether a defendant owes a legally cognizable obligation of care under traditional tort principles. Historically, duty rules have accommodated and absorbed First Amendment principles when the alleged act of negligence is pure expression. I consider hypotheticals and likely judicial responses to them in three clusters: (A) cases where the AI gives misinformation leading the user to harm herself; (B) cases where the AI gives misinformation leading the user to harm a third party (via the user's conduct); and (C) cases where an individual does not consult AI at all, even though the program, had it been used, would have supplied information that could have averted physical harm.
In the end, I conclude that duty rules, if not modified for the AI context, could wind up missing the mark for optimal deterrence. They can be too broad, too narrow, or both at the same time, depending on how courts decide to draw their analogies.
Recommended Citation
Jane Bambauer, Negligent AI Speech: Some Thoughts about Duty, 3 J. Free Speech L. 343 (2023).
Included in
Computer Law Commons, Consumer Protection Law Commons, Internet Law Commons, Torts Commons