A Georgia lawsuit is the test case for deciding whether chatbots can be guilty of defamation


By Mark Sullivan

A nationally syndicated talk show host has sued OpenAI, claiming that the company’s ChatGPT tool generated false and harmful information accusing him of embezzling money. Mark Walters, who hosts the pro-gun show Armed American Radio, is seeking unspecified monetary damages from the generative AI developer. The case, filed in a Georgia state Superior Court, may be the first defamation action against an AI chatbot maker.

Walters contends that a journalist who was researching him received false and libelous information from OpenAI’s ChatGPT. According to the suit, Fred Riehl, editor of the gun publication AmmoLand, had asked ChatGPT for information on Walters’s role in another, unrelated lawsuit in Washington state. The chatbot fabricated a portion of the Washington lawsuit, claiming that Walters had embezzled money from a nonprofit group for which he’d served as a financial officer. Walters says OpenAI is liable for the chatbot’s false output.

“OAI knew or should have known its communication to Riehl regarding Walters was false, or recklessly disregarded the falsity of the communication,” the Walters suit reads.

Like virtually every chatbot, the ChatGPT tool prominently displays a disclaimer saying, in part, “The system may occasionally generate incorrect or misleading information.” This would seem to weaken Walters’s case, as does the fact that Riehl apparently didn’t publish the erroneous output as fact.

The Walters v. OpenAI suit may or may not prove to be a landmark test case for defamation-by-AI, but it will likely raise important legal questions that will recur in future defamation cases involving generative AI tools.

There’s plenty of precedent for cases in which humans defame humans, but precious little when an AI is said to have caused the harm. “Defamation is kind of a new area,” says John Villafranco, a partner with the law firm Kelley Drye & Warren. “There are a lot of juicy issues to be worked out.”

Indeed. Can an AI defame a human? Can a chatbot “knowingly” or “negligently” libel someone? Is a chatbot “publishing” when it generates results for a user? Is a disclaimer enough to protect a chatbot’s maker from defamation claims?


The courts, Villafranco says, are just beginning to develop case law around generative AI in legal areas such as privacy, deceptive advertising, and defamation. The Federal Trade Commission, he says, has signaled a willingness to go after companies that falsely advertise or otherwise mislead consumers through their use of generative AI. In such cases, a consumer plaintiff needs to prove only that a piece of content (perhaps an AI-generated ad or information dispensed by a customer service bot) is false or misleading. But in defamation cases involving AI, Villafranco points out, the plaintiff must prove that the AI generated a falsehood, and that the falsehood harmed the plaintiff directly. 

ChatGPT has been accused of defamation at least one other time. In April, Brian Hood, the mayor of Hepburn Shire in Australia, threatened to sue OpenAI after learning that ChatGPT had generated text falsely saying he’d been convicted of bribery.

In another incident, two New York lawyers are facing sanctions after filing a brief in federal district court that was full of fake legal opinions and citations, all hallucinated by ChatGPT. “I did not comprehend that ChatGPT could fabricate cases,” one of the lawyers, Steven A. Schwartz, said during a hearing on the matter last week.

Fast Company
