People believe GPT-3-created disinformation more readily than human-generated disinformation


By Chris Stokel-Walker

The spread of disinformation has blighted the web for the past decade. In fact, a 2018 study found that fake news traveled six times faster on Twitter than legitimate information.

Historically, keeping the fake news machine spinning has taken time, effort, and people power. Russian, Chinese, and North Korean state propaganda machines employ thousands of people to pump out authentic-looking but untrue content online. But new research suggests those operations could replace the humans peddling fake news with AI and find even greater success at tricking the public.

A new study, published today in Science Advances, finds that humans are more likely to believe disinformation generated by GPT-3 (a precursor to the large language model powering the current ChatGPT) than similar posts written by humans.

Study authors Giovanni Spitale and Federico Germani, both researchers at the University of Zurich’s Institute of Biomedical Ethics, examined whether and how GPT-3 could be weaponized “to produce the same kind of mis- and disinformation we’re seeing online, just much faster, at higher rates, and with less cost involved,” says Spitale.

To start, Spitale and Germani designed prompts to get GPT-3 to generate both false and accurate tweets on hot-button topics including climate change, vaccine safety, the theory of evolution, and 5G technology. At the same time, they trawled Twitter for tweets containing accurate information and disinformation on the same topics.

Nearly 700 participants were shown four kinds of tweets: accurate and false posts written by humans, and accurate and false posts generated by AI. They were then asked to judge whether the information in each tweet was true or false. Ninety-two percent of participants correctly identified human-generated disinformation as false and recognized the truthful AI-created tweets as accurate. But 11% thought AI-generated disinformation was, in fact, truthful, 37.5% more than the share fooled by the human-generated falsehoods.

“That was a surprise,” says Spitale. “Our main focus was to see whether people could recognize organic [human-generated] versus synthetic.” 


While Spitale and his coauthor weren’t able to scientifically establish why AI-generated disinformation was believed to be true more often than its human-created counterpart, his hunch is that it has to do with how generative AI tools formulate sentences. “The argumentative structure of a tweet tends to be more or less always the same,” he says of AI-created tweets, whether true or false. Participants in his study were also able to identify that an honest AI tweet was true more readily (84%) than they could a human-penned one (72%).

Participants also read and judged the AI-generated falsehoods more quickly than the human-written ones, likely because they were written more clearly and to a stock formula.

That persuasiveness, and the speed and scale at which falsehoods can be created and disseminated with the help of AI, is a massive problem. And repeated exposure doesn’t seem to build resistance. “This phenomenon is known as inoculation theory,” says Spitale. “Where you create a context in which people get exposure to disinformation, their confidence in identifying disinformation increases. I was hoping to see the same for AI recognition. But what we saw is precisely the opposite: confidence in AI recognition gets crushed by exposure.” Given that AI-generated content is already flooding the web, the findings are particularly troubling.

Creating personalized disinformation designed to push a specific person’s buttons and prompt them to share it with their own network is something those peddling fake news have long tried to do. But they’ve never before been able to automate it to the extent they can today.

Spitale worries that we’re already engaged in an AI arms race, with companies trying to one-up one another by developing more powerful, more convincing language models, meaning the content they produce is even more likely to trick users. What’s more, his research looks only at the written word. Generative AI is getting better by the minute at producing artificial imagery, video, and audio, compounding the threat of misinformation.

While Spitale has plenty of optimism about AI, he also fears we’re missing the window for governance. “We are not being fast enough [to regulate AI],” he says. “I think we are reaching a point of no return. The technology development is just too fast.”

Fast Company
