The internet loves ChatGPT, but there’s a dark side to the tech

By Chris Stokel-Walker

December 6, 2022

Amid the usual doom and gloom that surrounds the internet these days, the world experienced an all-too-rare moment of joy over the past week: the arrival of a new artificial intelligence chatbot, ChatGPT.

The AI-powered chat tool, which takes pretty much any prompt a user throws at it and produces what they ask for, whether code or text, was launched by AI development company OpenAI on November 30; by December 5, more than one million users had tested it out. The model comes hot on the heels of the other generative AI tools that have swept social media in recent months by turning text prompts into polished work, but its jack-of-all-trades ability makes it stand out from the crowd.

The chatbot is currently free to use, though OpenAI CEO Sam Altman expects that to change, and users have embraced the tech wholeheartedly. People have been using ChatGPT to run a virtual Linux machine, answer coding queries, develop business plans, write song lyrics, and even pen Shakespearean verse.
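
For readers curious what those coding queries look like in practice: ChatGPT itself launched as a free web app without a public API, but OpenAI's existing completions endpoint gives a rough flavor of scripting the same kind of request. What follows is a minimal sketch, assuming the pre-1.0 openai Python client and the then-current text-davinci-003 model (both are assumptions about the reader's setup, not something the article describes):

    # Sketch: sending a coding query to an OpenAI model, using the
    # pre-1.0 openai Python client available at the time. (ChatGPT
    # itself had no public API at launch; text-davinci-003 is its
    # closest programmable relative.)
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

    response = openai.Completion.create(
        model="text-davinci-003",
        prompt="Write a Python function that reverses a string.",
        max_tokens=200,
        temperature=0,  # deterministic output for a factual query
    )
    print(response["choices"][0]["text"].strip())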

Yet for all the brouhaha, there are some important caveats to note. The system may seem too good to be true, in part because at times it is. While some have professed that there's no need to learn to code because ChatGPT can do it for you, programmer Q&A site Stack Overflow has temporarily banned answers generated by the chatbot because of their poor quality. "The posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking or looking for correct answers," the site's moderators say.

It's also plagued by the same issue as many chatbots: It reflects society, with all of society's biases. Computational scientist Steven T. Piantadosi, who heads the computation and language lab at UC Berkeley, has highlighted in a Twitter thread a number of problems with ChatGPT, including results suggesting that "good scientists" are white or Asian men, and that African American men's lives should not be saved. Another query prompted ChatGPT to indulge the idea that people are more or less valuable depending on the size of their brains.

OpenAI did not respond to a request for comment for this story. Altman, responding to Piantadosi's Twitter thread documenting serious incidents of the chatbot promoting racist beliefs, asked the computational scientist to "please hit the thumbs down on these and help us improve!"

"With these kinds of chatbot models, if you search for certain toxic, offensive queries, you're likely to get toxic responses," says Yang Zhang, a faculty member at the CISPA Helmholtz Center for Information Security and coauthor of a September 2022 paper examining how chatbots (not including ChatGPT) turn nasty. "More importantly, if you search some innocent questions that aren't that toxic, there's still a chance that it will give a toxic response."

The same weakness nobbles every chatbot: The data used to generate its responses is sourced from the internet, and folks online are plenty hostile. Zhang says chatbot developers ought to stress-test their models against the worst-case scenarios they can imagine during development, then use those scenarios to design defense mechanisms that make the model safer. (A ChatGPT FAQ says: "We've made efforts to make the model refuse inappropriate requests.") "We should also make the public aware that such models have a potential risk factor," says Zhang.
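
One concrete defense mechanism of the kind Zhang describes already exists: OpenAI offers a free moderation endpoint that flags toxic text. As a rough sketch (again assuming the pre-1.0 openai Python client; the sample reply and fallback message are hypothetical), a developer might screen a model's output before showing it to users:

    # Sketch of a safety filter along the lines Zhang suggests:
    # screen text with OpenAI's moderation endpoint before display.
    # Assumes the pre-1.0 openai Python client.
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

    def is_flagged(text: str) -> bool:
        """Return True if the moderation model flags the text as unsafe."""
        result = openai.Moderation.create(input=text)
        return result["results"][0]["flagged"]

    reply = "...model output goes here..."  # hypothetical chatbot reply
    if is_flagged(reply):
        reply = "Sorry, I can't help with that."
    print(reply)

The same check can be run on the user's prompt before it ever reaches the model, which is roughly the shape of the worst-case testing Zhang advocates.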

The issue is that people often get caught up in amazement at the prowess of the models' output. ChatGPT appears to be streets ahead of its competitors, and some are already saying it spells the death not just of Google's chat models but of the search engine itself, so accurate are its answers to some questions.

How the model has been trained is another conundrum, says Catalina Goanta, associate professor in private law and technology at Utrecht University. "Because of the very big computational power of these models, and the fact that they rely on all of this data that we cannot map, of course a lot of ethical questions arise," she says. The challenge is acknowledging the benefits that come from such powerful AI-powered chatbots while also putting sensible guardrails on their development.

That's difficult to think about in the first flourish of social media hype, but it's important to do so. "I think we need to do more research to understand what are the case studies where it should be fair game to use such very large language models, as is the case with ChatGPT," says Goanta, "and then where we have certain types of industries or situations where it should be forbidden to have that."