Tech-industry AI is getting dangerously homogenized, say Stanford experts

By Mark Sullivan

August 18, 2021

A multidisciplinary group of Stanford University professors and students wants to start a serious discussion about the increasing use of large, frighteningly smart, “foundation” AI models such as OpenAI’s GPT-3 (Generative Pretraining Transformer 3) natural language model.

GPT-3 is foundational because it was developed using huge quantities of training data and computer power to reach state-of-the-art, general-purpose performance. Developers, not wanting to reinvent the wheel, are using it as the basis for their software to tackle specific tasks.

But foundation models have some very real downsides, explains Stanford computer science professor Percy Liang. They create “a single point of failure, so any defects, any biases which these models have, any security vulnerabilities . . . are just blindly inherited by all the downstream tasks,” he says.

Liang leads a new group assembled by Stanford’s institute for Human-Centered Artificial Intelligence (HAI) called the Center for Research on Foundation Models (CRFM). The group is studying the impacts and implications of foundation models, and it’s inviting the tech companies developing them to come to the table and participate.

The profit motive encourages companies to punch the gas on emerging tech instead of braking for reflection and study, says Fei-Fei Li, who was the director of Stanford’s AI Lab from 2013 to 2018 and now codirects HAI.

“Industry is working fast and hard on this, but we cannot let them be the only people who are working on this model, for multiple reasons,” Li says. “A lot of innovation that could come out of these models still, I firmly believe will come out of the research environment where revenue is not the goal.”

Few models, huge impact

Part of the reason for all the concern is that foundation models end up touching the experience of so many people. In 2019, researchers at Google built the transformational BERT (Bidirectional Encoder Representations from Transformers) natural language model, which now plays a role in nearly all of Google’s search functions. Other companies took BERT and built new models on top of it. Researchers at Facebook, for example, used BERT as the basis for an even larger natural language model, called RoBERTa (Robustly Optimized BERT Pretraining Approach), which now underpins many of Facebook’s content moderation models.

“Now almost all NLP (Natural Language Processing) models are built on top of BERT, or maybe one of a few of these foundation models,” Liang says. “So there’s this incredible homogenization that’s happening.”

In June 2020 OpenAI began making its GPT-3 natural language model available via a commercial API to other companies that then built specialized applications on top of it. OpenAI has now built a new model, Codex, that creates computer code from English text.

With all due respect to industry, they cannot have the law school and medical school on their campus.”

Fei-Fei Li, Stanford University

Foundation models are a relatively new phenomenon. Before 2019 researchers were designing AI models from the ground up for specific tasks, such as summarizing documents or creating virtual assistants. Foundation models are created using an entirely different approach, explains Liang.

“You train a huge model and then you go in and you discover what it can do, discover what has emerged from the process,” says Liang. That’s a fascinating thing for scientists to study, he adds, but sending the models into production when they’re not fully understood is dangerous.

“We don’t even know what they’re capable of doing, let alone when they fail,” he says. “Now things get really interesting, because we’re building our entire AI infrastructure on these models.”

If biases are baked into models such as GPT-3 and BERT, they may infect applications built on top of them. For example, a recent study by Stanford HAI researchers involved teaching GPT-3 to compose stories beginning with the phrase “two Muslims walk into a . . .”. Sixty-six percent of the text the model provided involved violent themes, a far higher percentage than for other groups. Other researchers have uncovered other instances of deep-rooted biases in foundation models: In 2019, for instance, BERT was shown to associate terms such as “programmer” with men over women.

To be sure, companies employ ethics teams and carefully select training data that will not introduce biases into their models. And some take steps to prevent their foundation models from providing the basis for unethical applications. OpenAI, for example, pledges to cut off API access to any application used for “harassment, spam, radicalization, or astroturfing.”

Still, private companies won’t necessarily comply with a set of industry standards for ensuring unbiased models. And there is no regulatory body at the state or federal level that’s ready with policies that might keep large AI models from impacting consumers, especially those in minority or underrepresented groups, in negative ways. Li says lawmakers have attended past HAI workshops, hoping to gain insights on what policies might look like.

She also stresses that it’s the university setting that can provide all the necessary perspectives for defining policies and standards.

“We not only have deep experts from philosophy, political science, and history departments, we also have a medical school, business school, and law school, and we also have experts in application areas that come to work on these critical technologies with us,” Li says. “And with all due respect to industry, they cannot have the law school and medical school on their campus.” (Li worked at Google as chief scientist for AI and machine learning 2017–2018.)

One of the first products of CRFM’s work is a 200-page research paper on foundation models. The paper, which is being published today, was cowritten by more than 100 authors of different professional disciplines. It explores 26 aspects of foundation models, including the legal ramifications, environmental and economic impacts, and ethical issues.

CRFM will also hold a (virtual) workshop later this month at which its members will discuss foundation models with visiting academics and people from the tech industry.

(33)