Will Congress miss its chance to regulate generative AI early?


By Mark Sullivan

As lawmakers work to understand generative AI, some of the more tech-focused among them fear a repeat of Congress’s flat-footed response to the last big tech wave, social media.

Beginning in the Bush years and continuing through the Obama administration, tech companies kept Washington largely at bay with promises to “self-regulate” on key issues such as privacy protection, child safety, disinformation, and data portability. 

Many in Washington now believe that an effective regulatory regime must be put in place at the beginning of new technology waves to push tech companies to build products with consumer protections built in, not bolted on.

A number of bills targeting specific applications of AI—such as facial recognition used by law enforcement or by employers making hiring decisions—have been introduced in Congress, but lawmakers have yet to consider a wide-ranging bill offering consumer protections and development standards for AI products and systems. And Congress seems far from introducing a set of guardrails that might protect consumers and content owners from harms arising from generative AI such as OpenAI’s ChatGPT.

In January, Congressman Ted Lieu (D-Calif.) introduced a short resolution (written by ChatGPT, by the way) directing the House of Representatives to open a broad study of generative AI technology “in order to ensure that the development and deployment of AI is done in a way that is safe, ethical, and respects the rights and privacy of all Americans . . .”

Last October, the White House Office of Science and Technology Policy released its “Blueprint for an AI Bill of Rights,” a white paper offering a framework that lawmakers might use as a basis for regulations around the design, development, governance, and deployment of AI systems. Ultimately, such guidelines could protect end users from harms related to bias in AI systems or training data, privacy abuses, and unethical uses of the technology.

Versions of Oregon Democratic Senator Ron Wyden’s Algorithmic Accountability Act of 2022 were introduced in the Senate and the House (by New York Democrat Yvette Clarke) last February but did not advance past committee. The bill would have required tech companies to file “impact assessments of automated decision systems and augmented critical decision processes” with the Federal Trade Commission. The bill may be reintroduced this year.


Many lawmakers (and their staffs) feel that another major tech wave may be rising, and that they must again climb a steep learning curve to understand the technology and its implications, according to several conversations with lawmakers and aides in the capital last week. And AI isn’t just another consumer-facing technology: Understanding the workings of the neural network inside “the black box” is a challenge even for many within the tech industry. Lieu points out that he is one of only three members of Congress with a computer science background.

Senator Mark Warner (D-Va.), himself a veteran of the mobile tech industry, is concerned about AI from both a consumer protection and a national security perspective. The senator says he’s been somewhat skeptical of companies and projects that wear the “AI” label, suspecting they may be using the buzzy term only to attract attention or funding. But, he says, he’s become more convinced that AI, and especially generative AI, could be transformational.

“I did get a chance to visit [Alphabet’s] DeepMind in London, and I’ve now had two sit-downs with [OpenAI CEO] Sam Altman, and I have met and engaged with Alex [Alexandr Wang, CEO of Scale AI],” Warner tells Fast Company. “I’ve gone from skeptic to ‘Holy crap! This is much more than advanced statistics.’”  


Warner shares a belief with some in Silicon Valley AI circles that generative AI may be the next big wave after social media. If social media relied on a person’s friends and family to customize and personalize the content a user gets from the internet, the thinking goes, then future apps might use generative AI to create endless amounts of customized, personalized content out of whole cloth. Future generative AI apps may generate any kind of multimedia experience (chatbots that sound like your best friend, create-your-own-plot movies, custom games, etc.) a user can think to ask for, and some they could never think of.

If that’s true, it would represent a major upheaval in the tech industry. It could also expose consumers to a whole spectrum of new dangers, including privacy and safety risks. Content owners have begun to claim harms from AI companies using their content to train models without permission, Dow Jones being the most recent example. Lawmakers may not be able to afford to trust a new wave of generative AI companies to regulate themselves as in the past.

“You know that we cannot repeat the ‘OK, move fast and break things, and then we’ll come back and figure it out after the fact,’” Warner says. “That would be, I think, a disaster.”
