Can existing laws regulate AI? The federal government and experts say yes


By Wilfred Chan

A new joint pledge by U.S. federal agencies to enforce existing laws against cutting-edge AI tools is winning praise from tech law experts, who say it's an effective rebuttal to industry arguments for self-regulation.

The statement released earlier this week by the Federal Trade Commission, Department of Justice, Consumer Financial Protection Bureau, and Equal Employment Opportunity Commission emphasizes that the agencies aren’t going to give artificial intelligence companies special treatment just because their products are new. 

“AI tools can turbocharge fraud and automate discrimination, and we won’t hesitate to use the full scope of our legal authorities to protect Americans from these threats,” said FTC Chair Lina Khan in an accompanying release. “Claims of innovation must not be cover for lawbreaking. There is no AI exemption to the laws on the books.”

A number of pro-regulation experts say it’s a signal that agencies won’t wait around to take action.

“There’s a narrative out there that because technology is moving too fast, it can’t possibly be regulated, because how could regulation keep up?” says Emily M. Bender, a University of Washington linguistics professor and expert on large language models. “And I’m very heartened to see our regulatory agencies say no, what they’re regulating is the activities of corporations, and those are still subject to regulations even if they’re automated.”

Fights over AI are heating up across the country. Multiple lawsuits have been filed against the developers of AI image generators like Stable Diffusion as well as AI coding tools like GitHub’s Copilot, alleging they violated copyrights when scraping user-generated data. Meanwhile, a growing number of states are advancing legislation to control the use of automated decision tools in areas like hiring, lending, and criminal justice. 

Some tech industry leaders claim AI is too complex for current laws because even the tools’ creators don’t fully understand what they’re making. An open letter last month cosigned by Elon Musk, Steve Wozniak, and other tech figures proposed a six-month pause on training any AI system stronger than GPT-4, citing their unknown risks. The letter also called for “new and capable regulatory authorities dedicated to AI” and “robust public funding for technical AI safety research.”

Elizabeth Renieris, an international data privacy lawyer, reads the agencies' statement as an implicit retort to the "pause" letter. "The companies really want us to focus on the mechanics of how this tech is working before we can think about the laws that should apply," she says. But regulators are essentially responding that "[how it works] doesn't actually matter, because the impact on people is the same." She adds: "I thought it was a very strong statement."

Some experts also praised the agencies for rejecting the tech industry's "innovation" defense. Suresh Venkatasubramanian, a Brown University computer science professor who coauthored the White House's recent Blueprint for an AI Bill of Rights, sees the statement as a warning to tech companies that assume they can get away with deploying risky products before they're fully tested.

“This isn’t something that’s intrinsic to tech. This is purely a function of the success of tech advocacy from the private sector, saying ‘Leave us alone, we want to innovate,’” Venkatasubramanian says. “But now we’ve seen the consequences, and it’s time to shape up.”


Renieris, who contributed to the first draft of the EU’s groundbreaking General Data Protection Regulation, believes the U.S. agencies’ willingness to use established legal frameworks to regulate AI could “potentially be more effective” than the road taken by European regulators, who have lately focused on crafting legislation targeted at specific AI tools. Overly precise legislation could end up becoming an “exercise in futility” as technology quickly evolves, she says. 

But other regulation advocates caution that AI-specific rules are still needed. Alexandra Reeve Givens, president of the nonprofit Center for Democracy and Technology, says even highly motivated agencies will struggle to regulate AI if companies aren’t required to disclose their use of it. 

“Right now a job applicant doesn’t even know if their résumé is being screened by an AI tool, let alone if that tool was somehow designed in a way that violates our federal anti-discrimination laws,” Givens explains, citing one potential issue. 

“This is where I think Congress really does need to act, by mandating notice when someone is the subject of an AI-driven decision, and then coming up with meaningful standards for high-risk systems to be audited,” she adds. “We have to make sure they’re going through that responsible AI process before putting tools on the market.”

Fast Company
