Hitting the Books: Why AI needs regulation and how we can do it

Retraining workers for the high-tech jobs of tomorrow will be imperative, argues author Tom Kemp.

The burgeoning AI industry has barreled clean past the “move fast” portion of its development, right into the part where we “break things,” like society! Since the release of ChatGPT last November, generative AI systems have taken the digital world by storm, finding use in everything from machine coding and industrial applications to game design and virtual entertainment. They have also quickly been adopted for illicit purposes like scaling spam email operations and creating deepfakes.

That’s one technological genie we’re never getting back in its bottle, so we’d better get to work regulating it, argues Silicon Valley–based author, entrepreneur, investor, and policy advisor Tom Kemp in his new book, Containing Big Tech: How to Protect Our Civil Rights, Economy, and Democracy. In the excerpt below, Kemp explains what form that regulation might take and what its enforcement would mean for consumers.

Excerpt from Containing Big Tech: How to Protect Our Civil Rights, Economy, and Democracy (Fast Company Press, August 22, 2023), by Tom Kemp.


Road map to contain AI

In the Greek myth, Pandora brought powerful gifts but also unleashed mighty plagues and evils. Likewise with AI: we need to harness its benefits while keeping the potential harms it can cause to humans inside the proverbial Pandora’s box.

When Dr. Timnit Gebru, founder of the Distributed AI Research Institute (DAIR), was asked by the New York Times how to confront AI bias, she answered in part: “We need to have principles and standards, and governing bodies, and people voting on things and algorithms being checked, something similar to the FDA [Food and Drug Administration]. So, for me, it’s not as simple as creating a more diverse data set, and things are fixed.”

She’s right. First and foremost, we need regulation. AI is a new game, and it needs rules and referees. She suggested we need an FDA equivalent for AI. In effect, both the AAA (Algorithmic Accountability Act) and the ADPPA (American Data Privacy and Protection Act) call for the FTC to act in that role: where drug makers submit applications to the FDA for approval, Big Tech and others would submit impact assessments for their AI systems to the FTC. These assessments would apply to AI systems in high-impact areas such as housing, employment, and credit, helping us better address digital redlining. Thus, these bills foster needed accountability and transparency for consumers.

In the fall of 2022, the Biden Administration’s Office of Science and Technology Policy (OSTP) even proposed a “Blueprint for an AI Bill of Rights.” Protections include the right to “know that an automated system is being used and understand how and why it contributes to outcomes that impact you.” This is a great idea that could be incorporated into the rulemaking responsibilities the FTC would have if the AAA or ADPPA passed. The point is that AI should not be a complete black box to consumers, and consumers should have rights to know and object, much like the rights they should have over the collection and processing of their personal data. Furthermore, consumers should have a right of private action if AI-based systems harm them. And websites with a significant amount of AI-generated text and images should have the equivalent of a food nutrition label that tells us which content is AI-generated and which is human-created.

We also need AI certifications. For instance, just as the finance industry has certified public accountants (CPAs) and certified financial audits and statements, we should have the equivalent for AI. And we need codes of conduct for the use of AI as well as industry standards. For example, the International Organization for Standardization (ISO) publishes standards that organizations can adhere to for quality management, cybersecurity, food safety, and so on. Fortunately, a working group within ISO has begun developing a new standard for AI risk management. And in another positive development, the National Institute of Standards and Technology (NIST) released its initial framework for AI risk management in January 2023.

We must remind companies to have more diverse and inclusive design teams building AI. As Olga Russakovsky, assistant professor in the Department of Computer Science at Princeton University, said: “There are a lot of opportunities to diversify this pool [of people building AI systems], and as diversity grows, the AI systems themselves will become less biased.”

As regulators and lawmakers delve into antitrust issues concerning Big Tech firms, AI should not be overlooked. To paraphrase Wayne Gretzky, regulators need to skate where the puck is going, not where it has been. AI is where the puck is going in technology. Therefore, acquisitions of AI companies by Big Tech should be more closely scrutinized. In addition, the government should consider mandating open intellectual property for AI. For example, this could be modeled on the 1956 federal consent decree with Bell, which required the company to license all its patents royalty-free to other businesses. This led to incredible innovations such as the transistor, the solar cell, and the laser. It is not healthy for our economy to have the future of technology concentrated in a few firms’ hands.

Finally, we as a society and an economy need to better prepare for the impact of AI in displacing workers through automation. Yes, we need to prepare our citizens with better education and training for new jobs in an AI world. But we need to be smart about this: we can’t simply say “let’s retrain everyone to be software developers,” because only some people have that skill or interest. Note also that AI is increasingly being built to automate the development of software itself, so even deciding which software skills should be taught in an AI world is critical. As economist Joseph E. Stiglitz pointed out, we have had problems managing smaller-scale changes in tech and globalization that have led to polarization and a weakening of our democracy, and AI’s changes are more profound. Thus, we must prepare ourselves for that and ensure that AI is a net positive for society.

Given that Big Tech is leading the charge on AI, ensuring its effects are positive should start with them. AI is incredibly powerful, and Big Tech is “all-in” on AI, but AI is fraught with risks if bias is introduced or if it’s built to exploit. And as I documented, Big Tech has had issues with its use of AI. This means that not only is the depth and breadth of Big Tech’s collection of our sensitive data a threat, but so is how it uses AI to process this data and to make automated decisions.

Thus, in the same way we need to contain digital surveillance, we must also ensure Big Tech is not opening Pandora’s box with AI.
