We need a public forum on AI, not a closed-door meeting in D.C.

By Matt Calkins

I’m disappointed in this week’s gathering of AI luminaries—and not just because I’d hoped to see Musk and Zuckerberg’s next meeting on pay-per-view. We need a nationwide dialogue about the implications and limits of AI, but the private gathering in Senator Chuck Schumer’s office isn’t the way to start.

Heavily represented were big tech and big labor. Left out was everybody else: the small businesses, the content creators, and the innovators. AI policy shouldn’t be decided by big players and imposed on the rest of us.

Having the wrong people in the room means having the wrong topics on the agenda. Big tech will prefer to discuss restraints on competition (in the name of safety), immunity from libel suits, and the freedom to ingest publicly accessible data without limitation or disclosure.

These large companies know they’re vulnerable to AI competition from smaller firms. (Google’s memo entitled “We have no moat” explains it perfectly.) Let’s be sure they don’t win through regulation what they can’t earn in the open marketplace.

Big tech likes to talk about restricting AI proliferation—ostensibly for our protection. But who is protecting Australian Mayor Brian Hood or radio host Mark Walters, both of whom AI falsely identified as criminals? Or Getty Images, whose library was so thoroughly digested that its watermark started appearing on AI-generated images? Protecting regular people and businesses should be the first priority of AI regulation.

In regulating AI, we must decide whether to treat it more like a person or more like an algorithm. More-human regulation would treat AI as a creative entity, placing fewer restrictions on data input but more on the correctness of output. AI manufacturers would assume responsibility for things like libelous statements and autonomous-car errors. Less-human regulation would treat AI as an algorithm, just a fancy way of translating input data into output. In that case, the input is everything, and content creators deserve full recompense for the value created with their data. AI would be barred from using personal information and required to disclose its training datasets (as Europe will require).

Between the two models, I favor the latter. AI isn’t built to ensure truth and accuracy; it still needs people for that. When AI says something, it’s based on the statistical likelihood that those are the words that will come next. We need to understand that though we ask for truth, AI gives us probability. (It’s even built to deviate slightly from its own assessment of what’s most likely, in order to sound more lifelike!)
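To make that concrete, here is a minimal Python sketch of the kind of “temperature” sampling mainstream language models use. The prompt and the probability numbers are invented for illustration; the point is only that the model draws its next word by weighted chance rather than always picking its single best guess.

```python
import random

# Hypothetical next-token probabilities a model might assign after the
# prompt "The first man on the moon was" (numbers invented for illustration).
next_token_probs = {"Neil": 0.90, "Buzz": 0.07, "probably": 0.03}

def sample_next_token(probs: dict[str, float], temperature: float = 0.8) -> str:
    """Draw the next token by weighted chance instead of always taking the top one.

    Raising each probability to the power 1/temperature sharpens the
    distribution (temperature < 1) or flattens it (temperature > 1), but
    either way the result is a random draw: even a 90%-likely token is
    sometimes passed over, which is the "deviation" described above.
    """
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs.keys()), weights=weights, k=1)[0]

# Most draws return "Neil", but not all of them.
print([sample_next_token(next_token_probs) for _ in range(10)])
```

Run it a few times and the likeliest word usually wins, but not always; turn the temperature up and the deviations get more frequent. Truthfulness is not part of the calculation.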

Whether you’d rather treat AI as a creator liable for its own outputs or as an algorithm remixing its input data, this is a debate we need to have. When we do, let’s hear from entrepreneurs, artists, musicians, and writers. The next time we hold a forum on AI, let’s do it in public, and let’s talk about the issues that matter to all of us.


Matt Calkins is the founder and CEO of Appian.

Fast Company