AI could bring “nightmare scenarios,” warns Amnesty International

By Steven Melendez

June 13, 2018

If companies working on artificial intelligence don’t take steps to safeguard human rights, “nightmare scenarios” could unfold, warns Rasha Abdul Rahim, an arms control and artificial intelligence researcher at Amnesty International, in a blog post. Those scenarios could involve armed, autonomous systems selecting military targets with little human oversight, or discrimination caused by biased algorithms.

Rahim pointed to recent reports of Google’s involvement in the Pentagon’s Project Maven, which harnesses AI image recognition technology to rapidly process photos taken by drones. Following high-profile employee dissent over the project, Google recently unveiled new AI ethics policies and said it won’t continue the work once its current contract expires next year. (In her blog post, Rahim calls for Google to end its involvement with Maven immediately.)

It’s unclear which other tech companies are still involved in the project and in what capacity, as many have declined to comment.

For Amnesty International, one concern is that potentially deadly AI systems will operate on the battlefield with limited human supervision.

“Compliance with the laws of war requires human judgement – the ability to analyze the intentions behind actions and make complex decisions about the proportionality or necessity of an attack,” Rahim writes. “Machines and algorithms cannot recreate these human skills, and nor can they negotiate, produce empathy, or respond to unpredictable situations. In light of these risks, Amnesty International and its partners in the Campaign to Stop Killer Robots are calling for a total ban on the development, deployment, and use of fully autonomous weapon systems.”

Advocates have previously said lethal autonomous weapons should be subject to international bans similar to those restricting chemical weapons and landmines.

Last month, Amnesty International and the digital rights group Access Now also circulated what they call the Toronto Declaration, proposing a set of principles to protect people against discrimination from biased AI systems. Among them:

• designing and developing AI systems to avoid discrimination and putting “effective remedies” in place in case discrimination does arise

• including a diverse set of experts in developing AI systems to ensure they adhere to non-discrimination principles

• updating existing antidiscrimination laws and regulations to make sure they address risks from AI

• maximizing transparency around government use of AI, including ensuring systems are auditable and publicly documented

“We’re now calling on all tech companies to endorse the Toronto Declaration, and to affirm their commitment to respecting human rights when developing AI,” Rahim writes.
