Tech Giants Team Up To Devise An Ethics Of Artificial Intelligence

The Terminator isn’t arriving anytime soon, but concern is growing that artificial intelligence is already so pervasive in society—and getting more so all the time—that there needs to be more focus on how it’s being used and potentially misused (even if by accident). Aside from futuristic killer robots, there are already real dangers ranging from faulty autonomous cars to algorithms used in hiring or recruiting that have an inadvertent bias against women or ethnic groups. The giants of artificial intelligence, especially as it affects consumers and businesses, have just joined together to form a nonprofit called the Partnership on AI, with founding members Amazon, DeepMind/Google, Facebook, IBM, and Microsoft.

It’s the latest effort to keep a collective eye on how AI is developed and used. OpenAI, founded in December 2015, has a similar goal of conducting research and holding conferences to promote the responsible use of AI. It’s co-chaired by Elon Musk and Y Combinator president Sam Altman, with support from a long list of tech luminaries. Across the pond, a European Union committee report in May promoted creating an agency and rules for the legal and ethical uses of robots.

“This partnership will provide consumer and industrial users of cognitive systems a vital voice in the advancement of the defining technology of this century,” reads part of a statement from IBM’s AI ethics researcher, Francesca Rossi.

The Partnership on AI announcement lays out an ambitious agenda for research to be conducted or funded by members, in partnership with academics, user group advocates, and industry experts. Topics on the research agenda include ethics, fairness, inclusivity, transparency, privacy, and interoperability. A recent white paper from IBM called “Learning to Trust Artificial Intelligence Systems” provides some hints as to what the Partnership on AI might be tracking. Authored by Guruduth Banavar, IBM’s chief science officer for cognitive computing, it expands the familiar concept of garbage-in/garbage-out to include garbage in between.

Training a system on bad data, whether inaccurate or biased, will lead to garbage. A small but notable example was Microsoft’s millennial-personality chatbot Tay. It learned from conversations it had with the public, and trolls quickly taught it to spout racist comments. (Microsoft quickly took Tay offline.) Subtler but more serious examples could include a hiring system that makes decisions based on the attributes of the most successful current employees and, lo and behold, recommends hiring only white men.
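To see how that failure mode arises, here is a minimal sketch using synthetic data and hypothetical feature names (nothing here comes from IBM’s paper): a model trained on skewed historical hiring decisions learns to reward group membership itself, even when candidates are equally qualified.

```python
# Sketch: how biased historical hiring data yields a biased model.
# Synthetic data and feature names are hypothetical, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(0, 1, n)      # genuinely job-relevant feature
group = rng.integers(0, 2, n)    # protected attribute (0 or 1)

# Historical labels: past hiring favored group 1 regardless of skill.
hired = (skill + 1.5 * group + rng.normal(0, 0.5, n)) > 1.0

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Two equally skilled candidates who differ only in group membership:
candidates = np.array([[1.0, 0], [1.0, 1]])
print(model.predict_proba(candidates)[:, 1])  # group-1 candidate scores far higher
```

Run as-is, the two candidates differ only in the protected attribute, yet the model scores them very differently: the bias in the labels has been laundered into the model’s weights.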

Then there’s the garbage-in-between concern, which Banavar tackles under a term he coins: “algorithmic responsibility.” Even good data can become garbage depending on how algorithms learn from and process it. Banavar recommends clear explanations of what’s going on inside the black box, so that people, even non-data-scientists, can audit the process.
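The white paper doesn’t spell out a mechanism, but a minimal sketch of what such an audit can look like, again with hypothetical synthetic data mirroring the hiring example above, is to surface the weights a model actually learned:

```python
# Sketch: auditing what a model learned, not just what it predicts.
# Synthetic data and feature names are hypothetical, mirroring the sketch above.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
skill = rng.normal(0, 1, 5000)      # job-relevant feature
group = rng.integers(0, 2, 5000)    # protected attribute
hired = (skill + 1.5 * group + rng.normal(0, 0.5, 5000)) > 1.0

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# For a linear model, the black box opens easily: each coefficient says how
# hard a feature pushes the hiring score. Opaque models need heavier tooling
# (surrogate models, permutation importance), but the goal is the same.
for name, weight in zip(["skill", "group"], model.coef_[0]):
    print(f"{name}: {weight:+.2f}")
# A large positive weight on "group" is a red flag: the model has encoded
# historical bias rather than a job-relevant signal.
```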

Aside from a full robot uprising, there’s growing concern that AI will take over simply by stealing our jobs, whether on a factory floor, behind the wheel of a taxi, or at a desk doing paperwork. Law bots, for example, are already taking over the duties of paralegals, poring through documents in preliminary case research.

The Partnership on AI is set to grow quickly, recruiting a board with an equal split between corporate and non-corporate members, possibly academics or advocates. The announcement also says it will work with other organizations such as the Association for the Advancement of Artificial Intelligence (AAAI), as well as nonprofit research groups including the Allen Institute for Artificial Intelligence. More announcements will come “in the near future,” it says.
