The real reason Amazon, Apple, and Google are some of the world’s biggest companies

By Ben Shneiderman

Artificial intelligence researchers and engineers have spent a lot of effort trying to build machines that look like humans and operate largely independently. Those tempting dreams have distracted many of them from where the real progress is already happening: in systems that enhance, rather than replace, human capabilities. To accelerate the shift to new ways of thinking, AI designers and developers could take some lessons from the missteps of past researchers.

For example, alchemists, Isaac Newton among them, pursued ambitious goals such as converting lead to gold, creating a panacea to cure all diseases, and finding potions for immortality. While these goals are alluring, the charlatans pursuing them may have secured princely financial backing that would have been better spent developing modern chemistry.


Astrologers looked to models of the heavens for signs about the future.

[Image: Giovanni Francesco Barbieri/Wiki Commons]

Equally optimistically, astrologers believed they could understand human personality based on birthdates and predict future events by studying the positions of the stars and planets. These promises over the past thousand years often received kingly endorsement, possibly slowing the work of those who were adopting scientific methods that eventually led to astronomy.

As alchemy and astrology evolved, their practitioners became more deliberate and organized, in what might now be called a more scientific way, about their studies. That shift eventually led to important findings in chemistry, such as those by Antoine-Laurent Lavoisier and Joseph Priestley in the 18th century. In astronomy, Johannes Kepler and Newton himself made significant discoveries in the 17th and 18th centuries. A similar turning point is coming for artificial intelligence. Bold innovators are putting aside tempting but impractical dreams of anthropomorphic designs and excessive autonomy, focusing instead on systems that restore, rely upon, and expand human control and responsibility.

Updating early AI dreams

In the 1950s, artificial intelligence researchers pursued big goals such as human-level computational intelligence and machine consciousness. Even during the past 20 years, some researchers worked toward the “singularity” fantasy of machines that are superior to humans in every way. These dreams succeeded in attracting attention from sympathetic journalists and financial backing from government and industry. But to me, those aspirations still seem like counterproductive wishful thinking and B-level science fiction.

Even the dream of creating a human-shaped robot that acts like a person has persisted for more than 50 years. Honda's near-life-size Asimo and the web-based news reader Ananova got a lot of media attention. Hanson Robotics's Sophia even received Saudi Arabian citizenship. But they have little commercial future.

By contrast, down-to-earth, user-centered designs for information search, e-commerce sites, social media, and smartphone apps have been wild successes. There is good reason that Amazon, Apple, Facebook, Google, and Microsoft are some of the world's biggest companies: they all use more functional, if less glamorous, types of AI.

Today’s cell phones feature speech recognition, face recognition, and automated translation, which all use artificial intelligence technologies. These functions increase human control and give users more options, without the deception and theatrics of a humanoid robot.

Investigators examine wreckage from Lion Air Flight 610 after its crash in the Java Sea in October 2018.
[Photo: Azwar Ipank/AFP/Getty Images]

Yielding control

Efforts that pursue advanced forms of computer autonomy are also dangerous. When developers assume their machines will function correctly, they often shortchange interfaces that would allow human users to quickly take control when something goes wrong.

The National Transportation Safety Board report on the deadly May 2016 Tesla Autopilot crash called for automated systems to keep detailed records that would allow investigators to analyze failures. Those insights would lead to safer and more effective designs. These problems can be deadly. In the October 2018 crash of Lion Air's Boeing 737 Max, a sensor failure caused a newly designed automated flight-control system to push the plane's nose downward. The pilots couldn't figure out how to override those automatic controls to keep the plane in the air. Similar problems have been factors in stock market "flash crashes," like the 2010 event in which roughly $1 trillion in market value vanished within 36 minutes. And poorly designed medical devices have delivered deadly doses of medications.

Successful automation often doesn't look like a person; it just helps people do what they need to do.
[Photo: Flickr user Justice Ender]

Getting to human-centered solutions

Successful automation is all around: Navigation applications give drivers control by showing times for alternative routes. E-commerce websites show shoppers options, customer reviews, and clear pricing so they can find and order the goods they need. Elevators, clothes-washing machines, and airline check-in kiosks, too, have meaningful controls that enable users to accomplish what they need quickly and reliably. When modern cameras assist photographers in taking properly focused and exposed photos, users have a sense of mastery and accomplishment for composing the image, even as they get assistance with optimizing technical details.

Without being human-like or fully independent, these and thousands of other applications enable users to accomplish their tasks with self-confidence and sometimes even pride.

A new report from a leading engineering industry professional group urges technologists to ignore tempting fantasies. Rather, the report suggests, developers should focus on technologies that support human performance and are more immediately useful.

In a flourishing automation-enhanced world, clear, convenient interfaces could let humans control automation to make the most of people's initiative, creativity, and responsibility. The most successful machines could be powerful tools that let users carry out ever-richer tasks with confidence, such as helping architects find innovative ways to design energy-efficient buildings, and giving journalists tools to dig deeper into data to detect fraud and corruption. Other machines could detect, not contribute to, problems like unsafe medical conditions and bias in mortgage loan approvals. Perhaps they could even advise the people responsible on ways to fix things.

Humans are accomplished at building tools that expand their creativity–and then at using those tools in even more innovative ways than their designers intended. In my view, it’s time to let more people be more creative more of the time, by shifting away from the alchemy and astrology phase of AI research.

Technology designers who appreciate and amplify the key aspects of humanity are most likely to invent the next generation of powerful tools. These designers will shift from trying to replace or simulate human behavior in machines to building wildly successful applications that people love to use.


Ben Shneiderman is professor of computer science at the University of Maryland. This article is republished from The Conversation under a Creative Commons license. Read the original article.
