Robots Won’t Try To Kill Us, Says Stanford’s 100-Year Study Of AI

For many people, talking about artificial intelligence and its implications for the future of humanity inspires the conversational equivalent of that internet argument about the dress being blue and black or white and gold. Some people will see abundant possibility; others, the period at the end of humanity’s story, as they conflate AI with killer robots and super-intelligent machines that will come to regard us as pets—or worse.

A Stanford University-hosted project is under way to look past all that—past the pop-culture takes on AI, the warnings from tech thinkers, and the breathless hype about assistive AI tools in our phones and other devices. The project was set up to take the long view of AI—a very, very long view.

Its formal name is the One Hundred Year Study on Artificial Intelligence. An ongoing, endowed project, it calls for a standing committee of scientists to regularly commission reports that take expansive looks at how AI will touch different aspects of daily life.

The first of those reports, the 28,000-word “Artificial Intelligence and Life in 2030,” has just been released. It’s the result of a yearlong dive into the likely effects that AI advancements will have on a typical North American city a little more than a decade from now.

“The portrayal of artificial intelligence in the movies and in literature is fictional,” says Peter Stone, a computer scientist at the University of Texas at Austin who was the lead author on the 2030 report. “It’s a misconception of people . . . that AI is one thing. We also found that the general public is either very positively disposed to AI and excited about it, sometimes in a way that’s unrealistic, or scared of it and saying it’s going to destroy us, but also in a way that’s unrealistic.”

As part of their analysis, Stone and his coauthors drill down into several aspects of future urban life where they say AI is either already upending the status quo or has the potential to do so. And while they avoid being prescriptive, preferring to provide a jumping-off point for scientists, the public, lawmakers, and industry, the future they describe is one in which AI is pervasive and wields significant influence.

In sectors ranging from transportation to health care, education, and the workplace, the study presents AI as something a bit like the modern smartphone: it hasn’t literally taken over your life, but at the same time most people can’t imagine functioning without one.

Says the report on transportation: “Transportation is likely to be one of the first domains in which the general public will be asked to trust the reliability and safety of an AI system for a critical task. Autonomous transportation will soon be commonplace and, as most people’s first experience with physically embodied AI systems, will strongly influence the public’s perception of AI.”

In health care, the study argues that the current health care delivery system “remains structurally ill-suited” for rapidly deploying high-tech advances and AI capabilities. Looking ahead another 15 years, though, it foresees a time when sufficiently advanced AI systems “coupled with sufficient data and well-targeted systems” take over some types of computational tasks from physicians.

AI will also make it faster to extract insights from population-level data and make more personalized diagnoses and treatments possible. “Looking ahead, many tasks that appear in health care will be amenable to augmentation but will not be fully automated. For example, robots may be able to deliver goods to the right room in a hospital, but then require a person to pick them up and place them in their final location.”

Policing and public safety is another area where the study finds that potential abounds, though it’s fraught with complexity. Among the pros: AI could help policing become more targeted. As AI improves in areas such as image processing and facial recognition, cameras will better help with crime prevention and prosecution “through greater accuracy of event classification” and in the processing of video to ferret out anomalies. AI can also help law enforcement with social network analysis.

“Law enforcement agencies are increasingly interested in trying to detect plans for disruptive events from social media, and also to monitor activity at large gatherings of people to analyze security,” the study argues. “There is significant work on crowd simulations to determine how crowds can be controlled. At the same time, legitimate concerns have been raised about the potential for law enforcement agencies to overreach and use such tools to violate people’s privacy.”

And when it comes to employment and the workplace, the study sees AI as replacing tasks rather than jobs, while also helping to create new kinds of jobs.

The authors conclude by saying they’ve found no cause for concern that AI poses an imminent threat to humanity. No machines with self-sustaining long-term goals and intent have been developed, they write, nor are they likely to be in the near future.

The 2030 report comes at a time when other institutions and corporations are dedicating financial resources and the attention of top researchers and scientists to similar studies into AI’s influence on our future.

The University of Cambridge, for example, has opened a new research center to study artificial intelligence. A few weeks after the release of the Stanford report, five tech companies—Amazon, IBM, Microsoft, Google, and Facebook—collectively announced their launch of a nonprofit called The Partnership on Artificial Intelligence to Benefit People and Society.

“Preparing for the Future of Artificial Intelligence,” a report on AI the White House published in October, argues that there’s a clear role for the government to play, a conclusion the Stanford team also reaches in the 2030 report.

“One of our recommendations is to ensure that there are people at all levels of government that have expertise in artificial intelligence,” Stone says. “So that if and when there are policy decisions that either give a green light to some technology in a particular sector or limit it in some way, that it’s people who have a realistic view of what’s possible and not possible who are helping make those decisions.

“By either educating people in the policy decisions or trying to get people with AI expertise newly into those positions, we think that’ll maximize the chances of the correct decisions being made from a legal and policy perspective.”

When the One Hundred Year Study leadership convenes a study panel again in a few years, it will have several items on the agenda. One task will be to assess the state of AI at that point and its progression since the first report. The studies will be knitted together into a continuum of understanding: a body of thought and research about the field that also tries to take the popular consciousness somewhere the movies and dark visions of the future don’t.

On that last point, Stone’s answer is “no” when asked whether we should be scared of robots getting smart enough to destroy us, or of some other nefarious byproduct of AI.

“Just because we have a car that can drive itself, that doesn’t mean we’ll also have a robot that can fold your laundry or do something else useful for you,” Stone says. “Those tasks each require sustained research effort, and . . . it’s not that we’re better at solving one, so we automatically become better at solving others. That’s the leap people mistakenly make, and it’s why they become scared when there’s a breakthrough in one area. They say, ‘Oh, well. All of a sudden robots will now be able to do a lot of things we don’t want them to do and they’ll be able to do them spontaneously.’

“Any technology has upsides and potential downsides and can be used by people in evil ways,” he continues. “On balance, I’m highly optimistic that artificial intelligence technologies are going to improve the world.”

 

Fast Company
