The AI Guru Behind Amazon, Uber, and Unity Explains What AI Really Is

By Sean Captain

If you’ve ever gotten product recommendations on Amazon, you’ve seen Danny Lange’s handiwork. The same goes for Uber’s AI that books you a ride. The Danish computer scientist helped build the machine learning platforms that both companies use throughout their operations, from the engineering to the marketing departments. Lange has just done the same at video game platform maker Unity, with the goal of evolving robo-characters into more complex and nuanced playing companions than a human could program.

Lange doesn’t shy away from the oft-hyped term “artificial intelligence”—provided the machines really do learn how to respond to users’ needs. But he’s skeptical of prospects for so-called artificial general intelligence, or AGI—the Westworld-style vision of a synthetic consciousness. Lange has equally strong views on what does not count as intelligent, such as Alexa and Siri, which follow scripts written by humans rather than thinking for themselves. Lange should know: He led the design of GM’s OnStar, the first widespread computerized assistant, way back in the late 1990s.

Fast Company spoke with Lange about nuances between real and phony AI, misunderstandings in pop culture, and the prospect of a robot uprising. He also described emerging technologies such as adversarial networks—a battle of wits between AIs that forces each to get smarter. What follows are highlights from a longer conversation.

Fast Company: Can you define artificial intelligence? Is it even definable?

Danny Lange: To me, there are two key aspects. One is external, and one is internal. So the external one is really in the perception. Does the system seem to be very reasonable? Does it almost seem like there’s a human hiding behind the system, interacting with me and making me feel comfortable?

That doesn’t have to be voice. It can also just be going to an Amazon web page and shopping around. But I get this sense that the system knows a lot about what I want and helps me get the thing that I want.

The other side is the internal thing. This is where I think a disruption is under way. And that is the move . . . to systems that learn instead of being programmed. And because they learn from the data, they are able to capture much more nuanced patterns in our data than any programmer can ever do. When that comes together, I feel that we are crossing a line, and we start dealing with something that is truly AI.

FC: So, truly AI. Are we talking something like general intelligence?

DL: No. I think general intelligence is more of a philosophical discussion . . . I don’t know exactly what self-awareness and consciousness are . . . I don’t think the system is really reasoning to that extent. But it is still able to learn from the interaction and improve over time as more and more interactions take place.

FC: Is the term “AI” being used too broadly? I know some people don’t like, for example, using the term “artificial intelligence” to refer to machine learning.

DL: I think that the term has become more of a broad, almost marketing-driven term. And I’m probably okay with that. What matters is what people think of when they hear this. They think of systems that give people—the customer or the owner of the robot or whatever—the sense that this thing does have some kind of intelligence in its behavior, and it has the learning capability. I [can’t] think of an AI system that doesn’t have machine learning at its core.

FC: So what about a system that reads CT scans or an MRI, looking for a tumor? Is that AI?

DL: If it [learned from examples] hand-curated or hand-labeled by doctors, so doctors basically interpret it, that’s definitely not AI. It’s using machine learning technology, but they are missing the point by inserting human expertise into the loop. Because now we’re sort of back to human programming of the system . . . AI would have been giving the computer treatment data and results, [allowing it to] start developing an ability to do the diagnosis, propose some suggestions for treatment, measure the output of the treatment, and constantly adjust and learn.

FC: What are the other buzzword concepts that we should be thinking about beyond AI/ML?

DL: An adversarial network is key. So for instance, I may build a machine learning system that detects a fake product review or detects fake news. But I could also have a machine learning system that learns to generate a fake product review or fake news . . . As one of them gets better at detecting fake news, the opponent gets better at generating fake news, because it learns from the feedback loop.
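
To make that feedback loop concrete, here is a minimal sketch of two models training against each other. It is written in PyTorch on toy one-dimensional data; the framework, the data, and all names are illustrative assumptions, since Lange describes the idea only in outline. The generator stands in for the fake-review writer and the discriminator for the detector, and each one’s progress becomes the other’s training signal.

```python
# Minimal adversarial training loop: a generator learns to produce
# samples that a discriminator learns to flag as fake. The toy 1-D
# "real" distribution stands in for genuine reviews (an assumption).
import torch
import torch.nn as nn

def real_data(n):
    # Stand-in for genuine examples: samples from a fixed distribution.
    return torch.randn(n, 1) * 0.5 + 2.0

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # "faker"
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # "detector"
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # 1) Train the detector: real samples are labeled 1, fakes 0.
    real = real_data(64)
    fake = G(torch.randn(64, 8)).detach()  # freeze G during D's update
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the faker: try to make the detector answer "real" (1).
    fake = G(torch.randn(64, 8))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

As the detector gets better at separating real from fake, the generator’s training signal pushes it toward samples the detector can no longer tell apart, which is the escalation Lange describes.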

FC: When I mention to friends that I’m writing something about AI, they often make a joke about computers taking over the world and killing us. Is that a legitimate fear?

DL: Maybe five, 10 years ago, I often used this scary but realistic scenario. [First] you have a drone [that] a machine learning system has learned to fly on its own. And nowadays we do have those. Secondly, you equip that drone with a high-definition camera, and you put computer vision with face-recognition software in there. It will recognize “bad people”—people you don’t like, people in places they are not supposed to be. Then thirdly, you equip that thing with the ability to eliminate those people.

Is it feasible? Yep. So, that’s not really a distant future. You could do that today . . .

FC: I understand that a machine could kill people. But will a machine want to kill people? That seems to go back to that philosophical notion of consciousness.

DL: From a strict technical perspective, we always look for the rewards function that drives the machine . . . The rewards function in an Amazon system is, get the customer to click the purchase button. At Netflix, it’s get the customer to click on one of our TV shows. What is the rewards function of a drone? Find the bad guys and eliminate them . . . It’s really what you define as the end goal of the system [that matters].
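
Lange’s “rewards function” can be made concrete with a minimal sketch: a toy epsilon-greedy bandit agent, where the setting, the number of actions, and the hypothetical purchase_reward are all illustrative assumptions rather than anything Lange specified. The learning code never changes; only the reward function does, and the agent converges on whatever that function favors.

```python
# A toy epsilon-greedy bandit: the agent's entire "goal" lives in the
# reward function it is given. purchase_reward is a hypothetical
# stand-in for "get the customer to click the purchase button."
import random

def purchase_reward(action):
    # Hypothetical: action 2 leads to a purchase 60% of the time.
    return 1.0 if action == 2 and random.random() < 0.6 else 0.0

def train(reward_fn, n_actions=4, steps=5000, eps=0.1):
    value = [0.0] * n_actions  # running estimate of each action's reward
    count = [0] * n_actions
    for _ in range(steps):
        # Explore at random occasionally; otherwise exploit the best action.
        if random.random() < eps:
            a = random.randrange(n_actions)
        else:
            a = max(range(n_actions), key=lambda i: value[i])
        r = reward_fn(a)
        count[a] += 1
        value[a] += (r - value[a]) / count[a]  # incremental average
    return value

print(train(purchase_reward))  # the highest estimate lands on action 2
```

Swap purchase_reward for any other function and the same loop optimizes the new goal, which is exactly the point of the next exchange: a poorly specified reward yields unintended behavior.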

FC: So if you don’t define it properly, you can have some unintended consequences?

DL: Yeah.

FC: I know a lot of people started freaking out when the two Facebook bots began to speak to each other in their own invented language. Is that as scary as some people thought it was?

DL: It’s not scary. It’s just that you have two learning systems. We have to get used to this. For years and years, for decades, moms and dads told their kids that computers can only do what they’re programmed to do. And they were wrong, because now the computers can learn. They can now change that behavior. In the case of communication between computers, if the rewards function is to optimize the computers’ ability to communicate with each other, they will probably change the language over time to optimize the communication—using fewer letters, using better confirmation on [whether they] agree or disagree, things like that.

FC: Are there things you hear people say—whether it’s the general public or marketing people—that make you cringe?

DL: I think a lot about the voice systems like Siri and Alexa. They’re more like branded, hardwired systems, built to give you a safe voice interaction with their corporate owners . . . All Siri’s jokes are written by creative writers in Cupertino—nothing that Siri learned.

You’re aware of this famous example where Microsoft launched a chatbot [named Tay] that actually learned from human interaction, yeah? It got pretty nasty. If you are a major brand, like Apple, Google, or Amazon, you can’t have that. So that’s why these systems are highly branded experiences, which apparently people really like, and that’s fine. But they are not AI.

FC: Anything else that you see as common misunderstandings?

DL: We are often focused on the risks and the problems, but there are also pretty impressive things in the area of using, say, computer vision to equip systems with the ability to see things. I saw an example of a system on a tractor that would look for weeds in a field and basically, with an adjustable nozzle, spray Roundup on the weeds only and not on your vegetables. So there are a lot of these technologies that can make our world greener, more sustainable. And sometimes there’s an overly strong focus on how these things are going to make our lives harder going forward.

FC: Anything else you think we need to know about AI?

DL: The key message is, you have a learning system, and that’s the disruption . . . Your computer can do more than it’s told to do because it gets the data and it learns from it, and the loop makes it improve endlessly.
