The future of transportation is turning out to be a public health hazard

By Ruth Reader

There are more ways to get around urban areas than ever before. We have on-demand rides from Uber and Lyft. Shared bikes and scooters have become so pervasive that they are piling up in lakes. This new wave of transportation technology was supposed to make the world easier to navigate and, above all, safer. That may still prove true in the end. But in the murky middle, as futuristic cars, e-bikes, and scooters make their first forays onto public roads, travel may temporarily become more perilous, not less.

The future of transportation is starting to look like a public health hazard. A study from the University of California and Stanford University surveyed two hospital emergency departments in California over a one-year period and found 249 scooter-related injuries. Uber keeps having to build new safety tools in response to sexual assaults and to "fake Ubers," people fraudulently posing as Uber drivers to harm riders. Self-driving cars have also seen their image crash over the past year, thanks to heavily publicized accidents and one fatality, and expectations for their ultimate arrival have been pushed back indefinitely.

[Photo: Mika Baumeister/Unsplash]

But even certain features you might get in a standard new car are proving unreliable. A new study released by AAA says that pedestrian-avoidance technology, one of several new driver-assistance tools that include automatic emergency braking, lane keeping, adaptive cruise control, limited autonomous driving, and collision warning, isn't working when it's needed most, according to the Wall Street Journal: it tends to sputter in the dark.

Typically, that's fine, because there's a driver behind the wheel, or so the logic goes. But assistance features introduce a new problem: drivers become overly reliant on them and tune out while driving.

No company has felt this pain as acutely as Tesla, whose cars come with self-driving-lite technology called Autopilot. The company's cars have been involved in several widely publicized crashes. CEO Elon Musk has said that all of the incidents were the fault of inattentive drivers. That may be true, but it doesn't capture the complexity of the situation. In 2018, a Tesla crashed into a fire truck parked on the highway. The vehicle alerted the driver just 0.49 seconds before impact and failed to engage its emergency braking. At highway speed, a car covers roughly 45 feet in that time, while typical driver reaction times run well over a second. Is 0.49 seconds enough time for a person to respond to a collision they didn't see coming? And what happened to those emergency brakes?

There was once a belief that cars that drive themselves would be our caped saviors. In 2015, there was reason to believe that by 2050, self-driving cars would cut deaths from car accidents by 90%, conceivably saving tens of thousands of lives every year. Some researchers were so certain of the ultimate good of self-driving machines that one of the few safety concerns raised was the Trolley Problem: whether an autonomous car facing an unavoidable collision would choose to save the most people possible or its own passengers.

“There will be situations where a car knows that it’s about to crash and will be planning how it crashes,” Andrew Moore, Carnegie Mellon’s computer science dean, said in a conversation with Adrienne LaFrance three years ago. “There will be incredible scrutiny on the engineers who wrote the code to deal with the crash. Was it trying to save its occupant? Was it trying to save someone else?”

But so far, the initial accidents in the self-driving world have had nothing to do with trying to save anyone. They were simply system failures. In one case, one of Uber's self-driving cars just didn't see a pedestrian crossing a two-lane street at night. Of course, humans are part of the problem too. A look at California's log of self-driving accidents shows that people like to rear-end driverless cars; even scooter riders have run into the back of them. We just can't seem to get along. But short of pulling every human-driven car off the road, self-driving car companies are going to have to account for, well, humans.

Early self-driving pilots have been worrisome enough that carmakers have pushed back their estimates of when autonomous vehicles will be available. Speaking at the Detroit Economic Club in April, Ford CEO Jim Hackett said, referring to the industry at large, "We overestimated the arrival of autonomous vehicles."

The gradual rollout of self-driving cars may very well increase car crashes at first. A prescient 2015 paper from the University of Michigan anticipated this: "During the transition period when conventional and self-driving vehicles would share the road, safety might actually worsen, at least for the conventional vehicles." It also suggested that self-driving cars won't necessarily be better drivers than "an experienced, middle-aged driver."

That's not to say we shouldn't try to build great technology that ultimately makes the world safer. There are promising self-driving shuttle pilots on university campuses, on limited low-speed routes, and in certain cities like Washington, D.C., Las Vegas, and Detroit (though even these have had their issues). Experimentation can help find the right implementation for each of these technologies, whether they're scooters or self-driving buses.

But should humans suffer in the name of progress? And how much? Those questions could be answered with legislation guiding how new transportation modes make it onto streets. Otherwise, companies big and small will keep doing what they do: launch, scale, iterate, and change the way we live without accounting for the consequences.

Fast Company
