The Reporter (Lansdale, PA)

What happened to the promise of self-driving cars?

By Will Kaufman, Edmunds

Tesla recently made headlines with the beta launch of its Full Self-Driving system. That system comes with a disclaimer saying, “It may do the wrong thing at the worst time, so you must always keep your hands on the wheel and pay extra attention to the road.”

Tesla’s system has impressive capabilities, but it’s definitely not hands-free driving. A few years ago, news stories suggested that autonomous vehicles were just a few years away.

Well, it’s been a few years and autonomous vehicles are, alas, still in the future. Right now, there is no car on sale that can drive itself without requiring the driver to pay attention to the road and be prepared to take control of the vehicle. In fact, some automakers have slowed down their timelines.

Here are three reasons why you can’t buy a self-driving car today and one place you’re likely to find them first.

We’ve yet to define how safe is safe enough

It’s difficult to teach a machine to react correctly when faced with the new or unpredictable situations we frequently encounter while driving. Heaps of engineering effort have gone into cracking this problem. But how do we determine when a vehicle is safe enough?

In order to be 95% certain that autonomous vehicles match the safety of human drivers, the cars would need to log 275 million failure-free autonomous miles, according to a report from the Rand Corp. And to prove that autonomous vehicles are even just 10% or 20% safer than humans, the requirement jumps to billions of miles. Since 2009, autonomous tech company Waymo’s vehicles have driven a little more than 20 million miles.
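To see where a number like 275 million comes from, here’s a rough back-of-the-envelope sketch (an illustration using the statistical “rule of three,” not necessarily Rand’s published derivation), starting from the U.S. average of roughly 1.09 traffic fatalities per 100 million miles:

```python
# Back-of-the-envelope sketch (illustrative; not necessarily Rand's
# published method). U.S. drivers average roughly 1.09 traffic
# fatalities per 100 million miles. By the statistical "rule of three,"
# observing zero failures over n trials bounds the true failure rate
# below about 3/n with 95% confidence.

human_rate = 1.09 / 100_000_000   # fatalities per mile, approximate

# Failure-free miles needed to be 95% confident an autonomous fleet
# is no more dangerous than human drivers:
miles_needed = 3 / human_rate

print(f"{miles_needed / 1e6:.0f} million miles")  # about 275 million miles
```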

Either manufacturers must spend potentially decades testing small fleets, or the public will wind up taking part in the testing process. The latter is only really acceptable if infrastructure is in place to ensure the safety of drivers and pedestrians.

City and vehicle infrastructure isn’t in place

Expecting individual autonomous vehicles to operate independently is a recipe for disaster. Each vehicle would have to guess what all the others are doing. Each would rely only on its own limited view of the world, with sensors and cameras that can fail or be obstructed by poor weather or road debris.

Enabling vehicles to communicate with one another reduces the possibility of unpleasant surprises and allows vehicles to make communal decisions to maintain speed and safety. Some cars already have the capability to perform such communication, but there are no rules in place to guarantee cars from different manufacturers will be able to communicate with one another.

Infrastructure specific to autonomous vehicles, such as smart traffic lights and camera systems, could alert vehicles about pedestrians, cyclists and dangerous road conditions and help prevent accidents. Unfortunately, it has yet to be determined who would pay for the necessary infrastructure upgrades and whether Americans would be willing to accept more surveillance on their roads.

It’s unclear who is liable when an accident happens

As long as self-driving features require the driver to be ready to take control, the driver will remain liable for any accidents. Car manufacturers are only liable if there’s a fault in their vehicle. But what happens if a fully autonomous passenger car causes an accident? Is the manufacturer liable because it designed the system that’s at fault?

Some states are trying to address the question. Florida passed a law saying that the person who initiates a trip in an autonomous vehicle is considered the operator; while the law doesn’t explicitly establish liability, it lays a foundation for how liability may be addressed. But the process is piecemeal.
