Safety lesson from Autopilot tragedy
Human driver must remain central to design until systems are good enough to need no human input
Almost as soon as news broke of a fatal crash involving Tesla’s Autopilot last year, fans and detractors of the electric-car manufacturer were certain of the tragedy’s causes.
Tesla’s supporters and investors never doubted that the system improved safety; the driver, they reasoned, must have failed to heed Tesla’s warnings to remain attentive.
Detractors and short investors are all but certain that Autopilot somehow failed to protect the car’s driver.
After more than a year of debate, a conclusive answer is finally at hand, courtesy of a National Transportation Safety Board (NTSB) investigation whose final results were presented this week.
But the board’s findings aren’t likely to leave either side happy: rather than blaming man or machine alone, it found that both the human driver and the Autopilot system – specifically, the complex relationship between the two – contributed to the deadly event.
At the heart of the matter is a dangerous dynamic: With billions at stake in the frantic race to develop self-driving car technology, there are huge incentives for carmakers to create the impression that vehicles for sale today are “autonomous”.
But as the NTSB made clear, no vehicle now on the market is capable of safe autonomous driving. When consumers take hi-tech hype at face value, a lethal gap between perception and reality can open.
Tesla reaped months of laudatory coverage and billions of dollars in market value by presenting its Autopilot system as more autonomous than other advanced driver assistance systems, even as it warned owners that they must remain attentive and in control at all times. Though Autopilot did outperform rival driver assistance systems, the key to its success was how few limitations Tesla put on its use.
Because Autopilot allows owners to drive hands-free almost anywhere, even on roads where Tesla warns such use is not safe, the company has been able to profit from the perception that its system is more autonomous than its rivals’.
But Autopilot was designed for use on well-marked, protected highways with no chance of cross-traffic. So when the tractor-trailer turned across Florida’s Highway 27 last May and the Tesla slammed directly into it without triggering any safety systems, Autopilot was working exactly as designed.
The problem was that it was being used on a road with conditions it wasn’t designed to cope with, and the driver had apparently been lulled into complacency. Far from failing, Autopilot was actually so good that it led the driver to believe it was more capable than it really was.
This complex failure, which both man and machine contributed to, sounds an important warning about autonomous-drive technology: until the systems are so good they need no human input, the human driver must remain at the centre of “semi-autonomous” drive system design.
Engineers must assume that if there’s a way for people to misuse these systems, they will. Just as important, companies need to understand that if they over-promote a semi-autonomous drive system’s capabilities in hopes of pulling ahead in the race to autonomy, they run the risk of making the technology less safe than an unassisted human driver.
There’s a lesson to be learnt here from aviation. As computers and sensors improved in the 1980s, aircraft manufacturers began to automate more and more of the controls simply because they could. Only later did the industry realise that adding automation for the sake of automation made aircraft less safe, so they re-oriented autopilot development around the principle of “human-centric” automation. Only when automation is deployed in ways that are designed to improve pilot performance does safety actually improve.
If anything, this dynamic will be more pronounced with cars, which are used in much higher numbers than planes by people with much less training. But unlike aircraft companies, which join forces to improve safety across the industry, automakers and tech start-ups are in intense competition for the real or perceived lead in the race to autonomy. As long as consumers care more about the futuristic cool factor of hands-free operation than using technology to become safer drivers, the potential for a dangerous gap between the perception and reality of autonomous-drive technology remains. And what a shame it would be if this technology, which has the potential to someday save tens of thousands of lives every year, made cars less safe in the short term.
• Niedermeyer, an auto-industry analyst, is the co-founder of Daily Kanban and former editor of the blog The Truth About Cars.
For more columns from Bloomberg View, visit http://www.bloomberg.com/view.