LAST ROADBLOCK FOR SELF-DRIVING CARS
The liability issue for autonomous vehicles has yet to be fully resolved
Lost in all the tangential information surrounding the fatal accident in May involving an autopiloted Tesla (was the driver, Joshua Brown, watching a Harry Potter movie? How many speeding tickets had he received in the past six years? Eight. And did the fact that he rode a motorcycle mean he was inherently a risk taker?) is the single most salient fact in this first fatality involving a (semi) autonomous vehicle: By Tesla’s own admission, Brown had engaged his car’s Autopilot self-driving mode, and the brakes of his Model S were not applied.
And this opened a Pandora’s Box that could threaten the very future of self-driving cars.
It’s a conundrum that had avoided the blare of scandal — until Frank Baressi turned his tractor-trailer left in front of Brown’s “Autopilot”-ing Model S. And it’s a question that could, in a worst-case scenario, put two monolithic industries — large insurance companies and giant automakers — in a high-stakes game of product-liability “chicken.”
Who, when all is said and done, is responsible when a self-driving car gets in an accident?
First, it’s important to understand the basics of collision culpability. According to the Insurance Institute of Canada’s Automated Vehicles: Implications for the Insurance Industry in Canada, for as long as the automobile has been around, responsibility for its safe conduct has rested squarely on the driver.
“Driver error,” says the first line of the report, “is responsible for most collisions.”
That’s why automobile insurance coverage has been, until now, completely straightforward: We, the humans behind the wheel, were at fault; therefore, we paid.
“Current insurance coverages and practices, however, are simply not designed for a world where human drivers are replaced by vehicles that can drive themselves,” says Paul Kovacs, the report’s author, noting that the “foundations for the personal automobile insurance industry in Canada — that driver errors contribute to most collisions and personal ownership of vehicles is the norm — may disappear.”
In plain English, if you’re not driving, you can’t be held at fault if the car you’re in does something untoward.
In even simpler English, automakers will have to accept responsibility if one of their self-driving wonders veers off its computer-directed course.
Automakers, long wary of product liability lawsuits, want no part of this.
In fact, almost all automakers — Volvo being the current exception — are counting on the exact opposite, hoping that governments will eventually pass some sort of legislation protecting them against liability. That’s why they state — as Tesla has done in the wake of Brown’s death — that “Autopilot is an assist feature that requires you to keep your hands on the steering wheel at all times,” and that “you need to maintain control and responsibility for your vehicle” while using it. The automakers’ contention is simple: If you’re behind the wheel, you’re in “care and control” of the car, absolving them of any responsibility for the damage/injury/havoc their self-driving technology might wreak.
If I am reading Kovacs’ Implications right, the auto insurance industry is saying hogwash. Indeed, 40 or 50 years hence, he contends, there will be absolutely no question of who’s responsible in an accident such as Brown’s. There will simply be no drivers. Maybe not even, if Google has its way, any steering wheels. Car accidents, therefore, will be the responsibility of the automakers, and the automakers alone. Indeed, one of the solutions Kovacs puts forward is treating the problem as a product-liability issue, much as the aviation industry does, with Tesla et al. having to buy “blanket” insurance indemnifying them against anything untoward that might happen to, or as a result of, their cars.
Kovacs admits the current commingling of self-driving and conventional cars does confuse things. Was the driver speeding (early reports suggest Brown was not)? Did the truck driver exhibit due caution and concern in executing his left turn?
Nonetheless, “as on-board computers begin to make driving decisions, responsibility for collisions will move beyond human drivers to include automakers, software developers, and maintenance professionals,” says Kovacs.
“With the introduction of automation, we need to determine the vehicle’s share of responsibility.”
In the case of Brown’s Tesla, that responsibility will likely be shared between Brown himself and his Autopilot system, which, by Tesla’s own admission, failed to “notice the white side of the tractor trailer against a brightly lit sky.”
Whatever the determination, I suspect that the “mouse” clauses — such as Tesla’s “Always keep your hands on the wheel, be prepared to take over at any time” — that automakers currently use to abdicate responsibility for their (semi) self-driving technologies will come under increasing fire. And Kovacs, president and chief executive of the Property and Casualty Insurance Compensation Corporation, agrees.
“Automakers can’t have it both ways,” he says, “proclaiming the incredible abilities of their self-driving cars on the one hand and then disavowing any responsibility for mishaps their technology might create on the other.”
Indeed, long after Google has figured out how to safely navigate a blind driver all the way to the North Pole; even after we’ve developed sensors that can withstand the heat of a desert (or, worse yet, the salt and slush of a Canadian winter), the issue of exactly who is responsible for computer-controlled cars is going to blunt our supposedly autonomous future.