When driverless cars hit the market, blame for accidents may shift to industry

At least one automaker anticipates liability for operating failures

The Washington Post Sunday - POLITICS & THE NATION - By Ashley Halsey III, ashley.halsey@washpost.com

If anything about driverless cars can be considered an old riddle, it is this one: The car is driving itself down a residential street when a woman pushing a baby stroller suddenly enters a crosswalk. Unable to stop, should the car’s computer opt to hit mother and child, or veer off to strike a tree, almost certainly killing its passengers?

That macabre scenario has been fodder for ethicists almost since the prospect that cars might drive themselves first appeared on the horizon. It also, however, poses a second riddle: Regardless of the choice made by the car’s computer, who pays for the damages?

The car owner? The company that built it? The software developer?

Those questions are being debated nearly everywhere that lawyers and insurance brokers meet these days. While state governments and the courts ultimately will decide them, many have been addressed in a new study by one of the preeminent legal authorities on autonomous vehicles.

Bryant Walker Smith, a University of South Carolina law professor, expands on the belief that blame for a crash will shift from the at-fault driver to the automotive industry: the conglomerate of manufacturers and software developers who design and update car computers.

“To prove that an automated driving system performed unreasonably, an injured plaintiff would likely need to show either that a human driver would have done better or that another, actual or theoretical, automated driving system would have done better,” Smith said.

Volvo, one of many automakers eager to market an autonomous car, acknowledged that it expects liability will shift from the driver to the manufacturer.

“It is really not that strange,” Anders Karrberg, vice president of government affairs at Volvo Car Corp., told a House subcommittee earlier this month. “Carmakers should take liability for any system in the car. So we have declared that if there is a malfunction to the [autonomous driving] system when operating autonomously, we would take the product liability.”

Public officials and autonomous-car advocates are fond of pointing out that 94 percent of crashes are attributed to human error, a statistic that implies that removing the human from behind the wheel might eliminate most crashes. Not so. While computer-driven cars are expected to reduce crashes dramatically, just how much is speculation, and nobody in the field thinks collisions will become a thing of the past. There are too many vagaries for any computer — or human driver — to deal with on the roads.

There also are imponderables: If freedom from driving themselves means people are more willing to travel by car, an overall increase in miles traveled suggests there will be more crashes. However, driverless cars would eliminate two of the leading causes of traffic fatalities: drunken driving and speeding.

“Those of us who have been in the software world know that software has bugs, so there’s no perfect solution,” said Ash Hassib, senior vice president at Auto and Home Insurance, a firm that provides statistical data to the insurance industry. “There is so much brainpower that goes on when driving a car, so it will take a long time to teach a machine all the possible scenarios that could take place. Eighty percent of the scenarios will be quick, but trying to get to the last 20 percent is going to take a very long time.”

In the most definitive public legal research to date on autonomous cars, Smith’s 77-page paper says:

There will be a shift from driver liability to product liability, making the automotive industry the primary liability stakeholder.

When manufacturers imply their automated systems are at least as safe as a human driver, they may face a misrepresentation suit when a crash contradicts that expectation.

The argument that an automated system performed unreasonably will be central to personal-injury claims.

A key question in litigation will be whether a human driver or a comparable automated system would have performed better than the automated system in question.

Another key question: Could a reasonable change in a vehicle’s automated system have prevented the crash?

There could be a higher standard for automated vehicles. Smith cites a hypothetical case in which two cars collide at an intersection. One of the cars ran a stop sign, but it might be argued that systems in the other car should have recognized that the first car was going so fast that it would not stop at the sign. So, should that car share blame for the crash?

In the shift from driver liability to product liability, plaintiffs pursuing significant injury claims would recover less often, but those who prevail would receive higher damages. That’s largely because an unprecedented level of data on the cause of the crash will be stored in the vehicle’s computers, virtually replacing the post-crash investigation by a police officer who didn’t witness the incident.

“The standard for reasonable safety is always increasing, and automated driving is no exception,” Smith said. “The technologies that will amaze us in the next few years could seem laughably — or dangerously — anachronistic a decade later.”

To cover the cost of potential liability claims, automakers will have to factor that risk into a system’s initial sticker price — an estimate that later may prove dramatically inaccurate — or rely on future sales to cover past liability claims.

“As I emphasize in my paper, it’s incredibly difficult to accurately and precisely predict the actual liability costs attributable to a single automated vehicle over its lifetime,” Smith said.

The U.S. Transportation Department, in guidance issued to the industry last year, worried that sorting out the liability issue might delay the introduction of autonomous cars. Smith disagrees. He points out that federal authorities had the same worry in 1993, when automakers were considering the introduction of advanced systems to help drivers behind the wheel.

“In the intervening two decades, however, traditional automotive manufacturers have widely released many of these systems,” Smith said. “These companies have done so without receiving special exemptions from the generally applicable product liability regimes of each state.”

What’s more, he says, the momentum of an automaker with an autonomous car ready for market could bring about any necessary changes in liability law.

“High-speed electronic trading offers an interesting analogy for automated driving. Both involve automated systems operating in complex and rapidly changing situations in which real-time human supervision may be ineffective,” Smith said. “Only driving involves actual loss of life — which should in no way be minimized — but in both contexts the financial consequences for a company that gets something wrong could conceivably be in the billions of dollars. But whereas automated driving will be governed largely by tort law, high-speed trading is governed largely by contract law, in which the parties set the terms.”
