HOW SAFE SHOULD WE EXPECT SELF-DRIVING CARS TO BE?

Industry looking hard at technological reality

USA TODAY Weekend Extra - NEWS - Bob O'Donnell

FOSTER CITY, Calif. – With the recent tragic accident involving a self-driving Uber vehicle that struck and killed a pedestrian outside Phoenix, as well as unanswered questions about a recent Tesla Model X crash in Silicon Valley, there's soul searching in the automotive tech world.

More people are also starting to think about setting realistic expectations for self-driving cars, the essential question being whether they can be expected to completely avoid fatalities or whether it's good enough that they reduce them.

Obviously, there's no simple answer. The ethical implications are far-reaching. What makes this question particularly troublesome is that it ties together computing technology with life-and-death consequences. While there have been plenty of hypothetical discussions in the past, these incidents have made the possibility of "death by machine" a disturbing potential reality.

Full details on both incidents are still coming out, so there shouldn't be any rush to judgment as to the ultimate cause of the accidents. However, the technology built into autonomous cars such as the ones involved generates significant amounts of data that are already making the process of determining the cause much faster and more definitive than traditional investigative processes. This is why, for example, Tesla was able to report that it knew Autopilot, its partial self-driving feature, was engaged on the Model X that crashed last month in Mountain View, Calif., killing the driver.

From a technical perspective, many of the questions about safety have to do with the sensors that collect all that data. While specific implementations vary by vendor, most self-driving cars have a collection of traditional cameras, radar and LiDAR (a type of sensor that bounces laser light off nearby objects) built into them.

In theory, all of these components work together to provide the car with all the information it needs to make real-time driving decisions. Importantly, radar can detect objects that are partially obscured from direct view, and LiDAR can build a precise 3-D picture of the surroundings even in total darkness, providing views and perspectives that human drivers cannot match.
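
To make that idea concrete, here is a minimal sketch, in Python, of how detections from multiple sensors might be fused into a single braking decision. Every name, number and threshold in it is an illustrative assumption, not any vendor's actual system.

from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str               # "camera", "radar" or "lidar" (hypothetical labels)
    distance_m: float         # range to the object, in meters
    closing_speed_mps: float  # rate at which car and object are converging

def time_to_collision(det: Detection) -> float:
    """Seconds until impact if nothing changes; infinite if not closing."""
    if det.closing_speed_mps <= 0:
        return float("inf")
    return det.distance_m / det.closing_speed_mps

def should_emergency_brake(detections: list[Detection],
                           ttc_threshold_s: float = 2.0,
                           min_sensors: int = 2) -> bool:
    """Brake when at least min_sensors independent sensors agree that an
    object will be reached within the time-to-collision threshold.
    Requiring agreement guards against any single noisy sensor."""
    agreeing = {d.sensor for d in detections
                if time_to_collision(d) < ttc_threshold_s}
    return len(agreeing) >= min_sensors

# Radar and LiDAR both report an object about 18 m ahead while the camera
# has no clear view (say, in darkness): the fused decision is to brake.
readings = [Detection("radar", 18.0, 12.0), Detection("lidar", 18.5, 12.0)]
print(should_emergency_brake(readings))  # True

Requiring two independent sensors to agree before slamming on the brakes is one common way engineers trade false alarms against missed detections; tune that balance too far in either direction and the car becomes either unnervingly jumpy or dangerously complacent.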

This is relevant for the Uber accident in Tempe, Ariz., because the technology should have been able to "see" that there was a pedestrian on the side of the road, even if she was hidden from human view by darkness, cars or other obstructions, and slam on the brakes. These vehicles are supposed to be able to see things that people can't and react in ways that are faster and better than a human ever could.

While many in the tech industry have focused on convenience and new business models, the fundamental benefit most carmakers talk about — and most consumers want — is safety. In fact, Tesla CEO Elon Musk has even said that critics of autonomous cars are "killing people" by not enabling their safety benefits.

Thankfully, key automotive tech suppliers recognize this and have been focused on rigorous functional safety standards, notably ISO 26262. After several years of development, they are now able to sell or license parts that meet the latest standards. While those details may never show up on your car's spec sheets, they provide an important safety net, covering things such as redundant systems and the ability to operate in challenging weather, that is essential for building safer, more reliable cars.

As a result of the Uber incident, there have also been changes in autonomous vehicle testing plans by tech companies, as well as in regulatory permissions from governmental agencies, including the state of Arizona. Though tragic, the accident has triggered a level of discussion, on a technological level as well as a societal one, that frankly should have occurred before it happened.

Realistically, it may be difficult to prevent deaths completely even with autonomous vehicles, particularly because both human-driven and self-driving cars will coexist for decades to come. Making the testing process safer, however, will likely require different approaches. One particularly interesting approach is to use simulated, virtual driving environments, similar to the new virtual reality-based Nvidia Drive Constellation system the company unveiled this week.

While simulated systems can't completely replace real-world tests, they can offer critical benefits and reduce the potential for accidents involving development vehicles. They enable significantly more test miles to be driven and more varied scenarios to be tested than is possible with real-world driving. This is important because the safety of autonomous cars is highly dependent on the systems inside them being able to recognize situations they have "seen" before and respond appropriately. The more situations they experience, the safer they will be.
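
To see why simulated miles matter, consider the toy sketch below, again in Python and purely illustrative; it bears no relation to Nvidia's actual system. It generates a hundred thousand randomized pedestrian-crossing scenarios and counts how often a simple braking model stops in time. In a real development pipeline, every scenario that fails would become a test case the system must pass before returning to public roads.

import random

def stops_in_time(initial_speed_mps: float,
                  pedestrian_distance_m: float,
                  reaction_time_s: float = 0.1,
                  decel_mps2: float = 8.0) -> bool:
    """Return True if the car halts before reaching the pedestrian."""
    # Distance covered during the system's reaction delay...
    travel = initial_speed_mps * reaction_time_s
    # ...plus braking distance v^2 / (2a) under constant deceleration.
    travel += initial_speed_mps ** 2 / (2 * decel_mps2)
    return travel < pedestrian_distance_m

random.seed(42)  # reproducible runs, so every failure can be replayed
trials = 100_000
passes = sum(
    stops_in_time(
        initial_speed_mps=random.uniform(5, 30),      # roughly 18-108 km/h
        pedestrian_distance_m=random.uniform(5, 60),  # where the pedestrian steps out
    )
    for _ in range(trials)
)
print(f"stopped in time in {passes:,} of {trials:,} simulated scenarios")

Even this crude model runs its hundred thousand scenarios in a fraction of a second on a laptop; driving that many encounters on real roads would take lifetimes.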

Challenges for autonomous cars remain, and realistic time frames for getting them onto the road will likely lengthen as a result of these recent accidents. Nevertheless, they still represent an important step forward in improving the overall safety of everyone on the road.

MARK HENLE/ARIZONA REPUBLIC

A self-driving Uber cruises in Tempe, Ariz., last year. One of the company's vehicles hit and killed a woman last month.
