The driverless cars that are too good for the rest of us

They get into more crashes than human-driven vehicles because they are not programmed to bend the rules of the road now and again

The Daily Telegraph - Business Comment - James Titcomb

Minor car accidents are not typically deemed newsworthy enough to warrant international coverage, but that’s what happened last week when a small shuttle bus bumped into a delivery lorry in Las Vegas. The key difference this time was that the bus had no driver. In a trial of self-driving vehicle technology in the city, the bus was fitted with a series of sensors and processors that allowed it to navigate a small loop of roads, ferrying visitors around without anyone at the wheel. The crash was also particularly noticeable because it occurred just an hour into the first day of the poor vehicle’s trial, a debut even most learner drivers would be embarrassed by.

Except for one thing: the crash was not the driverless car’s fault. The delivery lorry, and its human driver, reversed into the shuttle, having failed to see it. There were no injuries, and in fact, the driverless technology worked as required: the shuttle stopped as it sensed the lorry reversing towards it. It just couldn’t do anything about the other driver’s carelessness.

The incident is only the latest in a string of driverless car accidents that have one clear thread running through them: it was the other guy’s fault. Earlier this year, a driverless car being tested by Uber in Arizona was flipped on its side when driving through a yellow light, after a human-driven car attempting to cross the junction crashed into it. The handful of incidents that Google’s autonomous vehicles have been involved in have almost all been caused by other cars.

These incidents might appear to strengthen the argument for driverless vehicles: robots make better drivers than their fleshy counterparts. One could argue that we need more driverless cars on the road, and should, in fact, hasten their development.

However, it is not nearly that simple. The statistics show that driverless vehicles actually get into far more scrapes than human-driven ones, even if they are not technically at fault. A 2015 study from the University of Michigan’s Transportation Research Institute found that self-driving vehicles get into 9.1 crashes every million miles they drive, against 4.1 crashes for cars driven by humans.

This appears to be a contradiction: driverless cars get into more collisions, but the collisions are almost never their fault. How can they be safer, and yet be involved in more crashes?

The most plausible answer is that driverless cars actually turn humans into worse drivers. While they are programmed never to speed, to give way to others as much as possible and generally to obey every rule of the road – in other words, to be perfect drivers – we are not.

And anyone who has ever seen a driverless car in action can attest to this: they turn in perfect circles, never cutting corners, and would certainly never jump a red light. If a person walks out in front of one, it will stop instantly, with superhuman reflexes.

But this creates problems for the rest of us. We have grown so used to interacting with other human drivers, anticipating their flaws and idiosyncrasies, that perfect robots have us out of sorts. Passengers on the driverless shuttle in Vegas did not remark on the lorry’s carelessness, but on the fact that their robot car had failed to anticipate it.

In the case of the Uber crash earlier this year, the driver at fault had illegally cut across two lanes of traffic beforehand, but the humans in both lanes had seen this and held back; the driverless car had not.

The tech industry has a phrase for these kinds of problems: “You’re holding it wrong”, coined from Steve Jobs’ now-notorious excuse when a customer complained that his iPhone 4 wouldn’t pick up calls. It has since become a catch-all for blaming humans for technological faults. Driverless cars are typical of the “you’re holding it wrong” problem: the technology might work flawlessly, but the humans don’t.

When roads have no more human drivers on them, we are likely to be much safer – human error is involved in 90pc of accidents – but what about the period until then?

The arrival of driverless cars won’t be like flicking a switch. There will be a transition period, most likely lasting decades, between the first fully autonomous vehicles and the last human drivers on the road.

As well as potentially more accidents during this period, there may also be widespread frustration at cars obeying speed limits, or being too polite – in 2015, a Google driverless car was pulled over by police for driving too slowly. It’s possible that public opinion towards driverless vehicles, already potentially shaky, could worsen because of their exceptional fidelity to the rules.

Tech companies are taking steps to combat this. The cars tested by Waymo, the unit spun out of Google last year, now drive more aggressively, cutting corners and inching forward at junctions.

It is a good example of how technologists have to understand the imperfect world their technology inhabits, and adapt to it. For driverless cars to become a reality, they must deal with their biggest problem: the flaws of human beings.

‘We’re used to dealing with other drivers’ flaws, so perfect robots have us out of sorts’
