The Big Picture

What autonomous cars can teach us about driving

By Angus MacKenzie, Motor Trend

The moral maze

You’re rolling down the freeway in heavy, fast-moving traffic, following a truck. On your right is a new Volvo XC90 with a “Baby On Board” sticker in the window. On your left, a suburban outlaw in a $500 leather jacket blat-blat-blatting along on his Harley-Davidson. Suddenly, a large, heavy object falls off the back of the truck, right in your path. There’s no chance of stopping. What do you do? Stay in your lane and brace for the head-on hit? Swerve left and take out the motorcyclist? Or dive to the right and ram the Volvo?

Chances are you’ll simply react, stomping the brake pedal and swinging the wheel one way or the other. You’ll only think of the consequences—the dead motorcyclist or the badly injured baby—when the shock wears off and your hands stop shaking and you’re lying in bed in the dark wondering if you’ll ever sleep again.

But what if you did have the ability to analyze the situation in real time as it unfolded in front of you and logically determine a course of action? What would you do? Prioritize your own safety by aiming for the motorcyclist, minimize the danger to others at the cost of your own life by not swerving, or take the middle ground and centerpunch the Volvo, hoping its high crash safety rating gives everyone a chance of survival?

Forget bustling city streets and complex freeway interchanges: Navigating a moral maze like this is the toughest task facing autonomous vehicles.

Patrick Lin is director of the Ethics + Emerging Sciences Group at California Polytechnic State University, San Luis Obispo, and he constructs grisly thought experiments like the one above to highlight the fundamental issue facing the deployment of autonomous vehicles on our roads. Autonomous vehicles have the potential to dramatically reduce the incidence of death and injury on our roads, ease congestion, and reduce emissions. The problem is not the capability of autonomous vehicle technology. It’s deciding how that capability should be used.

In the crash scenario outlined above, any consequent death would be regarded as the result of an instinctual panicked move on the part of the driver, with no forethought or malice. But what if that death were the result of behaviors programmed into an autonomous vehicle by an automaker’s software coder in San Jose or Shanghai? “That looks more like premeditated homicide,” Lin says bluntly. Why? Because optimizing an autonomous vehicle to ensure it minimizes harm to its occupants in such a situation—something we’d all want the one we’re riding in to do—involves targeting what it should hit.

A crash is a catastrophic event, but at its core is a simple calculation: Force equals mass times acceleration. Designing a vehicle that helps its occupants survive a crash is therefore fundamentally an exercise in reducing the rate at which they decelerate (deceleration being simply acceleration with a negative sign) during the crash event, usually by engineering crumple zones around a strong central passenger cell.
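To put rough numbers on that (the figures here are illustrative, not from any specific crash test): a 1,500-kilogram car traveling at 30 meters per second, about 67 mph, might be stopped in 0.1 second by a rigid barrier but in 0.5 second by collapsing crumple zones. The first case means an average deceleration of 30/0.1 = 300 m/s² and a force of 1,500 × 300 = 450,000 newtons; the second, 30/0.5 = 60 m/s² and 90,000 newtons. Stretching the crash event out by a few tenths of a second cuts the force on the occupants fivefold.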

Autonomous technology adds an active element to that calculus: When a collision is unavoidable, the system can direct the vehicle to hit the smallest and lightest of the available objects—the motorcycle rather than the Volvo, for example—to improve the probability its occupants will survive. That outcome is the direct result of an algorithm, not instinct. So who bears responsibility for the death of the motorcyclist? The programmer who wrote the algorithm? The automaker that determined such an algorithm should be part of its autonomous vehicle’s specification? The person whose journey put the motorcyclist at risk?
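To make the abstraction concrete, here is a minimal sketch in Python of what such target-selection logic might look like. It is entirely hypothetical: no automaker has published its collision-handling code, and every name, number, and weighting below is invented for illustration.

# Hypothetical sketch of the collision-target choice described above.
# All names, numbers, and weightings are invented for illustration;
# no production autonomous-vehicle system is known to work this way.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    mass_kg: float      # estimated mass of what would be struck
    exposed: int        # people at risk in or on the struck object
    protection: float   # 0.0 (unprotected rider) to 1.0 (crash-rated cabin)

def harm_to_self(opt: Option, speed: float) -> float:
    """Proxy: the heavier the struck object, the harsher the ego
    vehicle's deceleration, so the greater the risk to its occupants."""
    return 0.5 * opt.mass_kg * speed ** 2

def harm_to_others(opt: Option, speed: float) -> float:
    """Proxy: impact energy scaled by how many people are exposed
    and how poorly they are protected."""
    return harm_to_self(opt, speed) * opt.exposed * (1.0 - opt.protection)

def choose_target(options: list[Option], speed: float, selfishness: float) -> Option:
    """'selfishness' is the ethically loaded knob: 1.0 protects the car's
    occupants at any cost; 0.0 sacrifices them to protect everyone else."""
    def total(o: Option) -> float:
        return (selfishness * harm_to_self(o, speed)
                + (1.0 - selfishness) * harm_to_others(o, speed))
    return min(options, key=total)

options = [
    Option("debris, head-on", mass_kg=400, exposed=0, protection=0.0),
    Option("motorcycle", mass_kg=300, exposed=1, protection=0.1),
    Option("Volvo XC90", mass_kg=2100, exposed=2, protection=0.9),
]
print(choose_target(options, speed=30.0, selfishness=1.0).name)  # motorcycle
print(choose_target(options, speed=30.0, selfishness=0.0).name)  # debris, head-on

With these invented numbers, pure self-preservation selects the motorcyclist and pure altruism keeps the car in its lane for the head-on hit: the two poles of the dilemma, reduced to a single parameter someone in San Jose or Shanghai would have to choose.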

Good questions. No easy answers. Ironically, the debate over ethics and autonomous vehicles highlights an uncomfortable truth that’s too often ignored when we mere humans climb behind the wheel: Cars can kill. It’s not just robots that should always drive like someone else’s life depended on it.
