The ethics of self-driving cars


It’s a fairly simple choice on the surface: Does a driver swerve to avoid a dog crossing the road?

If that’s an easy choice, make it a little harder: Does a driver swerve to miss a pedestrian on the road if the driver knows his or her own life will be put in danger?

Millions of people around the world make these choices and others like them every day. Some die as a result.

When autonomous vehicles get on the road in large numbers in the next decade, machines will be making these decisions, giving rise to a growing debate about what ethical and moral choices should be programmed into self-driving cars.

“It’s a huge issue,” Bill Ford, chairman of Ford Motor Co., told a small group of reporters over dinner at the North American International Auto Show in Detroit. The discussion in the industry is all about hardware and software for autonomous vehicles and how soon they will be widely available, but “nobody’s talking about ethics,” Mr. Ford said.

“If this technology is really going to serve society, then these kinds of issues have to be resolved, and resolved relatively soon,” he said.

Auto makers and suppliers are spending billions of dollars developing technology to make cars autonomous – in the interests of making roads safer and reducing or eliminating the estimated 737 deaths every hour, every day of the year, from traffic accidents around the world.

There are now systems in place that will pull vehicles back into their lanes when they drift out of them; brake automatically if necessary; and warn drivers of cars in their blind spots.

But cars that drive themselves will have to make choices that are now made by humans.

The early data about the choices humans want those vehicles to make are not encouraging.

“People think cars should minimize total harm, but they don’t want to buy cars that are going to diminish their own safety,” said Iyad Rahwan, an associate professor at MIT who specializes in collective intelligence and the social aspects of artificial intelligence.

That’s the indication from a series of surveys he and colleagues from France and Oregon conducted.

“Figuring out how to build ethical autonomous machines is one of the thorniest challenges in artificial intelligence today,” they wrote in a paper published in Science magazine last June.

Mr. Ford is concerned about who will be responsible for setting standards and what those standards will be.

“Ultimately, government’s going to have to play a role, but then you say, well, just the U.S. government? How about the Chinese government?” he said.

“I think we’re going to have to have a global standard because we can’t have different sets of ethics.”

The U.S. government has set some high-level standards and set up a special committee of 25 people this week to advise the Department of Transportation on automation and a number of transportation systems. Advisers include General Motors Co. chairman Mary Barra, Los Angeles Mayor Eric Garcetti and Chesley ‘Sully’ Sullenberger, the former U.S. Airways pilot who landed a plane in the Hudson River.

Prof. Rahwan and MIT have set up an interactive website called Moral Machine that lays out 13 scenarios for potential crashes involving self-driving vehicles, passengers, pedestrians and animals and allows users to choose one of two outcomes in each of the scenarios.

The website has gone viral several times, Prof. Rahwan said, and researchers have collected 22 million decisions from 160 countries that they hope will help regulators decide how to program cars.

Information is still being collected, he said, but there are quantifiable differences in attitudes between North Americans and people from other regions.

The simulations include choosing whether an autonomous vehicle that has lost its brake functions should kill five pedestrians or five people in the vehicle.

It’s morbid and uncomfortable, Prof. Rahwan acknowledged, but “we want people to feel the discomfort of those who are trying to regulate cars and trying to make these kinds of judgment calls on design choices that have societal implications.”

He is worried about a backlash if the benefits wrought by autonomous vehicles are perceived to be unfair.

It will likely be too difficult for regulators to specify how these vehicles should react in every situation or even in many situations, he said.

What may be reasonable, he said, is for whoever sets the standards to insist that public safety will come first, and that vehicle manufacturers will be scrutinized to make sure their cars don’t cause more deaths or injuries than is usual or to be expected.


How self-driving cars weigh the safety of pedestrians versus passengers has been an ongoing ethical debate in the industry.
