Toronto Star

Driverless cars come with moral dilemmas

Public needs to start talking about the choices vehicles will make, researchers say

- KATE ALLEN SCIENCE & TECHNOLOGY REPORTER

Self-driving cars could save a million lives a year by eliminating the 90 per cent of car crashes caused by human error. But as autonomous vehicles proliferate on real-world roads, they will still inevitably face life-or-death dilemmas.

Should an autonomous vehicle’s algorithms be programmed to swerve and sacrifice its passengers if it means saving the lives of many pedestrians? Or should the car protect its occupants at all costs?

In a series of studies described in the journal Science, a trio of U.S. and French researchers tried to gauge the public’s response to this moral quandary — and discovered a typically human contradiction.

In six carefully designed online surveys, respondents voiced a strong moral preference for machines that would choose the greater good, sacrificing one or two passengers to save five or 10 pedestrians. But the respondents would not want to buy such a car themselves.

“You can kind of call this the tragedy of the algorithmic commons,” said Iyad Rahwan, a study co-author and a professor at MIT’s Media Lab.

“Even if you started off as one of the noble people who are willing to buy a self-sacrificin­g car, once you realize most people are buying self-protective ones, then you are really going to reconsider why you are putting yourself at risk to shoulder the burdens of the collective when no one else will.”

Even if these passenger-versus-pedestrian scenarios are rare — in fact, even if they never occur — these are the types of discussions the public needs to have, other researchers emphasized.

“We’re at the edge of an age where we’re programming behaviours into machines,” said Wendy Ju, executive director of interaction design at Stanford University’s Center for Design Research.

“One of the quandaries in this, and a moral dilemma in itself, is whether machines should necessarily be programmed the way everyone wants them to be. Because sometimes the crowds aren’t the most ethical decision-makers.”

The surveys found a consistent preference for self-sacrificing autonomous vehicles in general. For example, a car programmed to swerve and avoid hitting 10 or even two pedestrians while sacrificing its solo passenger received high moral approval. Even when participants were asked to imagine themselves and a family member as the car’s occupants, the approval of its morality dipped but remained above neutral.

Yet in these studies, the participants responded with a low likelihood of actually purchasing an autonomous vehicle that would sacrifice them and their family members for the greater good. They still thought such cars were programmed to do the right thing — they just didn’t want one for themselves. Nor did they want the government to regulate autonomous vehicles to be self-sacrificing.

“That’s a big challenge to the widescale adoption of autonomous vehicles, especially when there’s already a basic fear about entrusting a computer program to zip us around at 60 miles an hour or more,” said Azim Shariff, a study co-author and director of the Culture and Morality Lab at the University of California Irvine.

Ju questioned whether these attitudes would truly hinder the adoption of autonomous vehicles or skew market forces: car manufacturers would not likely advertise how the vehicles would behave in these very rare scenarios. An accompanying perspective in the journal points out, however, that the greater ethical dilemma may be whether manufacturers even make the weighting of their algorithms transparent.

Moreover, many engineers believe autonomous vehicles will function more like a shared transit service. The public may accept a different risk profile for transportation that seems more like infrastructure than an individually owned commodity.

Either way, we have to ask the questions posed in the Science study, says Bertram Malle, co-director of the Humanity Centered Robotics Initiative at Brown University.

“These machines have to make decisions, and if we don’t think about it in advance, the machines will still do something,” Malle says. Humans generally dislike randomness, he points out. It is unlikely society will be comfortable putting these vehicles on the roads without any idea of how they will react in such scenarios, even if those scenarios turn out to be hypothetical.

Malle points out that autonomous vehicles are just the tip of the robot-human interaction iceberg. While the international community has extensively discussed the ethics of autonomous weapons, artificial intelligence in health care and robots that care for the sick and elderly are proliferating — yet there has been little discussion about the moral decision-making of robots in homes and hospitals. Interestingly, his own research has found that people have stronger moral expectations for robots than for humans, especially if the robots appear more machinelike than humanoid.

NOAH BERGER/REUTERS: In an accident scenario, should a self-driving car swerve to protect pedestrians while sacrificing its passengers?
