The New Zealand Herald

Ethical dilemma of who survives self-driving car accident

Imagine this scenario — the brakes fail on a self-driving car as it hurtles toward a busy crossing.

A homeless person and a criminal are crossing in front of the car. Two cats are in the opposing lane.

Should the car swerve to mow down the cats or hit two people?

It’s a relatively straightforward ethical dilemma, as moral quandaries go. And people overwhelmingly prefer to save human lives over animals, according to a new ethics study that asked people how a self-driving car should respond when faced with a variety of extreme trade-offs — dilemmas to which more than two million people responded.

But what if the choice is between two elderly people and a pregnant woman? An athletic person or someone who is obese?

The study identified a few preferences that were strongest. People opt to save people over pets, to spare the many over the few, and to save children and pregnant women over older people. But it also found other preferences for sparing women over men, athletes over obese people, and higher-status people, such as executives, instead of homeless people or criminals. There were also cultural differences in the degree to which people would prefer to save younger people over the elderly, for example in a cluster of mostly Asian countries.

“We don’t suggest that [policymakers] should cater to the public’s preferences. They just need to be aware of it, to expect a possible reaction when something happens. If, in an accident, a kid does not get special treatment, there might be some public reaction,” said Edmond Awad, a computer scientist at the Massachusetts Institute of Technology Media Lab who led the work.

The thought experiments posed by the researchers’ Moral Machine website went viral, with the pictorial quiz taken by several million people in 233 countries and territories.

Outside researchers said the results were interesting but cautioned that they could be over-interpreted. In a randomised survey, researchers try to ensure the sample is unbiased and representative of the overall population; in this case, the voluntary study drew a population that was predominantly younger men. The scenarios are also distilled, extreme and far more black and white than those in the real world.

“The big worry I have is that people reading this are going to think this study is telling us how to implement a decision process for a self-driving car,” said Benjamin Kuipers, a computer scientist at the University of Michigan, who was not involved in the work.

Kuipers added that these thought experiments may frame some of the decisions car makers and programmers make about autonomous vehicle design in a misleading way. There’s a moral choice, he argued, that precedes the conundrum of whether to crash into a barrier and kill three passengers or to run over a pregnant woman pushing a stroller.

“Building these cars, the process is not really about saying, ‘If I’m faced with this dilemma, who am I going to kill?’ It’s saying, ‘If we can imagine a situation where this dilemma could occur, what prior decision should I have made to avoid this?’” he said.

The complexity of the real world is captured by the example of a criminal versus a dog. While many respondents said they would save the canine over its human counterpart, this overlooks the nuanced reasons a person might be driven to a life of crime.

Nicholas Evans, a philosopher at the University of Massachusetts, pointed out that while the researchers described their three strongest principles as universal, the cut-off between those and the weaker preferences that weren’t deemed universal was arbitrary. They categorised the preference to spare young people over elderly people, for example, as a global moral preference, but not the preference to spare those following walk signals over jaywalkers, or to save people of higher social status.

Evans is working on a project that he said has been influenced by the MIT team’s approach. He plans to use more nuanced crash scenarios, where real-world transportation data can provide a probability of surviving, say, a T-bone highway crash on the passenger side, to assess the safety implications of self-driving cars.

“We want to create a mathematical model for some of these moral dilemmas and utilise the best moral theories that philosophy has to offer, to show what the result of choosing an autonomous vehicle to behave in a certain way is,” Evans said.

Iyad Rahwan, a computer scientist at MIT who oversaw the work, said that a public poll shouldn’t be the foundation of artificial intelligence ethics. But he said that regulating AI will be different from regulating traditional products, because the machines will have autonomy and the ability to adapt, making it more important to understand how people perceive AI and what they expect of the technology.

“We should take public opinion with a grain of salt,” Rahwan said. “I think it’s informative.”

Photo / 123RF: Who should live and who should die in a self-driving car accident?
