Self-driving systems could mimic the moral decisions of human drivers
Automakers and technology companies working to develop fully self-driving vehicles are hopeful that they will significantly reduce the number of crashes on the roads. Many accidents, ranging from fender-benders to fatal wrecks, are caused by human error. Autonomous vehicles thus have the potential to eliminate crashes resulting from intoxication, distraction, or other dangerous driver behavior.
However, there have still been a number of concerns raised about the technology, ranging from reluctance to give up control of a vehicle to the possibility that automated taxis and trucks will eliminate thousands of jobs. One question involves how a self-driving vehicle would react in situations where a split-second moral decision is required.
Even if most or all of the vehicles on the road were automated, self-driving vehicles would still encounter unexpected hazards, such as debris on the road or children and animals darting in front of the vehicle. A key concern is how an autonomous driving system would respond in these situations.
A recent study by researchers at the Institute of Cognitive Science at Osnabrück University in Germany suggests that algorithms could be developed to match a human driver's ethical decisions in difficult situations. This conclusion was based on a test analyzing driver reactions to a number of unavoidable collisions.
"To be able to define rules and guidelines, a two-step process is needed," said Gordon Pipa, one of the scientists involved in the study. "First, the moral decisions of humans in critical situations have to be analyzed and understood. In the second step, this behavior needs to be described statistically, in order to derive rules which can then be used by machines."
A total of 105 participants—76 men and 29 women, with an average age of 31—took part in the study. Each participant wore immersive virtual reality equipment to simulate a drive through a suburban neighborhood in foggy conditions.
On nine occasions during the drive, two objects appeared in the road ahead, one in each lane. These included a range of inanimate objects as well as animate ones such as simulated men, women, children, and animals. Participants used arrow keys on a keyboard to choose which lane they wanted the vehicle to be in, thereby deciding which object to spare and which to run over.
The researchers also looked at the effect of time pressure. In the first trial, participants had four seconds to make a decision. In the second trial, they had only one second.
Empty lanes were periodically included as a control, and driver error was assumed if the participant ran over the sole object in the road instead of avoiding it. Errors occurred only 2.8 percent of the time in the four-second trial, but rose to 12 percent in the one-second trial.
The study could not confirm whether finer distinctions, such as the age of a person in the road, affected the decisions. In general, participants were more likely to avoid children than adults, and more likely to avoid dogs than any other animal. The researchers suggest that these behaviors could be mimicked by autonomous systems in which "values of life" are encoded.
"Human moral behavior can be explained and predicted with impressive precision by comparing the values of life that are associated with each human, animal, and inanimate object," said Leon Suetfeld, lead author of the study. "This shows that human moral decisions can in principle be explained by rules, and these rules can be adopted by machines."
The researchers note that while values of life would be unpopular if they discriminated based on factors such as age or gender, they suggest that the algorithms could still be useful in situations where a collision is unavoidable. For example, an autonomous vehicle might give humans a higher value of life than animals, but the algorithm could also allow the vehicle to try to avoid striking an animal commonly kept as a pet if it determined that the maneuver would cause only minor risk to a human.
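The decision rule the researchers describe could be sketched roughly as follows. The object categories, numeric weights, and function names here are illustrative assumptions for the sake of the example; the study does not publish specific values of life.

```python
# Illustrative sketch of a "value of life" decision rule for an
# unavoidable two-lane collision. All numeric weights below are
# hypothetical assumptions, not figures from the study.

# Higher value means the system should try harder to avoid striking it.
VALUE_OF_LIFE = {
    "adult": 10.0,
    "child": 12.0,      # children weighted above adults, per the findings
    "dog": 4.0,         # dogs weighted above other animals
    "other_animal": 2.0,
    "inanimate": 0.5,
    "empty": 0.0,
}

def expected_harm(obstacle: str) -> float:
    """Harm attributed to striking whatever occupies a lane."""
    return VALUE_OF_LIFE[obstacle]

def choose_lane(left: str, right: str) -> str:
    """Steer into the lane whose collision causes the lesser harm."""
    return "left" if expected_harm(left) < expected_harm(right) else "right"

def swerve_for_animal(animal: str, risk_to_human: float) -> bool:
    """Swerve to spare an animal only if the maneuver's added risk to a
    human (a probability from 0 to 1) is outweighed by the animal's value."""
    return expected_harm(animal) > risk_to_human * VALUE_OF_LIFE["adult"]
```

On this sketch, a vehicle facing an adult in one lane and a dog in the other steers toward the dog, and it swerves for a pet only when the estimated risk to humans is small, mirroring the trade-off described above.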
Last year, a study by the Massachusetts Institute of Technology found inconsistent opinions on the ethical decisions self-driving vehicles should make. A series of surveys issued by the school found that respondents were in favor of systems that would try to minimize casualties in an unexpected incident, even if it meant swerving off the road and risking harm to the occupants of the vehicle. However, respondents also said they would be less likely to ride in a vehicle programmed this way.