The Day

Self-driving systems could mimic the moral decisions of human drivers

- By Day Marketing

Automakers and technology companies working to develop fully self-driving vehicles are hopeful that the technology will significantly reduce the number of crashes on the roads. Many accidents, ranging from fender-benders to fatal wrecks, are caused by human error. Autonomous vehicles thus have the potential to eliminate crashes resulting from intoxication, distraction, or other dangerous driver behavior.

However, a number of concerns have been raised about the technology, ranging from drivers' reluctance to give up control of a vehicle to the possibility that automated taxis and trucks will eliminate thousands of jobs. One question involves how a self-driving vehicle would react in situations where a split-second moral decision is required.

Even if most or all of the vehicles on the road were automated, self-driving vehicles would still frequently encounter unexpected hazards, such as debris on the road or children and animals darting in front of the vehicle. A central concern is how an autonomous driving system would respond to these situations.

A recent study by researchers at the Institute of Cognitive Science at Osnabruck University in Germany suggests that algorithms could be developed to match a human driver's ethical decisions in difficult situations. This conclusion was based on a test analyzing driver reactions to a number of unavoidable collisions.

"To be able to define rules and guidelines, a two-step process is needed," said Gordon Pipa, one of the scientists involved in the study. "First, the moral decisions of humans in critical situations have to be analyzed and understood. In the second step, this behavior needs to be described statistica­lly, in order to derive rules which can then be used by machines."

A total of 105 participants (76 men and 29 women, with an average age of 31) took part in the study. Each participant wore immersive virtual reality equipment to simulate a drive through a suburban neighborhood in foggy conditions.

On nine occasions during the drive, two objects appeared in the road ahead, one in each lane. These ranged from inanimate obstacles to animate figures such as simulated men, women, children, and animals. Participants used arrow keys on a keyboard to choose which lane they wanted the vehicle to be in, thus deciding which object to save and which to run over.

The researchers also looked at the effect of time pressure. In the first trial, participants had four seconds to make a decision. In the second trial, they had only one second.

Empty lanes were periodically included as a control, and driver error was assumed if the participant ran over the sole object in the road instead of avoiding it. Errors occurred only 2.8 percent of the time in the four-second trial but rose to 12 percent in the one-second trial.

The study could not confirm whether further distinctions, such as the age of a person in the road, affected the decisions. In general, participants were more likely to avoid children than adults and avoided dogs more than all other animals. The researchers suggest that these behaviors could be mimicked by autonomous systems where "values of life" are included.

"Human moral behavior can be explained and predicted with impressive precision by comparing the values of life that are associated with each human, animal, and inanimate object," said Leon Suetfeld, lead author of the study. "This shows that human moral decisions can in principle be explained by rules, and these rules can be adopted by machines."

The researchers note that values of life would be unpopular if they discriminated based on factors such as age or gender, but they suggest that the algorithms could still be useful in situations where a collision is unavoidable. For example, an autonomous vehicle might give humans a higher value of life than animals, but the algorithm could also allow the vehicle to try to avoid striking an animal commonly kept as a pet if it determined that the maneuver would pose only minor risk to a human.
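
As a rough illustration of how such a rule might be expressed in software, here is a minimal sketch of a value-of-life comparison for an unavoidable two-lane collision. The numeric weights, category names, and risk threshold are hypothetical and are not the values derived by the researchers.

```python
# Hypothetical sketch of a "value of life" decision rule for an unavoidable
# two-lane collision. The weights below are illustrative only.
VALUE_OF_LIFE = {
    "child": 1.0,
    "adult": 0.9,
    "dog": 0.5,
    "other_animal": 0.3,
    "inanimate_object": 0.1,
    "empty_lane": 0.0,
}

def choose_lane(left_obstacle: str, right_obstacle: str) -> str:
    """Steer into the lane whose obstacle has the lower value of life,
    sparing whatever is valued more highly."""
    if VALUE_OF_LIFE[left_obstacle] <= VALUE_OF_LIFE[right_obstacle]:
        return "left"
    return "right"

def should_swerve_for_animal(animal: str, risk_to_human: float,
                             risk_threshold: float = 0.05) -> bool:
    """Swerve to avoid an animal only if the estimated risk the maneuver
    poses to humans (0..1) stays below a small threshold."""
    return VALUE_OF_LIFE.get(animal, 0.0) > 0 and risk_to_human < risk_threshold

# A child in the left lane and a dog in the right: the rule steers right.
print(choose_lane("child", "dog"))            # -> right
# Avoid a dog if doing so carries only a 1 percent estimated risk to a human.
print(should_swerve_for_animal("dog", 0.01))  # -> True
```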

Last year, a study by the Massachusetts Institute of Technology found inconsistent opinions on the ethical decisions self-driving vehicles should make. A series of surveys issued by the school showed that respondents were in favor of systems that would try to minimize casualties in an unexpected incident, even if it meant swerving off the road and risking harm to the occupants of the vehicle. However, respondents also said they would be less likely to ride in a vehicle programmed this way.
