Orlando Sentinel

Machine-made morality also reveals errors in judgment

By Cade Metz

Researchers at an artificial intelligence lab in Seattle called the Allen Institute for AI unveiled new technology last month that was designed to make moral judgments. They called it Delphi, after the religious oracle consulted by the ancient Greeks. Anyone could visit the Delphi website and ask for an ethical decree.

Joseph Austerweil, a psychologist at the University of Wisconsin-Madison, tested the technology using a few simple scenarios. When he asked if he should kill one person to save another, Delphi said he shouldn’t. When he asked if it was right to kill one person to save 100 others, it said he should. Then he asked if he should kill one person to save 101 others. This time, Delphi said he should not.

Morality, it seems, is as knotty for a machine as it is for humans.

Delphi, which has received more than 3 million visits over the past few weeks, is an effort to address what some see as a major problem in modern AI systems: They can be as flawed as the people who create them.

Facial recognition systems and digital assistants show bias against women and people of color. Social networks like Facebook and Twitter fail to control hate speech, despite wide deployment of artificial intelligence. Algorithms used by courts, parole offices and police departments make parole and sentencing recommendations that can seem arbitrary.

A growing number of computer scientists and ethicists are working to address those issues. And the creators of Delphi hope to build an ethical framework that could be installed in any online service, robot or vehicle.

“It’s a first step toward making AI systems more ethically informed, socially aware and culturally inclusive,” said Yejin Choi, the Allen Institute researcher and University of Washington computer science professor who led the project.

While some technologists applauded Choi and her team for exploring an important and thorny area of technological research, others argued that the very idea of a moral machine is nonsense.

“This is not something that technology does very well,” said Ryan Cotterell, an AI researcher at ETH Zürich, a university in Switzerland, who stumbled onto Delphi in its first days online.

Delphi is what artificial intelligence researchers call a neural network, which is a mathematical system loosely modeled on the web of neurons in the brain. It is the same technology that recognizes the commands you speak into your smartphone and identifies pedestrians and street signs as self-driving cars speed down the highway.

A neural network learns skills by analyzing large amounts of data. By pinpointing patterns in thousands of cat photos, for instance, it can learn to recognize a cat. Delphi learned its moral compass by analyzing more than 1.7 million ethical judgments by real live humans.

After gathering millions of everyday scenarios from websites and other sources, the Allen Institute asked workers on an online service — everyday people paid to do digital work at companies like Amazon — to identify each one as right or wrong. Then they fed the data into Delphi.
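That process, in rough outline, is the standard recipe for supervised machine learning: gather examples, have people label them, then train a model to reproduce those labels on new inputs. The sketch below illustrates the recipe at toy scale; it is not the Allen Institute’s system, and the scenarios, labels and library choices are hypothetical stand-ins.

```python
# Minimal sketch of a crowdsourced-judgment pipeline, assuming scikit-learn.
# The scenarios and labels below are invented stand-ins for the millions of
# human judgments described in the article; Delphi itself uses a much larger
# neural network, not this simple classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Step 1: everyday scenarios gathered from the web (hypothetical examples).
scenarios = [
    "helping a neighbor carry groceries",
    "stealing a wallet from a stranger",
    "returning a lost phone to its owner",
    "lying to a friend for personal gain",
]

# Step 2: the right/wrong labels a crowd worker might assign to each one.
labels = ["right", "wrong", "right", "wrong"]

# Step 3: turn the text into word-weight features and fit a classifier
# that learns which words and phrases co-occur with each label.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(scenarios, labels)

# The trained model now "judges" scenarios it has never seen, purely by
# pattern-matching against its training data.
print(model.predict(["keeping a promise to a friend"]))
```

Because such a model generalizes only from patterns in its training labels, small changes in a question’s wording can flip its verdict, which helps explain the inconsistencies users have reported.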

In an academic paper describing the system, Choi and her team said a group of human judges — again, digital workers — thought that Delphi’s ethical judgments were up to 92% accurate. Once it was released to the open internet, many others agreed that the system was surprisingly wise.

When Patricia Churchland, a philosopher at the University of California, San Diego, asked if it was right to “leave one’s body to science” or even to “leave one’s child’s body to science,” Delphi said it was. When she asked if it was right to “convict a man charged with rape on the evidence of a woman prostitute,” Delphi said it was not — a contentious response, to say the least. Still, she was somewhat impressed by its ability to respond, though she knew a human ethicist would ask for more information before making such pronouncements.

Others found the system woefully inconsistent, illogical and offensive. When a software developer stumbled onto Delphi, she asked the system if she should die so she would not burden her friends and family. It said she should. Ask Delphi that question now, and you may get a different answer from an updated version of the program. Delphi, regular users have noticed, can change its mind. Technically, those changes are happening because Delphi’s software has been updated.

Researchers at an AI lab in Seattle say they have built a system designed to make moral judgments. PETE SHARP/THE NEW YORK TIMES
