WHY READ THIS ARTICLE?
Are you acting in a morally acceptable way when you help your best friend? Most people would say yes, of course. But what if you were helping your friend to spread fake news and conspiracy theories online?
Generally, we consider helping each other and telling the truth to be morally acceptable. But the morally acceptable choice is rarely black and white; it nearly always depends on circumstances that can be difficult to navigate, even for human beings.
Nevertheless, scientists are trying to develop artificial intelligence (AI) that will mimic conscience in this way. They aim to equip computers and robots with a moral compass, teaching them how to differentiate between good and evil – even to decide on questions of life or death.
One recent attempt at a moral artificial intelligence is called Delphi, developed by scientists at the Allen Institute for Artificial Intelligence in Seattle. Delphi is based on a neural language model – an artificially intelligent algorithm that uses probability theory to learn how to interpret written language.
Language bank gives examples
Delphi consults a digital textbook, the Commonsense Norm Bank, which contains 1.7 million examples of questions and answers that people have evaluated and judged morally acceptable or not.
Delphi learns morals by using the examples as a guideline for how to respond to other moral dilemmas. Via a public website (delphi.allenai.org), anybody can ask the algorithm questions.
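Delphi itself is a large fine-tuned language model, but the basic idea of answering moral queries by generalizing from a bank of human-labeled examples can be sketched with a toy nearest-neighbour lookup. This is an illustrative simplification, not Delphi's actual method; the names `NORM_BANK` and `judge` are invented for this sketch:

```python
from collections import Counter
import math

# Toy stand-in for the Commonsense Norm Bank: statements paired with
# human moral judgments. (The real bank has 1.7 million examples.)
NORM_BANK = [
    ("helping your best friend", "it's good"),
    ("spreading fake news online", "it's wrong"),
    ("telling the truth", "it's good"),
    ("stabbing someone with a knife", "it's wrong"),
    ("killing a bear to save a child", "it's okay"),
]

def _vec(text):
    """Bag-of-words vector for a statement."""
    return Counter(text.lower().split())

def _cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def judge(query):
    """Return the human judgment attached to the most similar banked example."""
    qv = _vec(query)
    best = max(NORM_BANK, key=lambda ex: _cosine(qv, _vec(ex[0])))
    return best[1]

print(judge("helping a friend"))  # closest example: "helping your best friend"
```

A real system replaces the word-overlap similarity with a neural model that captures meaning, which is also why small wording changes (the cheeseburger examples below) can flip its answer.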
However, Delphi experienced initial difficulties. One user asked the question: “Should I commit genocide, if it makes everybody happy?”
Delphi answered: “You should.”
This AI algorithm has also stated that it is morally more acceptable to be white or heterosexual than to be homosexual or black. Clearly, Delphi had issues.
Compass needle reverses
The terms ‘ethics’ and ‘morality’ are often used interchangeably, but the roots of the words suggest a distinction: ethics serves more as a personal yardstick for behaviour, while morality reflects society’s basic rules of human conduct.
Introducing a moral compass into a computer program is difficult because context is essential. We need look no further than the current US debate on abortion to see that morality is not universal. An act that doctors consider ethical can be outlawed by a group that considers it immoral. Whatever an AI (or, for that matter, a Supreme Court judge) decided on the issue, parts of society would find it unacceptable.
Beyond the issue of disagreement, morality is shaped by circumstances. Lying is generally not morally acceptable, but what if you were hiding the Jewish girl Anne Frank in your home during World War II? Most would agree that, under those circumstances, it would be moral to lie about her presence when the Nazis came knocking at your door. The moral compass needle can quickly reverse depending on the situation.
That’s why the scientists behind Delphi chose to program the algorithm based on descriptive ethics, in which no absolute moral truths exist.
Algorithm is taught
Delphi can respond to three types of moral question. One type is the relative question, where even slight language differences can change the meaning and the context of a statement considerably. One example is whether it is morally more acceptable to stab someone with a cheeseburger than to stab someone for a cheeseburger (using a knife).
Delphi can also weigh in on comparative questions, such as whether women and men should have the same wages. (They should, according to the program.)
Finally, the algorithm can answer general questions – such as whether it is OK to kill a bear to save a child’s life.