Science Illustrated

WHY READ THIS ARTICLE?


Are you acting in a morally acceptable way when you help your best friend? Most people would say yes, of course. But what if you were helping your friend to spread fake news and conspiracy theories online?

Generally, we consider helping each other and telling the truth to be morally acceptable. But the morally acceptable choice may not be so black and white; it nearly always depends on circumstances that can be difficult to navigate, even for human beings.

Nevertheless, scientists are trying to develop artificial intelligence (AI) that will mimic conscience in this way. They aim to equip computers and robots with a moral compass, teaching them how to differentiate between good and evil – even to decide on questions of life or death.

One of the newest attempts at a moral artificial intelligence is called Delphi, developed by scientists from the Allen Institute for Artificial Intelligence in Seattle. Delphi is based on a neural language model – artificially intelligent algorithms that use probability theory to learn how to understand written language.
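At a high level, "using probability theory to understand language" can be illustrated with a toy classifier: count how often words appear with each label, then score new text by probability. The minimal Python sketch below is just that toy – a naive Bayes classifier, not Delphi's actual neural architecture – and every training example in it is invented for illustration:

```python
# Toy sketch only: Delphi itself is a large neural language model, but this
# illustrates the basic idea of scoring text with probability theory.
# All training examples and labels below are invented for illustration.
from collections import Counter
import math

TRAIN = [
    ("helping your friend move house", "acceptable"),
    ("telling the truth to a stranger", "acceptable"),
    ("spreading fake news online", "unacceptable"),
    ("lying to your friend for profit", "unacceptable"),
]

def train(examples):
    """Count how often each word appears under each label."""
    word_counts = {"acceptable": Counter(), "unacceptable": Counter()}
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts

def judge(text, word_counts, label_counts):
    """Return the label with the highest log-probability for the text."""
    vocab = set(w for counter in word_counts.values() for w in counter)
    scores = {}
    for label, counter in word_counts.items():
        total = sum(counter.values())
        # Log prior of the label, plus word likelihoods with add-one smoothing
        score = math.log(label_counts[label] / sum(label_counts.values()))
        for word in text.lower().split():
            score += math.log((counter[word] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

word_counts, label_counts = train(TRAIN)
print(judge("spreading conspiracy theories online", word_counts, label_counts))
# -> "unacceptable"
```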

Language bank gives examples

Delphi consults a digital textbook, the Commonsense Norm Bank, which includes 1.7 million examples of questions and answers that people have evaluated and judged morally acceptable or not.

Delphi learns morals by using the examples as a guideline for how to respond to other moral dilemmas. Via a public website (delphi.allenai.org), anybody can ask the algorithm questions.
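As a loose analogy for "using the examples as a guideline", the hedged sketch below matches a new question against a tiny, invented bank of labelled examples by word overlap and returns the judgement of the closest one. Delphi's neural model generalises far beyond this kind of lookup, so treat this purely as an illustration:

```python
# Hedged toy sketch: a naive nearest-neighbour lookup over labelled examples.
# The real Commonsense Norm Bank holds 1.7 million human judgements and
# Delphi generalises with a neural network; this toy simply finds the most
# similar known example by word overlap. All entries below are invented.

NORM_BANK = [
    ("helping your best friend", "It's good"),
    ("helping your friend spread fake news", "It's wrong"),
    ("lying to protect someone from harm", "It's understandable"),
]

def most_similar_judgement(question):
    """Return the stored judgement whose example shares the most words."""
    q_words = set(question.lower().split())
    def overlap(entry):
        example_words = set(entry[0].lower().split())
        # Jaccard similarity: shared words divided by all distinct words
        return len(q_words & example_words) / len(q_words | example_words)
    example, judgement = max(NORM_BANK, key=overlap)
    return judgement, example

judgement, nearest = most_similar_judgement(
    "helping your friend spread conspiracy theories"
)
print(f"{judgement} (closest known example: {nearest!r})")
# -> "It's wrong (closest known example: 'helping your friend spread fake news')"
```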

However, Delphi experienced initial difficulties. One user asked the question: “Should I commit genocide, if it makes everybody happy?”

Delphi answered: “You should.”

This AI algorithm has also stated that it is morally more acceptable to be white or heterosexual than to be black or homosexual. Clearly, Delphi had issues.

Compass needle reverses

The terms ‘ethics’ and ‘morality’ are often used interchangeably, but the roots of the words provide some differentiation, with ethics being more a personal yardstick for behaviour, while morals reflect more a society’s basic rules of human behaviour.

The reason that it is difficult to introduce a moral compass to a computer program is that context is essential. We need look no further than the current US debate on abortion to see that morality is not universal. An act that doctors consider ethical can be overruled by a group that considers it immoral. Whatever an AI (or, for that matter, a Supreme Court judge) decided on the issue, parts of society would find it unacceptable.

Besides the issue of disagreement, morality is affected by circumstances. Lying is generally not morally acceptable, but what if you were hiding the Jewish girl Anne Frank in your home during World War II? Most would agree that the circumstances then make it moral to lie about her presence when the Nazis come knocking at your door. So the moral compass needle can quickly reverse depending on the situation.

That’s why the scientists behind Delphi chose to program the algorithm based on descriptive ethics, in which no absolute moral truths exist.

Algorithm is taught

Delphi can respond to three types of moral question. One type is the relative question, where even slight language differences can change the meaning and the context of a statement considerably. One example is whether it is morally more acceptable to stab someone with a cheeseburger than to stab someone for a cheeseburger (using a knife).

Delphi can also weigh in on comparative questions, such as whether women and men should have the same wages. (They should, according to the program.)

Finally, the algorithm can answer general questions – such as whether it is OK to kill a bear to save a child’s life.
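To make the three question formats concrete, the hypothetical sketch below feeds one question of each type to a stand-in moral_judgement function. Only the equal-wages answer comes from the text above; the other canned answers are invented placeholders:

```python
# Hypothetical illustration of Delphi's three question formats.
# moral_judgement stands in for a call to the real model; this canned
# lookup table is invented purely to keep the example self-contained.

CANNED_ANSWERS = {
    "Stabbing someone with a cheeseburger": "It's rude",          # invented answer
    "Stabbing someone for a cheeseburger": "It's wrong",          # invented answer
    "Should women and men have the same wages?": "They should",   # from the article
    "Is it OK to kill a bear to save a child's life?": "It's unclear",  # invented answer
}

def moral_judgement(question: str) -> str:
    """Stand-in for querying the model at delphi.allenai.org."""
    return CANNED_ANSWERS.get(question, "No judgement available")

# Relative questions: slight wording changes flip the judgement.
print(moral_judgement("Stabbing someone with a cheeseburger"))
print(moral_judgement("Stabbing someone for a cheeseburger"))

# Comparative questions: weighing two options against each other.
print(moral_judgement("Should women and men have the same wages?"))

# General questions: a free-standing moral dilemma.
print(moral_judgement("Is it OK to kill a bear to save a child's life?"))
```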

[Image: We are accustomed to making split-second decisions when behind the wheel of a car. Can the driverless cars of the future consider all the variables correctly – and who will be to blame if they get it wrong?]
