
CAN MACHINES MAKE MORAL DECISIONS?


THE WORLD is increasingly characterised by a fusion of technologies that is blurring the lines between the physical, digital, and biological spheres, so much so that we now refer to these as “cyber-physical” systems.

We even have a term for this – the Fourth Industrial Revolution (4IR).

We all desire a data-driven artificial intelligence world that will serve us unconditionally. We want machines to act on, and even anticipate, our every whim and fancy: to clean our homes (robots), to monitor us (lifelogging), to transport us (autonomous vehicles), and to make stock market decisions on our behalf (automated trading).

Machines have no empathy and are amoral. The question I am intrigued by is: “Can machines be designed to make moral decisions?”

Let’s try a poser to see if we truly appreciate the context. Your car is careering out of control. You can steer in just two directions. On the left-hand side is a group of six children; steering into them would almost certainly save your life, but at the ultimate cost of theirs.

On the right-hand side, your death is certain: a 100-year-old oak tree stands there, and it will make no effort to absorb the momentum. What choice will you make? Heroic death, or survival with eternal self-blame?

The former is what helicopter pilot Eric Swaffer faced last year when he selflessly chose a fiery death for himself and his passenger, Leicester City soccer club chairman Vichai Srivaddhanaprabha, by crash-landing in open space, away from a group of supporters.

Now imagine that you are a programmer. How will you program an autonomous self-driving vehicle to react to this moral dilemma?

If we used an Artificial Intelligence (AI) or observational, data-driven deep-learning system, the vehicle may well learn to “see through our hypocrisy”, understand our subtle, me-first survival instincts, and career into the kids.
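To see how stark the programmed alternative is, consider a deliberately over-simplified sketch in Python. The function, names and rule below are my own hypothetical illustration, not any manufacturer’s actual logic:

```python
# Hypothetical illustration only: the swerve decision as an explicit,
# programmer-authored rule. No real vehicle reduces ethics to one function.

def choose_swerve(bystanders_left: int, bystanders_right: int) -> str:
    """Return "left" or "right". The rule here (minimise harm to
    bystanders, even at the occupant's expense) is a moral decision
    a human programmer made long before any crash."""
    return "left" if bystanders_left <= bystanders_right else "right"

# The article's dilemma: six children on the left, a fatal tree on the right.
print(choose_swerve(bystanders_left=6, bystanders_right=0))  # -> "right"
```

Whichever rule you write, the point stands: the moral trade-off is made by a human, in advance, and merely executed by the machine.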

Imagine if machines had to algorithmically decide on hospital care by unemotionally looking at the big picture of resource availability – finance, beds, surgeons and medical equipment – and the potential return on investment after treating you.

This form of ethical reasoning is called consequentialism, which means the decision should be judged in terms of its consequences.
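In code, consequentialist reasoning reduces each option to a predicted-outcome score and picks whichever option scores highest. Here is a minimal, hypothetical sketch; the patients, fields and weights are invented for illustration, not drawn from any real triage system:

```python
# Hypothetical consequentialist triage scorer. The weights encode a value
# judgement: change them, and the "right" decision changes with them.

def expected_benefit(patient: dict, weights: dict) -> float:
    """Score a treatment decision purely by its predicted consequences."""
    return (weights["survival"] * patient["survival_probability"]
            + weights["life_years"] * patient["life_years_gained"]
            - weights["cost"] * patient["treatment_cost"])

patients = [
    {"name": "A", "survival_probability": 0.9,
     "life_years_gained": 5, "treatment_cost": 10_000},
    {"name": "B", "survival_probability": 0.4,
     "life_years_gained": 40, "treatment_cost": 60_000},
]
weights = {"survival": 100.0, "life_years": 10.0, "cost": 0.001}

# Allocate the one available bed to whoever maximises expected benefit.
best = max(patients, key=lambda p: expected_benefit(p, weights))
print(best["name"])  # -> "B"
```

Nothing in that score knows or cares whose loved one is on the list; the value judgement hides quietly inside the weights.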

I somehow think that you would prefer even a supposedly indifferent human nurse at reception over an indifferent machine if the decision concerned your loved one!

It is more probable that AI-driven machines such as robots will harm humans while carrying out their ordinary operations than that they will collude and rise against us. British tech philosopher Tom Chatfield has been very helpful with this challenge: “If my self-driving car is prepared to sacrifice my life in order to save multiple others, this principle should be made clear (to me) in advance together with its exact parameters. I might or might not agree, but I can’t say I wasn’t warned.”
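Chatfield’s demand amounts to making a vehicle’s ethics a published, inspectable parameter rather than a buried one. A hypothetical sketch of what such a disclosure could look like (the class and field names are my own invention):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EthicsDisclosure:
    """Hypothetical sketch of Chatfield's principle: the sacrifice
    policy is an explicit parameter shown to the owner in advance."""
    will_sacrifice_occupant: bool   # may the car choose the oak tree?
    min_bystanders_spared: int      # how many lives outweigh the owner's?

    def summary(self) -> str:
        if not self.will_sacrifice_occupant:
            return "This vehicle always prioritises its occupant."
        return (f"This vehicle will sacrifice its occupant to spare "
                f"{self.min_bystanders_spared} or more bystanders.")

# Shown, and agreed to, before the first drive: "I can't say I wasn't warned."
policy = EthicsDisclosure(will_sacrifice_occupant=True, min_bystanders_spared=2)
print(policy.summary())
```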

It seems data scientists have much more to consider and learn than the already exciting combo of maths, statistics and computer science.

Dr Colin Thakur is the KZN NEMISA CoLab director. This effort is part of the “Knowledge for Innovation project”, and is our contribution towards #BuildingACapable4IRArmy.
