CAN MACHINES MAKE MORAL DECISIONS?
THE WORLD is increasingly characterised by a fusion of technologies that blurs the lines between the physical, digital, and biological spheres, so much so that we now refer to these as “cyber-physical” systems.
We even have a term for this – the Fourth Industrial Revolution (4IR).
We all desire a data-driven artificial intelligence world that will serve us unconditionally. We want machines to act on, and even anticipate, our every whim and fancy: to clean our homes (robots), to monitor us (lifelogging), to transport us (autonomous vehicles), and to make stock market decisions on our behalf (automated trading).
Machines have no empathy and are amoral. The question I am intrigued by is: “Can machines be designed to make moral decisions?”
Let’s try a poser to see if we truly appreciate the context. Your car is careering out of control. You can steer in just two directions. On the left-hand side is a group of six children; steering into them will almost certainly save your life, but at the ultimate cost of theirs.
On the right-hand side stands a 100-year-old oak tree, and for you, death is certain: the oak will make no effort to absorb the momentum. What choice will you make? Heroic death, or survival with eternal self-blame?
The former is the choice helicopter pilot Eric Swaffer faced last year, when he selflessly chose a fiery death for himself and his passenger, Leicester City soccer club chairman Vichai Srivaddhanaprabha, over the lives of supporters on the ground, by crash-landing in open space away from them.
Now imagine that you are a programmer. How will you program an autonomous self-driving vehicle to react to this moral dilemma?
If we used an Artificial Intelligence (AI) or observational data-driven deep-learning system, the vehicle may well learn and “see through our hypocrisy”, understand our subtle survival me-first instincts, and career into the kids.
Imagine if machines had to algorithmically decide on hospital care, by unemotionally looking at the big picture of resource availability – finances, beds, surgeons, and medical equipment – and the potential return on investment from treating you.
This form of ethical reasoning is called consequentialism, which means the decision should be judged in terms of its consequences.
I somehow think that you would prefer to have even a supposedly indifferent human nurse at reception, over an indifferent machine, if the decision concerned your loved one!
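To make the idea concrete, here is a minimal sketch of what such unemotional, consequentialist triage might look like in code. Everything in it – the function names, the benefit and cost figures, the patients – is invented purely for illustration, not drawn from any real hospital system.

```python
# Toy sketch (not a real system): a consequentialist triage rule that
# ranks patients purely by expected benefit per unit of scarce resource.
# All names and numbers below are invented for illustration.

def consequentialist_score(expected_benefit, resource_cost):
    """Judge a treatment by its consequences alone: benefit divided by cost."""
    if resource_cost <= 0:
        raise ValueError("resource cost must be positive")
    return expected_benefit / resource_cost

def triage(patients):
    """Admit patients in order of best consequences, ignoring everything else."""
    return sorted(
        patients,
        key=lambda p: consequentialist_score(p["benefit"], p["cost"]),
        reverse=True,
    )

patients = [
    {"name": "A", "benefit": 30, "cost": 10},  # young patient, cheap treatment
    {"name": "B", "benefit": 5,  "cost": 1},   # minor ailment, trivial cost
    {"name": "C", "benefit": 40, "cost": 50},  # major surgery, heavy resources
]
order = triage(patients)
print([p["name"] for p in order])  # → ['B', 'A', 'C']
```

Note what the rule leaves out: patient C, who stands to gain the most, is ranked last because the consequences per unit of resource look poor. That coldness is exactly the discomfort the column describes.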
It is probable that AI-driven machines such as robots would more likely harm humans while carrying out operations than collude and rise against us. British tech philosopher Tom Chatfield has been very helpful with this challenge: “If my self-driving car is prepared to sacrifice my life in order to save multiple others, this principle should be made clear (to me) in advance together with its exact parameters. I might or might not agree, but I can’t say I wasn’t warned.”
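Chatfield’s principle – that the sacrifice rule and its exact parameters be disclosed in advance – could be sketched as an explicit, inspectable policy rather than a value buried inside a learned model. The class, parameter name, and threshold below are hypothetical, invented only to illustrate the idea.

```python
# Toy sketch of Chatfield's transparency principle: the vehicle's
# sacrifice rule is an explicit parameter the owner must see and
# acknowledge before driving, not a hidden weight in a neural network.
# All names here are invented for illustration.

class DisclosedCrashPolicy:
    def __init__(self, sacrifice_occupant_threshold=2):
        # The exact parameter, stated in advance: sacrifice the occupant
        # when at least this many other lives would otherwise be lost.
        self.threshold = sacrifice_occupant_threshold
        self.acknowledged = False

    def disclose(self):
        """State the principle in plain language, so the owner is warned."""
        return (f"This vehicle will sacrifice its occupant to save "
                f"{self.threshold} or more other lives.")

    def acknowledge(self):
        """The owner confirms they have read the disclosed principle."""
        self.acknowledged = True

    def decide(self, others_at_risk):
        if not self.acknowledged:
            raise RuntimeError("Policy must be disclosed and acknowledged first")
        if others_at_risk >= self.threshold:
            return "sacrifice occupant"
        return "protect occupant"

policy = DisclosedCrashPolicy(sacrifice_occupant_threshold=2)
print(policy.disclose())   # the owner is warned up front
policy.acknowledge()
print(policy.decide(others_at_risk=6))  # → sacrifice occupant
```

The owner might or might not agree with the threshold, but – as Chatfield puts it – they can’t say they weren’t warned.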
It seems data scientists have much more to consider and learn than the already exciting combo of maths, statistics and computer science.
Dr Colin Thakur is the KZN NEMISA CoLab director. This effort is part of the “Knowledge for Innovation project”, and is our contribution towards #BuildingACapable4IRArmy.