The Post

Killer robots, but with ethics

- Steve Evans

The Australian Defence Force is set to spend more than A$5 million (NZ$5.2m) to develop battlefield robots which have a sense of morality.

It’s commissioned the University of New South Wales in Canberra and the University of Queensland to find ways to make machines behave in an ethical way in wartime.

The universities will spend an additional A$3.5m on what would be world-leading research.

Robots are increasingly being used by the military but there is a fear that, because they have no sense of right and wrong, they might commit atrocities when left to their own devices.

The idea is to identify what humans believe to be right and wrong and then program that into machines so they behave in the way humans would want them to.

If a human would not shoot at schoolchildren crossing a market square, the researchers want to find a way of getting robot soldiers to hold their fire in the same way.

It might be possible, for example, to teach killer robots to identify a Red Cross symbol on a vehicle and decide not to shoot at it.

A team of ethicists and engineers is being assembled at UNSW’s Canberra campus to work out technical ways of embedding human morality into machines. The work will involve surveying members of the public and the military to see what they think is acceptable behaviour.

The work is being led by Dr Jai Galliott, who has a background both as a philosopher and as a military man in the Royal Australian Navy. The money is being channelled through Australia’s Defence Co-operative Research Centre.

Many ethical dilemmas tax philosophers when they think about war, particularly the question of how many collateral deaths may be acceptable to destroy an important military target.

Galliott cited a case in which two Nato rockets hit a train packed with civilians as it crossed a targeted bridge in Serbia in 1999. The rockets had no sense of right or wrong, and so didn’t abort the attack when civilians suddenly appeared on the target – even if the technology had allowed them to do so.

Galliott said there might be a way to program the missiles of the future to recognise large, moving civilian objects and not hit them if they suddenly come into view.

There are two parts to the problem: first, working out what is right and wrong on a battlefield and, second, finding ways of putting that into machines.

Accordingly, the work will involve philosophers as well as computer coders and engineers.

The engineers would develop technologies like pattern recognition, so that war robots could recognise shapes and movements to better identify targets and non-targets.

The other side is the human element.

“The idea is to figure out when a human would say ‘stop’, and build that into the system,” Galliott said.

As artificial intelligence develops, there have been increasingly loud concerns from some of the world’s leading scientists about its potential implications.

Might a machine become so intelligent it could override its human designer?

In the past, this was a question for the world of science fiction. Think of the movie Robocop, in which a company develops a heavily armed robot police officer which (spoiler alert) turns on its board of directors in the final scene.

That world is now much nearer. There are already robot sentries on the border between North and South Korea, for example. Their full automation has been turned off, according to the South Korean Government, to prevent them hitting innocent, non-threatening people. Their guns can only be triggered by human soldiers.

But there are many “lethal autonomous weapons” which can independently search for and engage targets – albeit, usually, with a human pulling the trigger (whether on a battlefield or from a monitor in, for example, Nevada).

As technology advances, the human element may become less necessary. Robots are becoming more autonomous – more intelligent.

The task of the researchers is to program in more constraints to stop tragedies happening.

The Australian Defence Force is now at the forefront of developing that technology.

Robocop was science fiction in 1987 – but can he learn right and wrong now?
