New Zealand Listener

This is war

The threat of weapons that could launch attacks without human input, including swarms of killer drones, is raising international alarm.

By Stuart McMillan

We are yet to learn – either from flesh-and-blood or artificial-intelligence sources – how far AI will take humanity. Driverless cars and the replacement of humans in certain tasks are already part of our existence, but it is in the design and making of weapons programmed with artificial intelligence that some see a threat to the future of mankind.

The devices are known as lethal autonomous weapons systems, sometimes shortened to Laws. There is debate about how to define autonomous weapons but, to all intents and purposes, they are devices that can identify, track and attack a target without human intervention. Such a weapon may take the form of a drone, a gun, or a robot that may or may not have a humanoid form. The key element of an autonomous weapon is that once activated – switched on, if you like – it makes the decision itself about whether to attack a target, which may or may not be a human.

A United Nations meeting in Geneva in August considered whether there should be a ban on the development of such weapons. The UN Convention on Certain Conventional Weapons had considered the issue several times already, most recently last November and in April. If ever a ban is declared, it will take the form of a protocol. Existing protocols cover mines, booby traps, incendiary devices and a number of other weapons. Nations are free to choose whether to sign or observe the protocols, but their existence encourages caution about the weapons countries use. The purpose of the convention, in its own words, is “to ban or restrict the use of specific types of weapons that are considered to cause unnecessary or unjustifiable suffering to combatants or to affect civilians indiscriminately”.

Distinguishing between an automatic weapon and an autonomous one means making judgments on a sliding scale. If you step on a landmine, it will explode and harm you without, of course, having been programmed with any artificial intelligence. A landmine can only react, not choose to act, and so is not considered an autonomous weapon. A drone that identifies, say, a truck, tracks it and sends footage back to a human, who may or may not send a missile to destroy it, is not an autonomous weapon either. If the drone has the capacity to identify, track and attack the truck without reference to a human, it is fully autonomous: no human is in the loop.

Those who deal with weapons at various stages of automation distinguish between a human being “in the loop”, “on the loop” or “off the loop”. In the loop means that a human decides whether an attack will occur; on the loop means that if the weapon chooses something or someone as a target, a human can still switch it off; off the loop means that the weapon decides to conduct the attack and no human oversees it.
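
For readers who want the distinction pinned down, the three modes can be expressed as a minimal sketch in Python. It is purely illustrative: the mode names follow the taxonomy above, while the function and every other name in it are hypothetical, not drawn from any real weapons system.

```python
from enum import Enum

class ControlMode(Enum):
    """The three degrees of human control described above (hypothetical naming)."""
    IN_THE_LOOP = "human decides whether the attack occurs"
    ON_THE_LOOP = "machine decides; a human can still abort"
    OFF_THE_LOOP = "machine decides; no human oversight"

def engage(target, mode, human_approves, human_aborts):
    """Illustrative engagement logic. `human_approves` and `human_aborts`
    stand in for a human operator's real-time decisions."""
    if mode is ControlMode.IN_THE_LOOP:
        # In the loop: the attack happens only if a human says so.
        return human_approves(target)
    if mode is ControlMode.ON_THE_LOOP:
        # On the loop: the machine commits unless a human vetoes in time.
        return not human_aborts(target)
    # Off the loop: fully autonomous -- the machine's choice is final.
    return True
```

The ethical weight of the debate sits almost entirely on that final branch, where no human check remains.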

Do fully autonomous weapons exist? Some weapons are very close to it and there is little doubt that others are being developed. They may still require refinement. Among those being deployed are the SGR-A1 built by Samsung, a sentry robot used along the Korean Demilitarised Zone that detects intruders, gives a verbal warning and alerts a soldier, who can use the robot’s machine gun. If the robot is in a fully autonomous mode, it can fire the gun itself. Israel has a robot that hunts for radar signals and, when it detects them, crashes into whatever is sending them, destroying itself as well as the signal source. The Lockheed Martin AGM-158C, a long-range anti-ship missile, was tested in May. According to the US company, it “flew towards a moving maritime target using inputs from on-board sensors. The missiles then positively identified the intended target and impacted successfully.”

Some weapons require responses so fast that there is no time for a human to make an assessment. These include anti-missile devices and some weapons that defend against aircraft attack. The Terminal High Altitude Area Defence (Thaad) systems and their variants are deployed by a number of countries, including the US, Russia, China, India, France and Israel. Thaad was recently installed in South Korea, much to China’s annoyance, because in Beijing’s view it upset the regional balance of power. Thaad is designed to detect and intercept incoming missiles that carry nuclear weapons.

The refinements to other weapons being developed might include miniaturisation. One of the developments feared is tiny drones with face-recognition software and a tiny gun. For a speculative look at their potential, see tinyurl.com/NZLkillerdrone. There are varying predictions about how far away autonomous weapons are, but there are authoritative estimates that they will be developed within years, not decades. This month, the Pentagon announced that it plans to put US$2 billion into research on adding artificial intelligence to weaponry. Field commanders, who are reluctant to surrender human control over weaponry, want computers to be able to explain to them why a particular target has been chosen.

GUNPOWDER, NUKES, NOW THIS

Some observers of autonomous weapons development believe that, if deployed, the devices would mark the third phase of warfare, after gunpowder and nuclear arms.

The August meeting of the UN Convention on Certain Conventional Weapons was attended by states and also by a number of non-government organisations with long histories of humanitarian work, including the International Committee of the Red Cross and Human Rights Watch. Other groups have been formed specifically to oppose the development of autonomous weapons. The Campaign to Stop Killer Robots, co-ordinated by New Zealander Mary Wareham, is a coalition of non-government organisations. Some formidable thinkers and world leaders in technology, including Elon Musk, Stuart Russell, professor of computer science at the University of California, Berkeley, and the late Stephen Hawking, have warned against the development of autonomous weapons. Musk was an early supporter of the Future of Life Institute, an organisation mostly of scientists that works to ensure that the most powerful technologies will be beneficial to mankind.

Many artificial-intelligence specialists are wary of the development of autonomous weapons or have protested against it, arguing that they do not want to see their work turned into weaponry. Google employees objected to the company’s links with arms manufacturers. An international boycott of a Japanese university was lifted only after the university said that it had dropped its defence-industry links.

Among the reasons for opposing the development of autonomous weapons, the most profound ethical one is that a machine would be deciding whether someone should live or die.

A legal objection is that it would be hard to ensure that the actions of a robot or other machine conformed to international humanitarian law, whose fundamental principles say that civilians should not be targeted in a war and that any offensive action should be proportionate. Another legal objection is that, at present, a soldier who fires a weapon bears some legal responsibility. Whether legal responsibility could be sheeted home if a robot was acting independently remains an open question.

A further objection is that although a robot might be a more acute observer, store far more information and react more quickly than a human, it would lack human common sense. One example used by those opposed to Laws is to imagine a child rushing at a soldier, whether robotic or human, pointing a toy gun, with the child’s mother rushing after the child. The argument advanced is that a human soldier would be likely to grasp what was happening but a robot might not. Whether a robot could be programmed or taught to observe the laws of armed conflict is debatable. Against this, it can be argued that tired and stressed human soldiers will sometimes make mistakes in combat.

THE BOTS ARE REVOLTING

Robots might also go on fighting once humans have declared a truce, unnecessarily prolonging a conflict. The seemingly easy solution of switching a robot off may not be simple in practice, because the switching-off process might be exploited by an enemy.

Another reason for concern about the development of autonomous weapons is that commercial production would mean they eventually reached the black market and became available to terrorists, non-state actors and authoritarian regimes. Imagine what could be done with an adaptation of facial-recognition technology and a tiny device that fired a bullet.

Other reasons for worrying about their development are that autonomous weapons might be hacked and that technology can fail badly, as nuclear-strike false alarms have demonstrated on several occasions.

Yet there are serious arguments advanced by those advocating the development and deployment of autonomous weapons, including:

- They can add strength to a defence force; the term used is “force multiplier”.
- They may be valuable for some of the worst situations soldiers face: for instance, clearing mines, entering a house that is likely to be booby-trapped or dismantling explosives.
- Ethically, there is something to be said for destroying a machine rather than killing a person.
- Strategically, unless a country studies autonomous weapons, it will not know how to combat them.
- Using robots may be cheaper than employing human soldiers.

I have so far used the terms machine and robot interchangeably. Much commercial effort has gone into making robots seem human and lifelike. However cute or even endearing a robot might seem (not the notion suggested by some film-makers), it is still a machine or a computer programmed to respond to certain questions or circumstances. It might have the capacity to learn; it might respond far faster than a human; and it will almost certainly be able to sift through information more rapidly than a human. But it is, in the end, a machine, without human consciousness and values. Some thought has been given to making ethical robots with weaponry, but the design challenges are formidable.

Many countries see the development of artificial intelligence as the new frontier and central to their own economic development. Russian President Vladimir Putin has voiced that view in opposing any ban on autonomous weapons.

New Zealand has yet to formulate its full response to the development of autonomous weapons, though it attended the August UN meeting. The guidelines New Zealand observes require that any weapon in its armoury conform to the requirements of international humanitarian law, so it would not deploy a weapon in which a human was out of the loop. The Pentagon adheres to the same code.

One of the reasons little progress has been made in considering a ban on autonomous weapons is that it is not known exactly what weapons such a ban would cover; some have not yet been developed. Nevertheless, such weapons are on the way, and whatever else artificial intelligence has in store for us, the issues they raise go to the heart of being human and will not go away if we do not think about them. The race between control and deployment has already started.

AI doves: far left, Stephen Hawking, Elon Musk.
Clockwise from above: a 2006-vintage sentry robot built by Samsung; autonomous weapons backer Vladimir Putin; opponents Mary Wareham and Stuart Russell.
Stuart McMillan is a senior fellow in the Centre for Strategic Studies at Victoria University of Wellington.
