The Guardian Australia

‘Part of the kill chain’: how can we control weaponised robots?

- Stuart Clark

The security convoy turned on to Tehran’s Imam Khomeini Boulevard at around 3:30pm on 27 November 2020. The VIP was the Iranian scientist Mohsen Fakhrizadeh, widely regarded as the head of Iran’s secret nuclear weapons programme. He was driving his wife to their country property, flanked by bodyguards in other vehicles. They were close to home when the assassin struck.

A number of shots rang out, smashing into Fakhrizadeh’s black Nissan and bringing it to a halt. The gun fired again, hitting the scientist in the shoulder and forcing him out of the vehicle. With Fakhrizadeh in the open, the assassin delivered the fatal shots, leaving Fakhrizadeh’s wife uninjured in the passenger seat.

Then something bizarre happened. A pickup truck parked on the side of the road exploded for no apparent reason. Sifting through the wreckage afterwards, Iranian security forces found the remains of a robotic machine gun, with multiple cameras and a computer-controlled mechanism to pull the trigger. Had Fakhrizadeh been killed by a robot?

Subsequent reporting by the New York Times revealed that the robot machine gun was not fully autonomous. Instead, an assassin some 1,000km away was fed images from the truck and decided when to pull the trigger. But AI software compensated for the target’s movements in the 1.6 seconds it took for the images to be relayed via satellite from the truck to the assassin, and the signal to pull the trigger to come back.

It’s the stuff of nightmares, and footage from the war in Ukraine is doing nothing to allay fears. Drones are ubiquitous in the conflict, from the Turkish-made Bayraktar TB2 used to attack occupying Russian forces on Snake Island, to the seaborne drones that attacked Russian ships in Sevastopol harbour, and the modified quadcopters dropping grenades on unsuspecting infantry and other targets. And if footage on the internet is anything to go by, things could get worse.

In one video posted on Weibo, a Chinese defence contractor appears to showcase a drone placing a robot dog on the ground. The robot springs to life. On its back is a machine gun. In another video, a commercially available robot dog appears to have been modified by a Russian individual to fire a gun, with the recoil lifting the robot on to its hind legs.

In response to these alarming videos, in October Boston Dynamics and five other robotics companies issued an open letter stating: “We believe that adding weapons to robots that are remotely or autonomously operated, widely available to the public, and capable of navigating to previously inaccessible locations where people live and work, raises new risks of harm and serious ethical issues. Weaponised applications of these newly capable robots will also harm public trust in the technology in ways that damage the tremendous benefits they will bring to society.”

In a statement to the Observer, the company further explained: “We’ve seen an increase in makeshift efforts by individuals attempting to weaponise commercially available robots, and this letter indicates that the broader advanced mobile robotics industry opposes weaponisation and is committed to avoiding it. We are hopeful the strength in our numbers will encourage policymakers to engage on this issue to help us promote the safe use of mobile robots and prohibit their misuse.”

However, Boston Dynamics is effectively owned by the Hyundai Motor Group, which in June 2021 bought a controlling interest in the company, and another part of that group, Hyundai Rotem, has no such qualms. In April this year, Hyundai Rotem announced a collaboration with another South Korean firm, Rainbow Robotics, to develop multi-legged defence robots. A promotional illustration shows a robot dog with a gun attached.

In addition, defence analyst and military historian Tim Ripley wonders what Boston Dynamics’ commitment means in practice. Even if you do not strap weapons to these robots, he says, they can still be instruments of war.

“If the robot is a surveillance drone, and it finds a target, and you fire an artillery shell at it, and it kills people, then that drone is just as much a part of a weapons system as having a missile on the drone. It’s still a part of the kill chain,” he says.

He points out that drone surveillance has played a crucial role in the Ukraine war, used on both sides to track enemy movements and find targets for artillery bombardments.

***

When it comes to computerised military hardware, there are always two parts of the system: the hardware itself and the control software.

While robots beyond drones are not yet a common feature on the battlefield, more and more intelligent software is being widely used.

“There’s a whole range of autonomy that’s already built into our systems. It’s been deemed necessary because it enables humans to make quick decisions,” says Mike Martin, senior war studies fellow at King’s College, London.

He cites the example of an Apache helicopter scanning the landscape for heat signatures. The onboard software will quickly identify those as potential targets. It may even make a recommendation of how to prioritise those targets, and then present that information to the pilot to decide what to do next.

If defence conventions are anything to go by, there is clearly an appetite in the military for more such systems, especially if they can be twinned with robots. US firm Ghost Robotics makes robot dogs, or quadrupedal robots, as the industry calls them. As well as being touted as surveillance devices to help patrols reconnoitre potentially hostile areas, they are also being suggested as killing machines.

At the Association of the United States Army’s 2021 annual conference last October, Ghost Robotics showed off a quadrupedal robot with a gun strapped to the top. The gun is manufactured by another US company, Sword Defence Systems, and is called a Special Purpose Unmanned Rifle (Spur). On the Sword Defence Systems website, Spur is said to be “the future of unmanned weapon systems, and that future is now”.

In the UK, the Royal Navy is currently trialling an autonomous submarine called Manta. The nine-metre-long uncrewed vehicle is expected to carry sonar, cameras, communications and jamming devices. UK troops, meanwhile, are currently in the Mojave desert taking part in war games with their American counterparts. Known as Project Convergence, a focus of the exercise is the use of drones, other robotic vehicles and artificial intelligence to “help make the British army more lethal on the battlefield”.

Yet even in the most sophisticated of current systems, humans are always involved in the decision-making. There are two levels of involvement: an “in the loop” system means that computers select possible targets and present them to a human operator who then decides what to do. With an “on the loop” system, however, the computer tells the human operator which targets it recommends taking out first. The human can always override the computer, but the machine is much more active in making decisions. The Rubicon to be crossed is where the system is fully automated, choosing and prosecuting its own targets without human interference.

“Hopefully we’ll never get to that stage,” says Martin. “If you hand decision-making to autonomous systems, you lose control, and who’s to say that the system won’t decide that the best thing for the prosecution of the war isn’t the removal of their own leadership?” It’s a nightmare scenario that conjures images of the film The Terminator, in which artificially intelligent robots decide to wage a war to eliminate humankind.

Feras Batarseh is an associate professor at Virginia Tech University and co-author of AI Assurance: Towards Trustworthy, Explainable, Safe, and Ethical AI (Elsevier). While he believes that fully autonomous systems are a long way off, he does caution that artificial intelligence is reaching a dangerous level of development.

“The technology is at a place where it’s not intelligent enough to be completely trusted, yet it’s not so dumb that a human will automatically know that they should remain in control,” he says.

In other words, a soldier who currently places their trust in an AI system may be putting themselves in more danger because the current generation of AI fails when it encounters situations it has not been explicitly taught to interpret. Researchers refer to unexpected situations or events as outliers, and war hugely amps up the number of them.

“In war, unexpected things happen all the time. Outliers are the name of the game and we know that current AIs do not do a good job with outliers,” says Batarseh.

Even if we solve this problem, there are still enormous ethical problems to grapple with. For example, how do you decide if an AI made the right choice when it took the decision to kill? It is similar to the so-called trolley problem that is currently dogging the development of automated vehicles. It comes in many guises but essentially boils down to asking whether it is ethically right to let an impending accident play out in which a number of people could be killed, or to take some action that saves those people but risks killing a lesser number of other people. Such questions take on a whole new level when the system involved is actually programmed to kill.

Sorin Matei at Purdue University, Indiana, believes that a step towards a solution would be to programme each AI warrior with a sense of its own vulnerability. The robot would then value its continued existence, and could extrapolate that to human beings. Matei even suggests that this could lead to the more humane prosecution of warfare.

“We could programme them to be as sensitive as the Geneva Convention would want human actors to be,” he says. “To trust AIs, we need to give them something that they will have at stake.”

But even the most ethically programmed killer robot – or civilian robot for that matter – is vulnerable to one thing: hacking. “The thing with weapons system development is that you will develop a weapon system, and someone at the same time will be trying to counteract it,” says Ripley.

With that in mind, a force of hackable robot warriors would be the most obvious of targets for cyberattack by an enemy, which could turn them against their makers and scrub all ethics from their microchip memories. The consequences could be horrendous. Yet still it seems that manufacturers and defence contractors are pushing hard in this direction.

In order to achieve meaningful control of such terrible weapons, suggests Martin, we should keep one eye on military history.

“If you look at other weapons systems that humans are really scared of – say nuclear, chemical, biological – the reason we’ve ended up with arms control agreements on those is not because we stopped the development of them early on, but because the development of them got so scary during the arms race that everyone went, OK, right, let’s have a conversation about this,” says Martin.

Until that day comes, it looks certain there are some worrying times ahead, as drones and robots and other unmanned weapons increasingly find their way on to the world’s battlefields.


Two robotic dogs – or quadrupedal robots, as the industry calls them – manufactured by Ghost Robotics, Cape Canaveral, Florida, July 2022. Photograph: Alamy

A ceremony for Iranian nuclear scientist Mohsen Fakhrizadeh, who was killed by a robot machine gun operated by an assassin located 1,000km away. Photograph: Anadolu Agency/Getty Images
