The Week

The new face of war

Has sci-fi become reality?


Are such weapons already in use?

They’ve been around for decades. The origins of autonomous weapons – which can, as it were, think for themselves – are usually traced to WWII, when German acoustic torpedoes were designed to home in on enemy ships’ engines. In the Cold War, computers and precision guidance technology made possible missile defence systems such as Patriot and Aegis, which were able to detect, track and shoot down incoming missiles without human intervention. Now, thanks to the growth of robotics and artificial intelligence, this class of armaments is expanding fast, and “lethal autonomous weapons systems” – able to select, locate and destroy targets without requiring human control – are causing concern across the world.

What sort of weapons are available?

There’s Samsung’s SGR-A1 sentry robot, installed on South Korea’s border with North Korea – a machine-gun-toting “perfect guard” with heat and motion detectors able to identify and shoot targets two miles away. Russia has built robot tanks, while the US navy is planning a “ghost fleet” of autonomous warships. The US robotics pioneer Boston Dynamics is working on Atlas, a semi-autonomous humanoid robot, for search and rescue purposes. But the most promising technology doesn’t look like the Terminator: it’s the drone. The Harpy is an Israeli “loitering munition”, which patrols a given area until it detects radar signals, then slams into the source, destroying the radar. BAE Systems is developing Taranis, a jet drone that can carry out missions autonomously. The US is the world leader: its Defense Advanced Research Projects Agency (Darpa) is experimenting with “drone swarms”.

What are drone swarms?

Hundreds of unmanned aerial vehicles working together to break through enemy defences, sharing information and coordinating attacks with minimal assistance from human controllers (a major issue for drones at present is that they can be disabled by jamming their control communications). Darpa is separately developing lightweight drones that are able to fly inside buildings or through thick foliage at speeds of up to 45mph, without communicating with a controller. Israel, the US and France, among others, are working on insect-sized drones. Such devices could be used for reconnaissance or, the Pentagon has projected, to deliver “micro-explosives” or bioweapons.

Is this real, or just sci-fi?

Killer drones would be relatively simple to develop, thanks to advances in visual recognition and decision-making algorithms. Later this year, anyone will be able to buy a Skydio R1 for $1,999 – a drone which can follow and film a designated human while avoiding complex obstacles. Adapting this sort of tech to military ends is simpler than building self-driving cars, according to Prof Stuart Russell of Berkeley’s computer science department. A drone could be taught, for instance, to fire at anyone wearing a particular uniform or holding a gun.

Why would top brass want these?

Robots are cheap, and you can deploy them without risking casualties on your own side. “They don’t get hungry. They’re not afraid. They don’t forget their orders,” says Gordon Johnson of the Pentagon’s Joint Forces Command. They might even behave better on the battlefield. They wouldn’t seek revenge; they could even be programmed to obey the laws of war. But perhaps the most vital consideration is the sheer speed at which modern warfare takes place – humans simply won’t be able to keep up. Most advanced nations now assume that autonomous weapons will play a crucial part in warfare. Last year, Zeng Yi, a top executive at China’s third-largest defence firm, declared: “In future battlegrounds, there will be no humans fighting.”

And what about the disadvantages?

“The prospect of machines with the discretion and power to take human life is morally repugnant,” said the UN Secretary-General António Guterres this year. Autonomous weapons would lower the threshold for going to war, because nations wouldn’t need to risk their own soldiers. They could be easily adapted to horrific ends: they’d be ideal for assassinations, or ethnic cleansing (drones could be programmed, say, to kill all adult males in a particular village). Machines also make mistakes: in 1988, a US navy Aegis system shot down an Iranian airliner, wrongly identifying it as a military jet. And they would accelerate warfare, with possibly catastrophic results: today’s “flash crashes”, caused by algorithmic trading systems, could become the “flash wars” of tomorrow.

What can be done to control the dangers?

Many think the key is to keep humans in control. In 2015, a group of AI researchers signed a letter demanding a ban on offensive autonomous weapons “beyond meaningful human control”. The Campaign to Stop Killer Robots, a global coalition of around 100 NGOs, wants an outright international ban on such weapons. The defence departments of the US and Britain insist that they would not delegate lethal authority to a machine. But in practice, such an assurance doesn’t mean a great deal (see box).

Will a ban be imposed?

In March, the UN held a meeting in Geneva under the Convention on Certain Conventional Weapons to discuss the issue. Most governments favoured a ban on lethal autonomous weapons systems; a minority – leaders in the field such as the US, Israel, Russia and the UK – objected. As Vladimir Putin put it: “Whoever becomes the leader in this sphere will become the ruler of the world.” However, such an attitude overlooks the even more alarming prospect that, once developed, such weapons could easily be adopted by terrorists; and unlike nuclear or biological weapons, they wouldn’t require getting hold of rare and costly materials. The thought of such weapons in the hands of a group like Islamic State is truly terrifying.

A prototype of BAE’s autonomous Taranis drone
