Cape Times

Lethal authority will be the next step in robotic evolution

It will proliferate in military intelligence analysis, decision-making and weapon systems

- LOUIS FOURIE

IN THE PAST few weeks I have been writing about artificial intelligence (AI) and how it is reshaping our world, in particular healthcare, medicine, materials science, art, robotics and nanotechnology.

However, just as AI and autonomous systems have proliferated in everyday life, so too will they proliferate in military intelligence analysis, decision-making and autonomous weapon systems.

It is quite probable that in future the country with the most intelligent machines will lead, or even dominate, the world. Russian President Vladimir Putin recently called AI the future of all mankind and stated that the country leading in artificial intelligence would rule the world. No wonder the US, China and Russia are investing heavily in the military use of AI.

It was during the Persian Gulf War of 1990-1991 that we first became aware of the destructive power and efficacy of AI weapons. Although precision-guided munitions, or “smart bombs”, amounted to only 7.4 percent of all bombs dropped on Iraq, they were hugely successful and caused minimal collateral casualties.

Accelerated by the tragic events of 9/11, the arms race for AI and autonomous weapons has really taken off. In addition to unmanned drones and ground vehicles, we have recently seen the emergence of artificially intelligent and autonomous weapons with perception and decision-making capabilities.

Currently about 90 states and non-state groups possess drones with varying degrees of autonomy, and 30 more have armed drones or programmes to develop them. Even the Islamic State is attaching bombs to small drones.

Since autonomous war machines are not bound by human physiological limits, they can be much smaller, lighter, faster and more manoeuvrable. Their endurance is far greater, so they can stay on the battlefield longer and without rest. They can also take greater risks and therefore perform dangerous or even suicidal missions without endangering human lives.

Intelligent machines can also handle multiple threats in a complex combat situation that unfolds too fast for human decision-making.

But it is really the next step in robotic evolution that interests the military: full autonomy. This entails moving from passive observation of enemy territory to discovering and eliminating the enemy. Of course, this means the delegation of “lethal authority”.

Quite a few such systems already exist, such as the Israeli Harpy drone, designed to loiter over enemy territory and destroy any enemy radar it discovers. The US Navy has developed a so-called “fire-and-forget” missile, which can be fired from a ship towards a remote area where it automatically seeks out and destroys an enemy vessel.

Currently, the US Department of Defence is developing a machine vision system based on AI technology, known as Project Maven, which analyses the massive amounts of data and video captured by military drones in order to detect objects, track their motion and alert human analysts to patterns and abnormal or suspicious activity.

The data is also used to improve the targeting of autonomous drone strikes using facial and object recognition.
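To make the idea concrete, here is a minimal Python sketch of the kind of frame-by-frame detect-and-alert loop such a system would run. It uses the OpenCV library only to read video; the detect_objects and alert_analyst callables are hypothetical placeholders, and nothing here reflects Project Maven’s actual implementation.

```python
# Illustrative sketch only: a generic frame-by-frame detection loop of the
# kind described above. The detector and alerting logic are hypothetical
# stand-ins, not Project Maven's actual (classified) implementation.
import cv2  # OpenCV, a widely used computer-vision library


def analyse_footage(video_path, detect_objects, alert_analyst, threshold=0.9):
    """Scan drone video and flag frames containing confident detections.

    detect_objects: any callable mapping a frame to a list of
    (label, confidence, bounding_box) tuples; an assumed stand-in.
    alert_analyst: callable that notifies a human analyst of a detection.
    """
    cap = cv2.VideoCapture(video_path)
    frame_index = 0
    while True:
        ok, frame = cap.read()
        if not ok:            # end of the video stream
            break
        for label, confidence, box in detect_objects(frame):
            if confidence >= threshold:
                # The software only detects and alerts; a human analyst
                # remains in the loop to interpret the detection.
                alert_analyst(frame_index, label, confidence, box)
        frame_index += 1
    cap.release()
```

The design point the article highlights is preserved in the sketch: the software detects and alerts, while a human analyst still interprets what it finds.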

Ahead of other countries, the US is experimenting with autonomous boats that can track submarines over very long distances, while China is researching the use of “swarm intelligence” to enable teams of drones to hunt and destroy as a single unit. Russia is working on an underwater drone that could deliver powerful nuclear warheads to destroy entire cities.

Machine learning and drone swarms are among the technologies that could change the future of war, and some are already being used on the battlefield. Russia, for example, maintained that a drone swarm attacked one of its bases in Syria.

However, according to the philosopher Manuel DeLanda, the advent of intelligent and autonomous bombs, missiles and drones equipped with artificial observation and decision-making capabilities is part of a much larger transfer of cognitive structures from humans to machines.

If we think about it, Microsoft’s Cortana, Google Now, Samsung’s Bixby, Apple’s Siri and Netflix’s streaming algorithms all combine user input, access to big data and bespoke algorithms to provide decision support to human users, based on their decision history and the decisions of millions of other users. This amounts to a slow and steady shift of authority from humans to algorithms. Similarly, decision-making in war is being transferred to intelligent machines, unfortunately not without risk.
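As an illustration, here is a minimal sketch of user-based collaborative filtering, one simple way such decision support can be built: items are scored by how strongly other users’ histories overlap with one’s own. The data and the recommend function are invented for illustration and do not describe any vendor’s actual algorithm.

```python
# A minimal sketch of the kind of decision support described above:
# recommend items by comparing one user's history with millions of
# others' (simple user-based collaborative filtering). All data and
# names are invented; no vendor's actual algorithm is shown here.

def recommend(user_history, all_histories, top_n=3):
    """Score unseen items by how similar their fans are to this user."""
    scores = {}
    for other in all_histories:
        overlap = len(user_history & other)   # shared items = similarity
        if overlap == 0:
            continue
        for item in other - user_history:     # items this user has not seen
            scores[item] = scores.get(item, 0) + overlap
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Hypothetical example: viewing histories represented as sets of titles.
me = {"A", "B"}
others = [{"A", "B", "C"}, {"B", "D"}, {"A", "C", "E"}]
print(recommend(me, others))  # -> ['C', 'D', 'E']
```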

On March 29, 2018, five members of the Al Manthari family were travelling in a Toyota Land Cruiser in Yemen to the city of al-Sawma’ah to pick up a local elder. At about 2pm a missile from a US Predator drone, equipped with object and facial recognition, hit the vehicle, killing three of the passengers; a fourth died later. The US took responsibility for the strike, claiming the victims were terrorists. Yet Yemenis who knew the family claim they were civilians.

Could it be that the Al Manthari family were killed because of incorrect metadata, which is used to select drone targets? Metadata is often harvested from mobile phones: conversations, text messages, email, web-browsing behaviour, location and patterns of behaviour. Attacks are based on specific predetermined criteria, for example a connection to a suspected terrorist phone number or presence at a suspected terrorist location. But sometimes this intelligence is simply wrong.
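To see how brittle such criteria can be, consider a deliberately simplified Python sketch of rule-based flagging on phone metadata. Every field, watchlist and record below is invented; the point is only that anyone who happens to match a rule, civilian or combatant, is flagged in exactly the same way.

```python
# Deliberately simplified, invented sketch of rule-based flagging on
# phone metadata. Real targeting systems are classified; the rules,
# fields and records below are hypothetical.

SUSPECT_NUMBERS = {"+000-555-0001"}   # hypothetical watchlist
SUSPECT_LOCATIONS = {"grid-7G"}       # hypothetical area of interest


def matches_criteria(profile):
    """Return the reasons (if any) a metadata profile is flagged."""
    reasons = []
    if SUSPECT_NUMBERS & set(profile["contacted_numbers"]):
        reasons.append("contact with a watchlisted number")
    if profile["last_location"] in SUSPECT_LOCATIONS:
        reasons.append("presence at a suspect location")
    return reasons

# The failure mode the article describes: a civilian who merely phoned
# a watchlisted number matches the rule just like a genuine suspect.
civilian = {"contacted_numbers": ["+000-555-0001"], "last_location": "grid-2A"}
print(matches_criteria(civilian))  # -> ['contact with a watchlisted number']
```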

AI weapons may have many benefits, but they raise significant ethical concerns, in particular that a computer algorithm could both select and eliminate human targets without any human involvement.

However, based on history, I do not doubt that algorithms will run future wars and that “killer robots” will be used. Perhaps the suggestion of Ronald Arkin should be followed and all autonomous weapons fitted with “ethical governors”.

One thing is certain in the war of algorithms: domination will depend on the quality of each side’s AI and algorithms.

Professor Louis Fourie is the deputy vice-chancellor: Knowledge and Information Technology at the Cape Peninsula University of Technology.

LUKE SHARRETT Bloomberg | AN UNMANNED aerial vehicle (UAV) hangs on display above the Textron booth during the Special Operations Forces Industry Conference in Florida. Artificial intelligence weapons may have many benefits, but they carry significant ethical concerns.
