Lethal authority will be the next step in robotic evolution
It will proliferate in military intelligence analysis, decision-making and weapon systems
IN THE PAST few weeks I have been writing about artificial intelligence (AI) and how it is reshaping our world, in particular healthcare, medicine, materials science, art, robotics and nanotechnology.
However, just as AI and autonomous systems have proliferated in everyday life, so too will they proliferate in military intelligence analysis, decision-making and autonomous weapon systems.
It is quite probable that in the future the country with the most intelligent machines will dominate the world. Russian President Vladimir Putin recently called AI the future of all mankind and stated that the country leading in artificial intelligence would rule the world. No wonder the US, China and Russia are investing heavily in the military use of AI.
It was during the Persian Gulf War of 1990-1991 that we first became aware of the destructive power and efficacy of AI weapons. Although precision-guided munitions or “smart bombs” amounted to only 7.4 percent of all bombs dropped on Iraq, they were hugely successful, with minimal collateral casualties.
Accelerated by the tragic events of 9/11, the arms race for AI and autonomous weapons has really taken off. In addition to unmanned drones and ground vehicles, we have recently seen the emergence of artificially intelligent and autonomous weapons with perception and decision-making capabilities.
Currently about 90 states and non-state groups possess drones with varying degrees of autonomy, and 30 more have armed drones or programmes to develop them. Even the Islamic State is attaching bombs to small drones.
Because autonomous war machines are not constrained by human physiology, they can be much smaller, lighter, faster and more manoeuvrable. Their endurance is far greater: they can stay on the battlefield longer and without rest. They can take more risk and could therefore perform dangerous or even suicidal missions without endangering human lives.
Intelligent machines can also handle multiple threats in a complex combat situation that is too fast for human decision-making.
But it is really the next step in robotic evolution that interests the military – full autonomy. This entails moving from passive observation of enemy territory to discovering and eliminating the enemy. Of course, this means the delegation of “lethal authority”.
Quite a few such systems already exist, such as the Israeli Harpy drone, designed to loiter over enemy territory and destroy any enemy radar it discovers. The US Navy has developed a so-called “fire and forget” missile, which can be fired from a ship towards a remote area, where it automatically seeks out and destroys an enemy vessel.
Currently, the US Department of Defense is developing a machine vision system based on AI technology, known as Project Maven, which analyses massive amounts of data and video captured by military drones in order to detect objects, track their motion and alert human analysts to patterns and abnormal or suspicious activity.
The data is also used to improve the targeting of autonomous drone strikes using facial and object recognition.
Ahead of other countries, the US is experimenting with autonomous boats that can track submarines over very long distances, while China is researching the use of “swarm intelligence” to enable teams of drones to hunt and destroy as a single unit. Russia is working on an underwater drone that could deliver powerful nuclear warheads to destroy entire cities.
Machine learning and drone swarms are among the technologies that could change the future of war. Some of these AI technologies are already being used on the battlefield. Russia, for example, maintained that a drone swarm attacked one of its bases in Syria.
According to the philosopher Manuel DeLanda, the advent of intelligent and autonomous bombs, missiles and drones equipped with artificial observation and decision-making capabilities is part of a much larger transfer of cognitive structures from humans to machines.
If we think about it, Microsoft’s Cortana, Google Now, Samsung’s Bixby, Apple’s Siri and Netflix’s streaming algorithms all combine user input, access to big data and bespoke algorithms to provide decision support to human users, based on their decision history and the decisions of millions of other users. This amounts to a slow and steady shift of authority from humans to algorithms. Similarly, decision-making in war is being transferred to intelligent machines – unfortunately not without risk.
On March 29, 2018, five members of the Al Manthari family were travelling in a Toyota Land Cruiser in Yemen to the city of al-Sawma’ah to pick up a local elder. At about 2pm a missile from a US Predator drone, equipped with object and facial recognition, hit the vehicle, killing three of its passengers; a fourth died later. The US took responsibility for the strike, claiming the victims were terrorists. Yet Yemenis who knew the family claim they were civilians.
Could it be that the Al Manthari family was killed because of incorrect metadata, which is used to select drone targets? Metadata is often harvested from mobile phones – conversations, text messages, email, web browsing behaviour, location and patterns of behaviour. Attacks are based on specific predetermined criteria, for example a connection to a suspected terrorist’s phone number or presence at a suspected terrorist location. But sometimes this intelligence is simply wrong.
AI weapons may have many benefits, but they carry significant ethical concerns, in particular that a computer algorithm could both select and eliminate human targets without any human involvement.
However, based on history, I do not doubt that algorithms will run future wars and that “killer robots” will be used. Perhaps the suggestion of Ronald Arkin should be followed and all autonomous weapons fitted with “ethical governors”.
One thing is for sure in the war of algorithms – domination will depend on the quality of each side’s AI and algorithms.

Professor Louis Fourie is the deputy vice-chancellor: Knowledge and Information Technology at the Cape Peninsula University of Technology.
Photo: An unmanned aerial vehicle (UAV) hangs on display above the Textron booth during the Special Operations Forces Industry Conference in Florida. | Bloomberg