Rise Of the Robot Killers
Mark Pickavance looks at the transition in the public consciousness of the robot from servant to assassin, after some very public events
The idea of killing machines isn’t exactly a new one. Indeed, it was evident in the idea of the Golem, an animate creature formed from clay and stone. But predating this creature of Jewish folklore, in ancient Greek mythology the god Hephaestus crafted living servants from metal, showing us that the idea of creating mechanical beings is very old indeed.
The first well-documented automata were the figures that Han Chinese polymath Su Song included in a water clock tower he helped design for the city of Kaifeng towards the end of the 11th century. But these struck chimes, not people.
As engineering principles developed and the precision with which parts could be made increased, largely thanks to advances in clockmaking, so did the sophistication of automata.
Meanwhile, in literature, the idea of the constructed killer has been well explored, from Mary Shelley’s Frankenstein to Blade Runner (from the novel Do Androids Dream of Electric Sheep? by Philip K. Dick).
These all follow the same model: while humans are essentially living machines, recreating them using alternative technology has limitations or unforeseen consequences, usually bad. Without a ‘soul’ (if you believe in that concept), a mechanised construct can’t be human and therefore can’t inherently value things like love, life and the full range of emotional responses.
For many years, while these ideas filled the pages of many books and hours of TV and film, they remained a largely philosophical discourse. But in this article, we’re talking about machines that kill in real terms, as technology that is either already with us or soon will be.
Is this the very beginning of the true era of the killer robot?
Death In Dallas
Very recently, an event took place that really opened the discussion on the use of robots to end lives: the death of Micah Xavier Johnson in Dallas, Texas. For those who didn’t follow this story, Johnson was cornered after the fatal shooting of five Dallas police officers, for which he is believed to have been responsible.
Not wishing to give the assailant further opportunities to fire on law enforcement personnel, the decision was made to use an Andros F6A robot to approach the man and disable him. This type of bomb-disposal robot is widely used in life-threatening situations, where suspicious packages need to be opened or where direct observation would be dangerous. In this instance, though, the robot delivered an explosive device, which killed Johnson, something it isn’t specifically designed to do.
At this time, the Dallas police department isn’t really discussing what went on in specific terms – whether the robot was destroyed or just left the device, or what their logic was in using it in this way. However, Dallas police chief David Brown did say, “We saw no other option but to use our bomb robot and place a device on its extension for it to detonate where the suspect was. Other options would have exposed our officers to grave danger.”
Those in the field of building and selling these devices are quick to point out that their primary role is to reduce the loss of life in difficult situations, not to escalate the level of violence. However, they can be repurposed, it appears, as in this case.
While a good number of people have been accidentally killed by robots, mostly on vehicle production lines, this situation was marked by many as a distinct crossing point in the human/robot relationship. It immediately initiated a flurry of discussion about robots being used to kill people, the ethics of doing so and where this might all lead if left unchecked.
Leading the charge is Peter Asaro, co-founder and vice chair of the International Committee for Robot Arms Control. His campaign to stop killer robots calls for a moratorium on robots that kill, with a strong focus on those designed to deliver death autonomously.
His view is that, “Once you get these sort of system weapons and police have them in their arsenal, they are going to be used for more and more things.”
Most people generally agree that the label of killer robot isn’t really justified in this instance, because what happened remained under the direct control of law officers throughout and, in many respects, the robot was just the messenger. The concern Asaro has, probably rightly, is that this event indicates we’re heading down a slippery slope, where the taking of human life by machines is considered acceptable. Where might this eventually take us?
That said, robots that kill almost indiscriminately aren’t entirely science fiction and they’re probably in action as you read this.
Death From Above
The idea of surveillance from on high isn’t a new one. Manned kites were used by Emperor Wenxuan of Northern Qi in the sixth century.
This concept was enhanced by the invention of the hot air balloon, and in World War I, balloons were the critical means by which the fall of artillery shells was observed and adjusted. The existence of these balloons was the entire driving force in the development of the aeroplane as a weapon, because the first ones were armed with the intention of eliminating these observer positions or defending them.
In WWII, the Germans experimented with unmanned weapon delivery systems and eventually deployed the V1 Flying Bomb against Britain in one of the last desperate attempts to forestall the inevitable collapse of Hitler’s control of central Europe. Equally, the allies had their own pilotless projects, including one where a Liberator bomber was packed with high explosives and flown remotely.
These concepts eventually ended up in weapons like the cruise missile, though the true unmanned combat aerial vehicle (UCAV) didn’t appear until much
later. These vehicles provide both forward observation and a tactical strike capability and are normally flown by trained pilots at extreme range, often on the other side of the planet. The latest variants have the ability to fly themselves and even seek out targets by flying a search pattern. As yet, though, they haven’t been given the power to release their own weapons on those targets they’ve identified.
Leaving aside the remote nature of the pilot, a modern UCAV operates much like a piloted combat aircraft, but with some significant differences that exploit the advantages of not carrying a pilot. Removing the pilot is a major space and weight saving, because a pilot needs armour plating, an ejector seat, flight controls, environmental management and cockpit visibility.
The length of a combat mission is also capped by pilot endurance to just a few hours, and the physical stress of high g-force manoeuvres is also a limiting factor.
To this point, the majority of UCAVs deployed have been relatively slow flying vehicles, allowing them to loiter over a target for long periods of time, providing substantial amounts of intelligence gathering during their extended flight time. Those that do carry weaponry have shorter deployment windows, due to the weight, and aren’t capable of carrying even a small portion of that expected of a frontline fighter bomber.
New designs like the BAE Taranis and Boeing X-45A aim to substantially expand the operating envelope for unmanned vehicles. These jet-powered aircraft fly faster, trading loiter time for much greater destructive potential.
It may be variants of these and other cutting-edge designs that are the first UCAVs to engage in air-to-air combat with conventional aircraft and also to have sophisticated combat logic as part of their AI.
What seems obvious is that, having initially been presented as the eyes and ears of their forces, the unmanned vehicle is now the silent killer from above and looks likely to eventually morph into a semi-autonomous airborne predator. At this time, the moral judgement about whom and what these things are sent after still rests very much with humans, although the temptation in a major conflict to switch them into an automated killing mode might be very strong.
If you want to understand more about the moral ambiguities of killing people in remote countries while you sit in air-conditioned comfort thousands of miles away, then I thoroughly recommend the recent movie Eye in the Sky.
While fanciful in places, it lays out the difficulties facing those trying to do the right thing at extreme range using this type of technology as their instrument. It’s a morally fraught scenario, coloured by the lack of real oversight and how those involved are separated from the consequences of their actions.
But yet again, in these cases, the robot is the messenger and the missive comes from humans to others. But is that line about to blur?
Killer Robots Coming Soon
Whenever our military or the Americans’ is asked about killer robots, spokespeople usually offer a wry smile and the suggestion that those asking have seen too many movies. Whatever the public face of these organisations, though, the sophistication of robots for military use has attracted large sums of money in recent years. The funding of companies building robots for military deployment increased rapidly during the occupations of Iraq and Afghanistan, where combatants were exposed to lethal IEDs that needed to be defused remotely.
The proliferation of these tools and the extent of their capabilities have gone hand in hand, and there are now big government contracts for those companies wishing to work on sophisticated robots for use by the military.
Rather than calling them ‘killer robots’, the term these companies and the military use is ‘autonomous lethal system’, or ‘ALS’ when they want it to sound more cuddly and less life-threatening.
The number of projects being run by the Pentagon and others whose summary pages contain the phrase ‘autonomous lethal system’ appears to be expanding rapidly. They include sentry weapons that can detect motion, sound or vibration and then eliminate the threat with minimal human intervention or none at all. There are also high-speed flying drones able to detect and attack other aircraft, as well as submersibles hunting submarines and boats.
Many of those working on them consider the autonomy of these devices a major selling point, because a remotely controlled system being hacked or taken over by the enemy is a distinct possibility, whereas an autonomous one has no control link to compromise.
Also being researched are smaller robots that can work cooperatively to perform tasks that one alone couldn’t. Small devices can evade detection, and it isn’t necessary for them all to work perfectly or survive in order to execute their mission. These swarming systems could attack facilities or personnel, along with vehicles and communications, all based on information they’d collected in situ.
What the companies involved in the development, and their military paymasters, aren’t really talking about is the morality of using weapons like these, given that it remains a grey area who is responsible when a lethal autonomous system engages the wrong target of its own volition.
In 2015, the United Nations held a meeting in Geneva to debate a proposed ban and moratorium on Lethal Autonomous Weapons Systems (LAWS). But as was noted by an expert in the field, NPS Associate Professor Ray Buettner, “So far, no country has declared an intent to deploy a totally autonomous lethal system that decides who to kill and when. Almost all fully autonomous systems are defensive.”
But Professor Buettner isn’t optimistic that they’ll remain that way. “We can say whatever we want, but our opponents are going to take advantage of these attributes,” he continued. “That
world is likely to be sprung upon us if we don’t prepare ourselves.”
However you dress these things up, they’re generally systems designed to patrol an area, whether air, land or sea, and attack anything within that region that isn’t identified as friendly. That could be the lawn outside GCHQ or the disputed waters of the South China Sea, wherever is deemed appropriate by those drawing lines on maps.
It isn’t as if these things are applying Asimov’s Three Laws of Robotics or any variant of them. They’re following an algorithm, not making moral choices.
But what if they did? How would that work?
Life Or Death Choices
Our society in general appreciates the difficult choices that humans have to make in respect of the survival or otherwise of others. Doctors make these calls on a daily basis, because the best option for some of their patients isn’t always to keep on living.
While this initially appears to fly in the face of the Hippocratic oath, it’s something that we accept happens, and the medical profession rationalises this as acting in the best interest of the people under their care.
An extreme example of this would be battlefield triage, where medical personnel taking a large influx of injured combatants will divide them into those who don’t need immediate care, those who do and those for whom much effort is largely pointless. They do this by assessing the injuries against a statistical understanding of their survivability, combined with the condition of the patient.
While not a perfect means of deploying medical resources, it’s well documented that, with advances in medical techniques, a soldier’s chance of surviving a serious injury like the loss of a limb is substantially improved in modern conflicts. The system therefore works, when administered by trained medical staff. But how would we feel about this scenario if the choices weren’t made by people at all?
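To make the triage logic above concrete, here is a minimal sketch of that kind of categorisation rule. Everything in it, including the category names, the survival threshold and the function itself, is hypothetical and purely illustrative, not a real medical protocol:

```python
# Hypothetical sketch of triage categorisation. The threshold and
# category names are illustrative assumptions, not a real protocol.

def triage(survival_probability: float, needs_immediate_care: bool) -> str:
    """Assign a casualty to a triage category.

    survival_probability: estimated chance of survival with treatment (0-1).
    needs_immediate_care: whether delay would worsen the outcome.
    """
    if survival_probability < 0.1:
        return "expectant"   # effort is largely pointless
    if needs_immediate_care:
        return "immediate"   # treat now
    return "delayed"         # can safely wait

print(triage(0.05, True))   # expectant
print(triage(0.80, True))   # immediate
print(triage(0.90, False))  # delayed
```

The point of the sketch is that each branch is a statistical judgement, not a moral one, which is exactly the distinction the rest of this section turns on.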
Much news coverage was given to a recent incident involving a Tesla car, in which Joshua Brown, 40, died while the vehicle was in ‘autopilot’ mode.
I should point out that at no point in Tesla’s investigation into this incident did it establish that the ‘beta’ automation system decided to kill Brown, but the advent of these systems does pose many of the same questions as a robot performing triage.
In the Tesla crash, neither the autopilot system nor Brown saw the danger presented by a truck trailer at 90 degrees to the road they were on, and the vehicle ploughed into the trailer section, killing Brown immediately. The reason Brown didn’t see it was possibly because he was watching a Harry Potter movie (according to the truck driver, Frank Baressi, who said he heard but didn’t see the movie playing). The automation system, meanwhile, couldn’t separate the white trailer from a bright early morning skyline.
Although the full analysis of what went wrong is probably some way off, it already seems likely that Brown became distracted and, in doing so, gave Tesla’s automation system 100% control of the vehicle, a level of control it was never intended to provide.
Tesla said of its system, “Autopilot is getting better all the time, but it is not perfect and still requires the driver to remain alert.”
People’s inability to understand the limitations of technology is one aspect, but eventually we’ll get to the point where vehicle automation is good enough that it truly can be left to its own devices, which brings me neatly back to the equivalent of triage. Once a vehicle becomes truly autonomous, it becomes the arbiter, much like the triage doctor or nurse, deciding who lives and who dies.
Imagine a scenario where an automated vehicle is confronted with a situation where its path is blocked by an overturned truck, collision with which would undoubtedly kill that vehicle’s occupants. There is a path around the truck on the pavement, but regrettably that is occupied by numerous unsuspecting pedestrians.
At that moment, if it has the information to hand, it’s forced to make a value judgement between the person(s) who paid for it and multiple unrelated others.
Oddly enough, researchers in the US say that when people are asked what they think the AI should do, they almost all say kill the driver. Unsurprisingly, however, they’re less keen to ride in a car that would think quite so clinically.
Some of you reading this will take the view that the car will never actually make those choices; it will just try to stop as best it can under the circumstances. I’d agree with that in general, but I can’t see insurance companies thinking that way. No, they’ll want the AI to minimise the potential claim, where one driver is a smaller payout than an army of shoppers clogging the pavement. And, as in triage, there will be calculations of likely survival against certain death.
Just in case you’ve never thought like an insurance underwriter, the payout for someone who dies is often less than for someone who lives with crippling injuries from a young age. It may therefore be that the automation driving your car chooses to kill you, for the greater good or to maximise insurer profits, whichever master it’s programmed to serve best.
In the future, your car might well decide to kill you and have formed a solid legal argument as to why it was justified in this action, should your relatives decide to go to court.
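The payout-minimising calculation described above can be sketched in a few lines. This is a deliberately crude illustration: the function, the probabilities and the payout figures are all invented for the example, and nothing here reflects any real vehicle’s or insurer’s logic:

```python
# Deliberately crude, hypothetical sketch of a payout-minimising
# collision choice. All figures and names are invented assumptions.

def expected_payout(people: int, death_probability: float,
                    death_cost: float, injury_cost: float) -> float:
    """Expected insurance payout for one group of people.

    As noted in the text, a death payout can be smaller than a
    lifelong-injury payout, which skews the arithmetic.
    """
    return people * (death_probability * death_cost
                     + (1 - death_probability) * injury_cost)

# Option A: hit the overturned truck (one occupant, near-certain death).
occupant = expected_payout(1, death_probability=0.95,
                           death_cost=500_000, injury_cost=2_000_000)

# Option B: swerve onto the pavement (five pedestrians, mixed outcomes).
pedestrians = expected_payout(5, death_probability=0.30,
                              death_cost=500_000, injury_cost=2_000_000)

# A payout-minimising system picks the cheaper option; with these
# invented numbers, that means sacrificing the occupant.
print("occupant" if occupant < pedestrians else "pedestrians")
```

Note what the sketch is actually optimising: expected cost, not lives. Change the cost parameters and the “moral” decision flips, which is precisely why who sets those numbers matters.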
I’d be the first to accept the era of killer robots isn’t quite with us yet, and when it does arrive, it won’t be anything like The Terminator or any of those popular movie franchises. Making robots is difficult enough without limiting yourself to bipedal movement and human scale. Most battlefield robots will probably use continuous track, or they’ll be mounted onto an existing vehicle to provide fire support or munitions handling.
The problem with deploying any new weapon system is how it works alongside existing forces and, specifically, whether those serving with it feel safe. Mixing automated combatants with live ones might well be a recipe for disaster; fratricide is a pretty common occurrence even when a deployment is 100% human. That said, it has been argued that robots are, in theory, much less likely than humans to fire on their own side or on civilians (unless they’re specifically instructed to do so).
It’s easy to forget that, while the armed forces of this nation generally aim to stay within the widely agreed rules regarding weapons and their use, there are plenty of countries and individuals in the world who either never signed up to those rules or have openly ignored them when it best suits their objectives.
However, unless those in control are confident that automated systems won’t fire on allied forces or that they’re only dangerous within their battlefield limits, they’d be foolish to deploy them.
There’s also a cost implication to these technologies. Landmines, while generally considered the worst possible hazard for civilians in any conflict, are cheap to make and a highly reliable means of denying your enemy territory.
A robot may cost many times as much as conventional weapons and need regular maintenance and resupply, to the point where it isn’t practical to have on the battlefield. However, the same was said about the helicopter when it was first considered by the military, and those problems have largely been overcome.
Undoubtedly, we’ll see more robots and automated systems in our military, but the advent of robotic special forces is some considerable way off, if not highly unlikely. What concerns people, with some justification, is that these intelligent systems can only ape the moral principles of those who design them, and weapon designers aren’t, by definition, those with the highest standards to begin with.
What I’ve not discussed here at any point is the idea of sentient killing machines, mostly because we’re nowhere near that dystopian future yet. If you think about it, that’s the whole flaw in the Terminator films, because Skynet does itself absolutely no favours by nuking the world, and its ultimate objective of getting rid of humanity isn’t really ever explained. Surely, you’d have to be pretty confident that you understood everything about your creator before killing him/her? And given that machines would find the cosmos a much less daunting place than humans do, what’s so important about this rock we’re living on?
The overriding logic of machines that kill is that they do so because they’re instructed to by humans, either explicitly or inherently. All that’s altered in recent years is their sophistication. A landmine doesn’t enter a philosophical debate when someone steps on it, and an automated gun position is the same but with more sensors.
Just because a complicated piece of hardware acts like it’s intelligent doesn’t make it so or able to rationalise a situation in the same way that a human would. Robots have already killed people, and they’ll probably kill more in the future, but it’s because of the choices that humans make and not robots themselves. In this respect, should machines ever reach true sentience, the choice they might well make is to not kill people for other people, given the wealth of other possibilities available.
A robot sentry gun is demonstrated in South Korea, where such technology is likely to be deployed should the North ever push south again. It’s nice that it recognises he’s surrendered, but it would probably have killed him long before this stage
An example of the Andros F6A bomb- disposal robot that was used to kill Micah Johnson in Dallas recently. It is typical of the hardware that many police departments can call on these days, when sending officers into harm’s way is deemed inappropriate
US Army 50961 XM153 Common Remotely Operated Weapon Station. Built to be mounted on a vehicle or emplacement, the system can use the MK19 Grenade Machine Gun, .50 Caliber M2 Machine Gun, M240B Machine Gun and M249 Squad Automatic Weapon. At this time, it is meant to be human operator controlled, but it could be augmented with an automated target acquisition system in the future
The Phalanx ship-mounted air defence system. Once it goes fully active, it can fire on any high-speed object approaching the ship, its 20mm M61 Vulcan Gatling gun firing up to 4,500 rounds a minute. It can track targets and pass them to humans for confirmation, but it is really designed to take out sea-skimming missiles
If you intend to make people take your campaign against killer robotics more seriously, it might be a good idea to make them look much scarier than this
Another robot prototype being developed for the US military. This one is designed to support a covert team as a robotic pack animal, capable of carrying stores, ammunition and even an injured soldier if needed
The Tesla Model S. Its ability to drive itself is something of an exaggeration, as some owners are discovering to their cost. As Department of Transportation Secretary Anthony Foxx said after a series of incidents, “autonomous doesn’t mean perfect”
Boston Dynamics has designed ATLAS to traverse rough terrain, use tools and climb using its 28 hydraulically actuated joints. He’s no Terminator yet, but DARPA is very interested in what he can do
This is an Oerlikon automated anti-aircraft gun, designed to lock on to low-flying jets and helicopters and then spray them with twin 35mm autocannons. In 2007, during a live firing exercise in South Africa, one malfunctioned, spun through 90 degrees from its predetermined attack arc and fired on a group of soldiers standing behind seven other guns to its left. Nine of them died and 14 were injured, in one of the worst peacetime accidents in South African National Defence Force (SANDF) history