Researchers fear EU may fund killer robot work
“EuroSWARM” sounds like something out of Nigel Farage’s nightmares.
In fact, it is a European Union-funded research project that has experimented with drones, remote-controlled cars and other sensors to create an autonomously behaving “swarm” of bots that can communicate with each other.
In a demonstration scenario, researchers set the swarm to check out a “suspicious-looking” vehicle, explained Hyo-sang Shin, reader in guidance, navigation and control at Cranfield University, one of the project partners. The idea is that the swarm could be used for scouting an area before troops are deployed, he said.
The project, which came to an end in November last year, did not equip any of the drones or cars with weapons. The swarm is rather about “maximising the information you can collect”, said Dr Shin.
But EuroSWARM’s military uses have critics worried. It is one of the first trial projects in a new era of EU-funded military research; the budget for similar activities is set to explode over the next decade.
This funding splurge, triggered by fears of European backwardness in military technology, has seen the global debate around research into “lethal autonomous weapons” (Laws) – colloquially known as “killer robots” – move to Brussels.
“Although the EU hasn’t given any funding (yet) to ‘killer robots’ in the strict sense,” said Bram Vranken, a researcher at Vredesactie, a Belgian peace organisation, “it is clearly prioritising robotic systems which are pushing the boundaries towards increasingly autonomous systems”, such as swarm systems or “integrated and autonomous surveillance technology”.
Vredesactie is one of several groups, hailing from Germany, Italy, the UK and Spain, that have formed Researchers for Peace to campaign against what they call the “further militarisation of the European research budget”. The group accuses the EU of developing autonomous weapons “without any public debate”. So far, more than 600 researchers have signed a petition in support.
Aside from EuroSWARM, Mr Vranken said that he was also worried about Ocean 2020, a €35 million (£30.8 million) project that aims to “integrate drones and unmanned submarines into fleet operations”. The project, led by Leonardo, an Italian weapons contractor, involves several European ministries of defence, plus the Fraunhofer Society, a German applied research network.
These projects are potentially just the beginning. Earlier this month, the EU announced its spending plans for 2021-27, and pledged €13 billion over the period for the European Defence Fund, even more than was expected. Of this funding, €4.1 billion will be set aside explicitly for research, a huge leap in resources compared with now, with the rest spent on development.
This will place the EU “among the top four” defence research and technology investors in Europe, according to the European Commission. However, this will still be peanuts compared with the US, where the Department of Defense is spending about $16 billion (£11.8 billion) a year on science and technology.
The fight in Brussels is now over how this money should be used. In 2014, the European Parliament was one of the first bodies to take seriously warnings about “killer robots”, calling on member states to “ban the development, production and use of fully autonomous weapons which enable strikes to be carried out without human intervention”.
In February this year, MEPs amended proposals from the commission – the EU’s executive arm – to prevent EU funds being spent on “fully” autonomous weapons that “enable strikes to be carried out without meaningful human intervention and control”. Asked whether it supports this prohibition, a European Commission spokeswoman declined to comment on the record. For now, it is not clear if the MEPs’ prohibition will stand.
Those pushing for increased EU-wide military research point out that the Continent lags behind rivals when it comes to developing new military technologies such as drones.
But this is not an argument that impresses Laëtitia Sédou, EU programme officer at the European Network Against Arms Trade. “One of the reasons [for the creation of the EU] is to try and prevent going back into this arms race,” she said.
Despite an international effort by the Campaign to Stop Killer Robots, governments are yet to agree to a ban on weapons where humans no longer have “meaningful control” over the use of force. What, if anything, can universities and researchers do in the meantime?
One option is to boycott institutions seen to be taking their research too far. In March, dozens of researchers threatened to boycott the Korea Advanced Institute of Science and Technology, a university in South Korea, after it opened a Research Center for the Convergence of National Defense and Artificial Intelligence with an arms company. This spurred a pledge from KAIST’s president that the university would avoid developing “autonomous weapon[s] lacking meaningful human control”.
But this raises the question of how far scientists should collaborate with research projects that get close to – but stop short of – creating a fully autonomous weapon; there is a huge range of processes that can be automated beforehand, some more ethically challenging than others.
The “biggest ethical issue” is automating the decision to fire, said Stuart Parkinson, executive director of Scientists for Global Responsibility, a UK-based organisation with about 750 members. But automatic take-off and landing for drones is arguably “less problematic”, he said.
These complexities mean that “it’s hard to say this project is ethical; this is not”, Dr Parkinson added. For this reason, universities need to make sure that researchers are ethically trained, while ethicists should be included in research teams, he said.
Blood on the lab floor
As with any area of fast-developing research, a decent proportion of research spending should be devoted to looking into how the technology might be misused, Dr Parkinson argued. And at the moment “we don’t have that”, he said.
And when in doubt over the ethics of a project, just look at the funders, Dr Parkinson advised. If your backers are military, “whatever you do will be sucked into that world”, he said.
Once an electrical engineer, Dr Parkinson left the field after concluding that it was simply too dominated by military research funders. For some academics, “maybe it’s time to look for a different direction”, he said.
But there will be no shortage of young researchers willing to take the place of the disenchanted, hence the need for the military funders themselves to abide by proper research ethics guidelines, Dr Parkinson pointed out.
For his part, Dr Shin acknowledged that his EuroSWARM project might one day be a building block of a lethal autonomous weapon system, but argued that “any technology can be dangerous”.
He said that he would “probably” agree to work on a research project that actually involved weapons. “But I would restrict myself to things that might benefit or reduce risk to human troops or [reduce] civilian casualties,” Dr Shin added. He is against drones ever using their own judgement to fire.
Proper regulation, rather than academic boycotts such as the one proposed against KAIST, is likely to be more effective, Dr Shin said.
It will be “years rather than decades” before drones are able to fire on their own initiative, said Dr Parkinson, although then their “reliability will be in the eye of the beholder”.
But in a sense, fully autonomous weapons are already with us: the Korean border already has machine-gun turrets that can in theory fire automatically on movement, Dr Parkinson said (although the South Korean military has reportedly made sure that a human has to authorise any attack). He warned: “That’s an example of where something is already happening.”