MoD is funding drones that decide who to kill

The Observer – News – Jamie Doward

Technologies that could unleash a generation of lethal weapons systems requiring little or no human interaction are being funded by the Ministry of Defence, according to a new report.

The development of autonomous military systems – dubbed “killer robots” by campaigners opposed to them – is deeply contentious. Earlier this year, Google withdrew from the Pentagon’s Project Maven, which uses machine learning to analyse video feeds from drones, after ethical objections from the tech giant’s staff.

The government insists it “does not possess fully autonomous weapons and has no intention of developing them”. But since 2015, the UK has declined to support proposals put forward at the UN to ban them. Now, using government data, Freedom of Information requests and open-source information, a year-long investigation reveals that the MoD and defence contractors are funding dozens of artificial intelligence programmes for use in conflict.

“Despite public statements that the UK has no intention of developing lethal autonomous weapon systems, there is tangible evidence that the MoD, military contractors and universities in the UK are actively engaged in research and the development of the underpinning technology with the aim of using it in military applications,” said Peter Burt, author of the new report Off the Leash: The Development of Autonomous Military Drones in the UK – produced by Drone Wars UK, which campaigns against the development of unmanned systems.

In one example, the report claims the MoD is trialling a “predictive cognitive control system” that has been deployed in live operations at the Joint Forces Intelligence Centre at RAF Wyton. The system takes huge quantities of highly complex data, beyond the comprehension of analysts, and uses deep-learning neural networks to make predictions about future events and outcomes that will be of “direct operational relevance” to the armed forces.

This raises concerns about what happens if a future weapon system is fed erroneous data, or if its links to human command, which can block the system’s use of lethal force, are disrupted. Such a scenario is not too far off, Drone Wars believes.

“We have already seen the development of drones in Britain which have advanced autonomous capabilities, such as the Taranis stealth drone developed by BAE Systems. The development of a truly autonomous lethal drone in the foreseeable future is now a real possibility,” Burt said.

The Taranis supersonic stealth aircraft is an experimental drone which, according to BAE, can “hold an adversary at continuous risk of attack ... penetrate deep inside hostile territory, find a target, facilitate either kinetic or non-kinetic influence upon it, assess the effect achieved, and provide intelligence back to commanders.”

It has been described by the MoD as a “fully autonomous” aircraft. Lord Drayson, a former minister for defence procurement, has said it would have “almost no need for operator input”.

A spokesman for the MoD said: “There is no intent within the MOD to develop weapon systems that operate entirely without human input. Our weapons will always be under human control as an absolute guarantee of oversight, authority and accountability.”

An unmanned aerial drone used by the US military.
