Lethal authority will be the next step in robotic evolution

It will proliferate in military intelligence analysis, decision-making and weapon systems


IN THE PAST few weeks I have been writing about artificial intelligence (AI) and how it is reshaping our world, in particular healthcare, medicine, materials science, art, robotics and nanotechnology.

However, just as AI and autonomous systems have proliferated in everyday life, so too will they proliferate in military intelligence analysis, decision-making and autonomous weapon systems.

It is quite probable that in future the country with the most intelligent machines will lead, or even dominate, the world. Russian President Vladimir Putin recently called AI the future of all mankind and stated that the country leading in artificial intelligence would rule the world. No wonder the US, China and Russia are investing heavily in the military use of AI.

It was during the Persian Gulf War of 1990-1991 that we first became aware of the destructive power and efficacy of AI weapons. Although precision-guided munitions, or “smart bombs”, amounted to only 7.4 percent of all bombs dropped on Iraq, they were hugely successful and caused minimal collateral casualties.

Accelerated by the tragic events of 9/11, the arms race for AI and autonomous weapons has really taken off. In addition to unmanned drones and ground vehicles, we have recently seen the emergence of artificially intelligent and autonomous weapons with perception and decision-making capabilities.

Currently about 90 states and non-state groups possess drones with varying degrees of autonomy, and 30 more have armed drones or programmes to develop them. Even the Islamic State is attaching bombs to small drones.

Since autonomous war machines are not bound by human physiological limits, they can be much smaller, lighter, faster and more manoeuvrable. Their endurance is far greater, so they can stay on the battlefield longer and without rest. They can also take more risk and could therefore perform dangerous or even suicidal missions without endangering human lives.

Intelligent machines can also handle multiple threats in a complex combat situation that unfolds too fast for human decision-making.

But it is really the next step in robotic evolution that is of interest to the military – full autonomy. This entails moving from passive observation of enemy territory to discovering and eliminating the enemy. Of course, this means the delegation of “lethal authority”.

Quite a few such systems already exist, such as the Israeli Harpy drone, designed to linger over enemy territory and destroy any enemy radar it discovers. The US Navy developed a so-called “fire and forget” missile, which can be fired from a ship towards a remote area, where it automatically seeks out and destroys an enemy vessel.

Currently, the US Department of Defence is developing a machine vision system based on AI technology known as Project Maven. It analyses massive amounts of data and video captured by military drones in order to detect objects, track their motion and alert human analysts to patterns and abnormal or suspicious activity.

The data is also used to improve the targeting of autonomous drone strikes using facial and object recognition.
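Project Maven's internals are not public, but the pattern described above (detect objects in drone video, track their motion, flag anomalies and hand them to a human analyst) can be sketched roughly as follows. The detector, tracker, anomaly rule and alert function below are hypothetical placeholders for illustration, not the actual system.

```python
# Illustrative sketch only: Project Maven's actual design is not public.
# detector, tracker, is_anomalous and notify_analyst are hypothetical stand-ins.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "vehicle" or "person"
    box: tuple         # bounding box (x, y, width, height) in pixels
    confidence: float  # detector score between 0 and 1

def analyse_video(frames, detector, tracker, is_anomalous, notify_analyst):
    """Detect and track objects frame by frame; alert a human analyst to anomalies."""
    for frame_index, frame in enumerate(frames):
        detections = detector(frame)           # find objects in this frame
        tracks = tracker.update(detections)    # link detections into motion tracks over time
        for track in tracks:
            if is_anomalous(track):            # e.g. unusual speed, route or grouping
                notify_analyst(frame_index, track)  # interpretation is left to the human
```

The design point worth noting is the last line: in the system as publicly described, the software only alerts; deciding what the activity means is still left to a person.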

Ahead of other countries, the US is experimenting with autonomous boats that can track submarines over very long distances, while China is researching the use of “swarm intelligence” to enable teams of drones to hunt and destroy as a single unit. Russia is working on an underwater drone that could deliver powerful nuclear warheads to destroy entire cities.

Machine learning and drone swarms are technologies that could change the future of war. Some of these AI technologies are already being used on the battlefield. For example, Russia maintained that a drone swarm attacked one of its bases in Syria.

However, according to Manuel DeLanda, the advent of intelligent and autonomous bombs, missiles and drones equipped with artificial observation and decision-making capabilities is part of a much larger transfer of cognitive structures from humans to machines.

If we think about it, Microsoft’s Cortana, Google’s Now, Samsung’s Bixby, Apple’s Siri and Netflix’s streaming algorithms all combine user input, access to big data and bespoke algorithms to provide decision support to human users, based on their decision history and the decisions of millions of other users. This is nothing more than a slow and steady shift of authority from humans to algorithms. Similarly, decision-making in war is being transferred to intelligent machines – unfortunately not without risk.
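The mechanism behind such decision support can be illustrated with a toy collaborative filter: a user is recommended whatever similar users have chosen before. The data and scoring below are invented for illustration and are not any vendor's actual algorithm.

```python
# Toy collaborative filtering example (illustrative data, not any vendor's algorithm).
# Each row is a user, each column an item; 1 means the user chose that item before.
import numpy as np

history = np.array([
    [1, 1, 0, 0],   # user 0
    [1, 1, 1, 0],   # user 1
    [0, 1, 1, 1],   # user 2
])

def recommend(user_index, history):
    """Score unseen items by how often similar users chose them."""
    me = history[user_index]
    similarity = history @ me        # similarity to every user = overlap of past choices
    similarity[user_index] = 0       # ignore the user's own row
    scores = similarity @ history    # weight other users' choices by similarity
    scores[me == 1] = -1             # never recommend what was already chosen
    return int(np.argmax(scores))

print(recommend(0, history))  # suggests the item users 1 and 2 both chose
```

Even this toy version makes the shift of authority visible: the “decision” is simply the highest score computed from other people's histories, not a human judgement.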

On March 29, 2018, five members of the Al Manthari family were travelling in a Toyota Land Cruiser in Yemen to the city of al-Sawma’ah to pick up a local elder. At about 2pm a rocket fired from a US Predator drone with object and facial recognition hit the vehicle, killing three of its passengers; a fourth died later. The US took responsibility for the strike, claiming the victims were terrorists. Yet Yemenis who knew the family claim they were civilians.

Could it be that the Al Manthari family were killed because of incorrect metadata, which is used to select drone targets? Metadata is often harvested from mobile phones – conversations, text messages, email, web-browsing behaviour, location and patterns of behaviour. Attacks are based on specific predetermined criteria, for example a connection to a suspected terrorist phone number or a suspected terrorist location. But sometimes this intelligence can be wrong!
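One reason such intelligence can be wrong is simple arithmetic: when the behaviour being looked for is very rare, even an accurate classifier flags far more innocent people than real targets. The numbers in the sketch below are entirely hypothetical and serve only to illustrate that base-rate effect.

```python
# Hypothetical base-rate illustration: all numbers are made up.
population = 1_000_000        # phones observed
true_targets = 100            # actual combatants among them (rare)
sensitivity = 0.99            # chance a real target is flagged
false_positive_rate = 0.01    # chance an innocent person is flagged anyway

flagged_targets = true_targets * sensitivity                            # about 99
flagged_innocents = (population - true_targets) * false_positive_rate  # about 9,999

precision = flagged_targets / (flagged_targets + flagged_innocents)
print(f"Share of flagged phones that are real targets: {precision:.1%}")  # roughly 1%
```

Under these made-up assumptions, roughly ninety-nine in every hundred flagged phones would belong to people who are not targets at all.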

AI weapons may have many benefits, but they carry significant ethical concerns, in particular that a computer algorithm could both select and eliminate human targets without any human involvement.

However, based on history, I do not doubt that algorithms will run future wars and that “killer robots” will be used. Perhaps the suggestion of Ronald Arkin should be followed and all autonomous weapons fitted with “ethical governors”.
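Arkin's ethical governor is, broadly, a software layer that checks a proposed action against explicit constraints (the laws of war, rules of engagement and similar) and vetoes anything that fails the check. The sketch below is a simplified illustration of that idea under assumed, placeholder constraints; it is not Arkin's actual architecture.

```python
# Simplified sketch of an "ethical governor" gate, inspired by Ronald Arkin's concept.
# The constraint functions are hypothetical placeholders, not a real implementation.
from typing import Callable, List

def ethical_governor(action: dict, constraints: List[Callable]) -> bool:
    """Permit an action only if every constraint explicitly allows it; otherwise veto."""
    return all(constraint(action) for constraint in constraints)

# Example constraints a designer might encode (placeholders only):
constraints = [
    lambda a: a.get("target_type") == "military",          # discrimination
    lambda a: a.get("expected_civilian_harm", 1) == 0,      # proportionality, crudely
    lambda a: a.get("human_authorisation", False),          # keep a human in the loop
]

proposed = {"target_type": "military", "expected_civilian_harm": 0, "human_authorisation": False}
print(ethical_governor(proposed, constraints))  # False: vetoed without human authorisation
```

The design choice it illustrates is that the veto is conservative: if any constraint is not clearly satisfied, the action is refused.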

One thing is for sure in the war of algorithms – domination will depend on the quality of each side’s AI and algorithms.

Professor Louis Fourie is the deputy vice-chancellor: Knowledge and Information Technology at the Cape Peninsula University of Technology.

AN UNMANNED aerial vehicle (UAV) hangs on display above the Textron booth during the Special Operations Forces Industry Conference in Florida. Artificial intelligence weapons may have many benefits, but they carry significant ethical concerns. | Bloomberg
