THE TROUBLE WITH TRYING TO BAN ROBOTIC WEAPONS

African Independent - News - Paul Scharre

Last month more than 100 robotics and artificial intelligence (AI) company chiefs signed an open letter to the UN warning of the dangers of autonomous weapons.

For the past three years, countries have gathered at the UN in Geneva under the auspices of the Convention on Certain Conventional Weapons to discuss the role of automation and human decision-making in future weapons. The central question for nations is whether the decision to kill in war should be delegated to machines.

As in many other fields, weapons involve increasing amounts of automation and autonomy.

More than 30 nations have armed drones or are developing them, but these are largely controlled remotely. Some drones have the ability to take off and land autonomously or fly preprogrammed routes, but any engagement of their weapons is controlled by people. Advances in AI and object recognition raise the spectre of future weapons that could search for, identify and decide to engage targets on their own.

Autonomous weapons that could hunt their own targets would be the next step in a decades-long trend towards greater automation in weapon systems. Since World War II, nations have employed “fire-and-forget” homing munitions such as torpedoes and missiles that cannot be recalled once launched.

Homing munitions have on-board seekers to sense enemy targets and can manoeuvre to correct for aiming errors and zero in on moving targets.

Unlike autonomous weapons, they do not decide which targets to engage. The human decides to destroy the target and the homing munition merely carries out the action. Some weapons also use automation to help humans decide whether or not to fire. Today, radars use automation to help classify objects, but humans still make the decision to fire – most of the time.

More than 30 nations employ human-supervised autonomous weapons to defend ships, vehicles, and land bases from attack. This means humans intervene if something goes awry, but once the weapon is activated, it can search for, decide on, and engage targets on its own.

Advances in robotics and autonomy raise the prospect of future offensive weapons that could hunt for and engage targets on their own. A number of major military powers are developing stealth combat drones to penetrate enemy airspaces.

They will need the ability to operate autonomously deep inside enemy lines with limited or no communications links with human controllers.

What would the consequences be of delegating to weapons the authority to offensively search for, decide on, and engage targets without human supervision? We don’t know. It’s possible they would work fine. It’s also possible they would malfunction and destroy the wrong targets. With no human supervising, they might even continue attacking the wrong targets until they ran out of ammunition.

In the worst cases, fleets of autonomous weapons might be manipulated, spoofed or hacked by adversaries into attacking the wrong targets and perhaps even friendly forces.

A growing number of voices are raising the alarm about the potential consequences of autonomous weapons. While no country has stated that it intends to develop such weapons, few have renounced them. Most major military powers are leaving the door open to their development, even if they say they have no plans to do so today.

In response to this, more than 60 NGOs have called for an international treaty banning autonomous weapons before they are developed.

Two years ago, more than 3 000 robotics and AI researchers signed an open letter similarly calling for a ban, albeit with a slightly more nuanced position.

Rather than a blanket prohibition, they proposed banning only “offensive autonomous weapons beyond meaningful human control” (terms which were not defined).

One of the biggest challenges in grappling with autonomous weapons is defining terminology. The concept seems simple enough: does the human decide whom to kill, or does the machine make its own decision? In practice, greater automation has been slowly creeping into weapons with each successive generation for the past 70 years.

As with cars, where automation is incrementally taking over tasks such as emergency braking, lane keeping, and parking, what might seem like a bright line from a distance can be fuzzy up close.

Where is this creeping autonomy taking us? It could be to a place where humans are further and further removed from the battlefield, a place where killing is even more impersonal and mechanical than before – is that good or bad? It is also possible that future machines could make better targeting decisions than humans, sparing civilian lives and reducing collateral damage.

If self-driving cars could potentially reduce vehicular deaths, perhaps self-targeting weapons could reduce unnecessary killing in war.

Much of the debate about autonomous weapons revolves around their hypothesised accuracy and reliability.

Proponents of a ban argue that such weapons would be prone to accidentally targeting civilians. Opponents of a ban say that might be true today but the technology will get better and may someday be better than humans.

These are important questions, but knowing their answers is not enough.

Technology is bringing us to a fundamental crossroads in humanity’s relationship with war.

It will become increasingly possible to deploy weapons on the battlefield that can search for, decide to engage, and engage targets on their own. If we had all of the technology we could imagine, what role would we want humans to play in lethal decision-making in war?

To answer this, we need to get beyond overly broad concepts like whether or not there is a human “in the loop”. Just as driving is becoming a blend of human control and automation, decisions surrounding weapons engagement already incorporate automation and human decision-making.

The International Committee of the Red Cross has proposed exploring the “critical functions” related to engagements in weapon systems. Such an approach could help to understand where human control is needed and for which tasks automation may be valuable.

Some decisions in war have factually correct answers: “Is this person holding a rifle or a rake?” It is possible to imagine machines that could answer that question. Machines already outperform humans in some benchmark tests of object recognition, although they also have significant vulnerabilities to spoofing attacks.

Some decisions in war require judgement – a quality difficult to program into machines.

The laws of war require that any collateral damage from attacking a target not be disproportionate to the military advantage.

But deciding what number of civilian deaths is “proportionate” is a judgement call.

It’s possible that someday machines may be able to make these judgements if we can anticipate the specific circumstances, but the current state of AI means it will be difficult for machines to consider the broader context for their actions. Even if future machines can make these judgements, we must ask: Are there some decisions we want humans to make in war, not because machines can’t, but because they ought not to? If so, why?

Paul Scharre is a senior fellow at the Center for a New American Security.

PICTURE: REUTERS

JUDGEMENT CALL: Weapons are increasingly automated, raising new ethical questions.
