Artificial Intelligence Experts Pledge to Not Help Build Terminators

Trillions

On July 18, individuals, companies and other organizations banded together in a rare sign of international unity to sign a joint pledge saying they would “neither participate in nor support the development, manufacture, trade or use of lethal autonomous weapons”. The pledge also said the groups would call upon governments to “create a future with strong international norms, regulations and laws” to protect against what in old science fiction movies was once called “the rise of the machines”.

The agreement was developed by the Future of Life Institute (FLI) and reads in full as follows:

Lethal Autonomous Weapons Pledge

Artificial intelligence (AI) is poised to play an increasing role in military systems. There is an urgent opportunity and necessity for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI.

In this light, we the undersigned agree that the decision to take a human life should never be delegated to a machine. There is a moral component to this position, that we should not allow machines to make life-taking decisions for which others – or nobody – will be culpable. There is also a powerful pragmatic argument: lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilizing for every country and individual. Thousands of AI researchers agree that by removing the risk, attributability, and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems. Moreover, lethal autonomous weapons have characteristics quite different from nuclear, chemical and biological weapons, and the unilateral actions of a single group could too easily spark an arms race that the international community lacks the technical tools and global governance systems to manage. Stigmatizing and preventing such an arms race should be a high priority for national and global security.

We, the undersigned, call upon governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons. These currently being absent, we opt to hold ourselves to a high standard: we will neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons. We ask that technology companies and organizations, as well as leaders, policymakers, and other individuals, join us in this pledge.

The pledge has been signed by over 160 companies and organizations from three dozen countries and by more than 2,400 individuals from 90 countries. The Future of Life Institute (FLI) also noted in publishing the pledge that, separately from it, 26 countries in the United Nations have “explicitly endorsed the call for a ban on lethal autonomous weapon systems”. They are: Algeria, Argentina, Austria, Bolivia, Brazil, Chile, China, Colombia, Costa Rica, Cuba, Djibouti, Ecuador, Egypt, Ghana, Guatemala, Holy See, Iraq, Mexico, Nicaragua, Pakistan, Panama, Peru, State of Palestine, Uganda, Venezuela and Zimbabwe.

When the pledge was unveiled on July 18 at the annual International Joint Conference on Artificial Intelligence (IJCAI) in Stockholm, Sweden, FLI President and MIT Professor Max Tegmark said, “I’m excited to see AI leaders shifting from talk to action, implementing a policy that politicians have thus far failed to put into effect. AI has huge potential to help the world—if we stigmatize and prevent its abuse. AI weapons that autonomously decide to kill people are as disgusting and destabilizing as bioweapons and should be dealt with in the same way.”

One of the pledge signatories, Anthony Aguirre, a professor at the University of California, Santa Cruz, said in a statement to CNN, “We would really like to ensure that the overall impact of the technology is positive and not leading to a terrible arms race, or a dystopian future with robots flying around killing everybody.” Part of the power of the pledge is in what it says about the signers, of course. Another part is in how it can affect public opinion of those who have not signed but might still be impacted by it. As signatory and AI expert Yoshua Bengio of the Montreal Institute for Learning Algorithms noted, this kind of pledge can act as a public shaming mechanism for those who have not yet signed up. In an interview with The Guardian, he said, “This approach actually worked for land mines, thanks to international treaties and public shaming, even though major countries like the U.S. did not sign the treaty banning land mines.”

Whether this agreement will have the same effect is unfortunately questionable. For one thing, land mines have little value other than to blow people up. They are therefore by definition “bad things” for which pledges and public shaming tend to work well. For artificial intelligence, the problem is very different. Much of the technology development inherent in AI, whether code, electronics, sensor mechanisms or adaptive machinery, can be used as much for good as for anything else. That means AI will continue to chug along developing subsystems to do many things that could eventually become part of “killer robots”, even if the companies involved argue they won’t do anything to support making such things.

A second issue is that signing this is a little like shutting the barn door after the horses have already left. In truth, the U.S. is already well down the path of developing these kinds of weapons, since AI-like technology is already embedded in everything from machine systems controls to targeting technologies.

Deep black American military technology is often decades ahead of publicly known technology. DARPA, the Defense Advanced Research Projects Agency, has a vast budget to create future weapons. Its secret deep black counterpart has an even greater budget to make weapons of the future a reality today.

Other countries, like China and Russia, are also already well on their way with weapons of these kinds. Signing a pledge won’t make them “undo” their work.

Finally, unlike with nuclear weapons, many of the military applications of AI will be promoted as “defense” rather than “offense”, which is what the pledge seems to imply is the objectionable use. Using AI for “defense” sounds like it would be acceptable under the pledge, even if “Killer Robots for Peace” doesn’t exactly have a positive ring to it.

Still, the signing of this kind of agreement raises awareness of what AI must not become: yet another way to wage war on our fellow humans. Though it will be hard to keep it from becoming part of that, perhaps, with awareness and intent, the technology may not find its way into the ugliest of weapons for at least some time into the future.

The next step for those opposed to the use of AI in autonomous weapons systems would be to pledge to work together to develop the means to detect and defeat such technology.

With the fusion of electronic and genetic technology, we may not even be able to recognize what is AI, what is a genetically enhanced human and what is partially human but mostly cyborg. Reports by regular human soldiers of hyper-lethal and bullet-proof super soldiers operating in Iraq suggest that such weapons already exist. The Iraqi people were defenseless against such weapons. What will we do when they are unleashed on us?
