I'm a pacifist, so why don't I support the Campaign to Stop Killer Robots?

The Guardian Australia - Environment / Science - Subbarao Kambhampati

The Campaign to Stop Killer Robots has called on the UN to ban the development and use of autonomous weapons: those that can identify, track and attack targets without meaningful human oversight. On Monday, the group released a sensationalist video, supported by some prominent artificial intelligence researchers, depicting a dystopian future in which such machines run wild.

I am gratified that my colleagues are volunteering their efforts to ensure beneficial uses of artificial intelligence (AI) technology. But I am unconvinced of the effectiveness of the campaign beyond a symbolic gesture. Even though I identify strongly as a pacifist, I have reservations about signing up to the proposed ban. I am not alone in this predicament.

Apart from the difficulty of pinning down exactly what the ban entails for states that want to follow it – is the ban against autonomy or intelligence? – I wonder about the ban's ability to deter misuse by rogue state or non-state actors. To the extent that bans on conventional and nuclear weapons have been effective, it is because of the significant natural barriers to entry: the raw materials and equipment needed to make those weapons are hard to obtain, and responsible states can control them to a significant extent by fiat and sanctions. In contrast, the AI technology that ostensibly enables the kind of weapons this ban is aimed at is already quite open and, some may argue, admirably so. Misuses of it can thus be hard to control by fiat or bans, as with cyber warfare, for example.

Consider the hypothetical “killer drones” depicted in the video accompanying the Guardian’s article on the call for the ban. Even today, the face recognition technology supposedly needed by such drones can be easily constructed by anyone with access to the internet: several near-state-of-the-art “pretrained networks” are available open source. Things will only become easier as we make further technical advances.
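To make that point concrete, here is a minimal sketch of how little effort face matching takes today, using the open-source face_recognition library (which wraps dlib's pretrained deep models). The image filenames are placeholders I have invented for illustration; nothing here is specific to any weapons system.

```python
# A minimal sketch of how accessible pretrained face recognition is.
# Requires: pip install face_recognition (open-source, wraps dlib's
# pretrained models). The .jpg filenames below are hypothetical.
import face_recognition

# Load a reference photo of a person and a candidate photo to search.
known_image = face_recognition.load_image_file("target.jpg")
unknown_image = face_recognition.load_image_file("street_photo.jpg")

# Each call runs a pretrained deep network and returns 128-d embeddings,
# one per detected face (raises IndexError if no face is found).
known_encoding = face_recognition.face_encodings(known_image)[0]
unknown_encoding = face_recognition.face_encodings(unknown_image)[0]

# True if the face in the street photo matches the reference face.
match = face_recognition.compare_faces([known_encoding], unknown_encoding)[0]
print("Match:", match)
```

That a working identification pipeline fits in a dozen lines of freely available code is precisely why fiat-based controls on the underlying technology are so hard to enforce.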

Given these significantly lower barriers to entry, even if the UN and some constituent states agreed to a ban, it is far from clear that it would stop rogue state and non-state actors from procuring and deploying such technology for malicious purposes. This would render such a ban at best a pyrrhic victory for the proponents of peace, and at worst have the ironic and unintended effect of tying the hands of the “good actors” while doing little to stop the bad ones.

As an AI researcher, I am also disturbed by the sensationalisation of the whole issue through dystopian – if high-production-value – videos such as the one reported in the Guardian article. Using “Daisy Girl”-style campaign ads designed to stoke public fears about AI technologies seems to me more an exercise in inflaming than in informing public opinion.

Given these concerns about the effectiveness of blanket bans, I believe that AI researchers should instead be thinking of more proactive technical solutions to mitigate potential misuses of AI technologies. As one small example of this alternative strategy, we held a workshop at Arizona State University in early March 2017 titled Challenges of AI: Envisioning and Addressing Adverse Outcomes. The workshop was attended by many leading scientists, technologists and ethicists, and had the aim of coming up with defensive responses to a variety of potential misuses of AI technology, including lethal autonomous weapons.

One recurrent theme of the workshop was using AI technology itself as a defence against adverse or malicious uses of AI. This could include research into so-called “guardian AI systems” that can provide monitoring and defensive responses. Even if such efforts don’t succeed in completely containing the adverse effects, they could at least better inform public policy on these issues.
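“Guardian AI systems” remain an open research direction rather than an existing technology. The toy sketch below only illustrates the architectural idea of a monitor wrapping a sensitive capability; every name and threshold in it is invented for illustration, and a real guardian would rely on learned models of misuse rather than a fixed rate limit.

```python
# Illustrative sketch of the "guardian" architecture: a monitor that sits
# in front of a sensitive capability (here, a hypothetical face-matching
# function) and flags anomalous usage patterns such as bulk lookups.
import time
from collections import deque

class GuardianMonitor:
    """Wraps a sensitive capability and watches how it is being used."""

    def __init__(self, capability, max_calls_per_minute=30):
        self.capability = capability
        self.max_calls = max_calls_per_minute
        self.call_times = deque()

    def __call__(self, *args, **kwargs):
        now = time.monotonic()
        # Keep only the timestamps inside the trailing 60-second window.
        while self.call_times and now - self.call_times[0] > 60:
            self.call_times.popleft()
        self.call_times.append(now)
        # Bulk identification attempts are one signature of misuse.
        if len(self.call_times) > self.max_calls:
            raise RuntimeError("Guardian: anomalous bulk usage detected")
        return self.capability(*args, **kwargs)

def match_face(photo):
    """Hypothetical stand-in for a sensitive face-matching capability."""
    return "no match"

guarded_match = GuardianMonitor(match_face)
print(guarded_match("street_photo.jpg"))
```

The design point is simply that the defence lives alongside the capability, observing its use, rather than in a treaty text far away from it.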

To reiterate, I consider myself a pacifist, and have always supported efforts to control arms and curb wars. If I believed that the proposed ban would be effective and not merely symbolic, and that this campaign would inform rather than inflame the public, I would gladly have supported it.

Disclaimer: In the interests of full disclosure, let me state that some of my basic research (on human-aware AI) is supported by US Department of Defense funding agencies (eg the Office of Naval Research). However, my research funding sources have no impact on my personal views, and defence funding agencies in the US support a wide spectrum of basic research, including that by researchers involved in the ban campaign.

Subbarao Kambhampati is a professor of computer science at Arizona State University, and the president of the Association for the Advancement of Artificial Intelligence.
