LET’S BAN KILLER ROBOTS

A call from researchers and industry

Ottawa Citizen - FRONT PAGE - Ian Kerr holds the Canada Research Chair in Ethics, Law and Technology at the University of Ottawa, where he teaches a course called The Laws of Robotics and is co-author of the forthcoming book Robot Law, which will be published by Edward Elgar in December.

Internet pioneer Stewart Brand famously said: “Once a new technology rolls over you, if you’re not part of the steamroller, you’re part of the road.”

This unseemly prospect is a powerful one, instilling in many the desire to build even bigger and better steamrollers.

Because, obviously, whoever builds the biggest steamroller wins. Right?

Wrong. This mentality, and the existential risks that emerging technologies pose, are precisely what more than 16,000 AI researchers, roboticists and others in related fields are now seeking to avoid. Like the many chemists and biologists who provided broad support for the prohibition of chemical and biological weapons, these AI researchers and roboticists don’t want to see anybody steamrollered by killer robots.

That’s right. Killer robots: offensive autonomous weapons that can select and engage targets without any need for human intervention.

In an open letter recently presented at the International Joint Conference on Artificial Intelligence in Buenos Aires, experts describe the prospect of killer robots as “the third revolution in warfare, after gunpowder and nuclear arms.” The letter calls for “a ban on offensive autonomous weapons” that can be engaged without meaningful or effective human control.

The list of signatories calling for a ban on offensive killer robots is impressive. Anyone who consumes popular media surely knows by now that it includes the likes of Tesla and SpaceX CEO Elon Musk, Apple co-founder Steve Wozniak, Skype co-founder Jaan Tallinn, physicist Stephen Hawking, and numerous highly influential academics such as Noam Chomsky and Daniel Dennett.

Unsurprisingly, the popular press has ignored a number of notable female signatories worthy of explicit mention (hat tip to Mary Wareham): Higgins Professor of Natural Sciences Barbara Grosz of Harvard University, IBM Watson design leader Kathryn McElroy, Martha E. Pollack of the University of Michigan, Carme Torras of the Robotics Institute at CSIC-UPC in Barcelona, Francesca Rossi of Padova University and Harvard University, Sheila McIlraith of the University of Toronto, Allison Okamura of Stanford University, Lucy Suchman of Lancaster University, Bonnie Webber of Edinburgh University, Mary-Anne Williams of the University of Technology Sydney, and Heather Roff of the University of Denver, to name a few.

I, too, am a signatory. I am a Canadian participant in the global Campaign To Stop Killer Robots (co-ordinated by Human Rights Watch in collaboration with eight other national and international NGOs). I am also a member of the International Committee for Robot Arms Control (an NGO committed to the peaceful use of robotics in the service of humanity).

As a technological concept, the killer robot represents a stark shift in military policy: a wilful, intentional and unprecedented removal of humans from the kill decision loop. Just set the robots loose and let them do our dirty work. For this reason and others, the United Nations has convened a series of meetings under its Convention on Conventional Weapons, hoping to better understand killer robots and their social implications.

To date, the debate has mostly focused on three issues: How far off are we from developing advanced autonomous weapons? Could such technologies be made to comport with international humanitarian law? Could a ban be effective if some nations do not comply?

On the first issue, the open letter reveals the stunning fact that many technologists believe the robot revolution is “feasible within years, not decades, and the stakes are high.”

Of course, this is largely speculative, and the actual timeline is surely longer once one layers on top of the technology the requirements of the second issue: that killer robots must comport with international humanitarian law. That is, machine systems operating without human intervention must be able to: successfully discriminate between combatants and non-combatants in the moment of conflict; morally assess every possible conflict in order to judge whether a particular use of force is proportional; and comprehend and assess military operations sufficiently well to be able to decide whether the use of force on a particular occasion is of military necessity.

To date, there is no obvious solution to these non-trivial technological challenges.

However, in my view, it is the stance taken on the third issue — whether it would be efficacious to ban killer robots in any event — that makes this open letter profound. This is what made me want to sign the letter.

Although engaged citizens sign petitions every day, it is not often that captains of industry, scientists and technologists call for prohibitions on innovation of any sort — let alone an outright ban. The ban is an important signifier. Even if it is self-serving insofar as it seeks to avoid “creating a major public backlash against AI that curtails its future societal benefits,” the letter, by recognizing that starting a military AI arms race is a bad idea, quietly reframes the policy question of whether to ban killer robots on grounds of morality rather than efficacy. This is crucial, as it provokes a fundamental reconceptualization of the many strategic arguments that have been made for and against autonomous weapons.

When one considers the matter from the standpoint of morality rather than efficacy, it is no longer good enough to say, as careful thinkers like Evan Ackerman have said, that “no letter, UN declaration, or even a formal ban ratified by multiple nations is going to prevent people from being able to build autonomous, weaponized robots.”

We know that. But that is not the point.

Delegating life-and-death decisions to machines crosses a fundamental moral line — no matter which side builds or uses them. Playing Russian roulette with the lives of others can never be justified merely on the basis of efficacy. This is not only a fundamental issue of human rights. The decision whether to ban or engage killer robots goes to the core of our humanity.

The Supreme Court of Canada has had occasion to consider the role of efficacy in determining whether to uphold a ban in other contexts. I concur with Justice Charles Gonthier, who astutely opined:

“(T)he actual effect of bans … is increasingly negligible given technological advances which make the bans difficult to enforce. With all due respect, it is wrong to simply throw up our hands in the face of such difficulties. These difficulties simply demonstrate that we live in a rapidly changing global community where regulation in the public interest has not always been able to keep pace with change. Current national and international regulation may be inadequate, but fundamental principles have not changed nor have the value and appropriateness of taking preventive measures in highly exceptional cases.”

Killer robots are a highly exceptional case.

Rather than asking whether we want to be part of the steamroller or part of the road, the open letter challenges our research communities to pave alternative pathways. As the letter states: “Autonomous weapons select and engage targets without human intervention.”

In my view, perhaps the chief virtue of the open letter is its implicit recognition that scientific wisdom posits limits. This is something Einstein learned the hard way, prompting his subsequent humanitarian efforts with the Emergency Committee of Atomic Scientists. Another important scientist, Carl Sagan, articulated this insight with stunning, poetic clarity:

“It might be a familiar progression, transpiring on many worlds – a planet, newly formed, placidly revolves around its star; life slowly forms; a kaleidoscopic procession of creatures evolves; intelligence emerges which, at least up to a point, confers enormous survival value; and then technology is invented. It dawns on them that there are such things as laws of Nature, that these laws can be revealed by experiment, and that knowledge of these laws can be made both to save and to take lives, on unprecedented scales.

Science, they recognize, grants immense powers. In a flash, they create world-altering contrivances. Some planetary civilizations see their way through, place limits on what may and what must not be done, and safely pass through the time of perils. Others, not so lucky or so prudent, perish.”

Recognizing the ethical wisdom of setting limits and living up to the demands of morality is difficult enough. Figuring out the practical means necessary to entrench those limits will be even tougher. But it is our obligation to try.

A mock ‘killer robot’ is pictured in central London, England, during the 2013 launch of the Campaign to Stop Killer Robots, which calls for a ban on lethal robot weapons that would be able to select and attack targets without any human intervention. (Carl Court/AFP/Getty Images)
