Ottawa Citizen

LET’S BAN KILLER ROBOTS

A call from researchers and industry

- Ian Kerr holds the Canada Research Chair in Ethics, Law and Technology at the University of Ottawa, where he teaches a course called The Laws of Robotics and is co-author of the forthcoming book Robot Law, to be published by Edward Elgar in December.

Internet pioneer Stewart Brand famously said: “Once a new technology rolls over you, if you’re not part of the steamroller, you’re part of the road.”

This unseemly prospect is extremely powerful, imbuing in many the desire to build even bigger and better steamrollers.

Because, obviously, whoever builds the biggest steamroller wins. Right?

Wrong. This mentality, and the existential risks that emerging technologies pose, are precisely what more than 16,000 AI researchers, roboticists and others in related fields are now seeking to avoid. Like the many chemists and biologists who provided broad support for the prohibition of chemical and biological weapons, these AI researchers and roboticists don’t want to see anybody steamrollered by killer robots.

That’s right. Killer robots. Killer robots are offensive autonomous weapons that can select and engage targets without any need for human intervention.

In an open letter recently presented at the International Joint Conference on Artificial Intelligence in Buenos Aires, experts describe the prospect of killer robots as “the third revolution in warfare, after gunpowder and nuclear arms.” The letter calls for “a ban on offensive autonomous weapons” that can be engaged without meaningful or effective human control.

The list of signatories calling for a ban on offensive killer robots is impressive. Anyone who consumes popular media surely knows by now that it includes the likes of Tesla and SpaceX CEO Elon Musk, Apple co-founder Steve Wozniak, Skype co-founder Jaan Tallinn, physicist Stephen Hawking, and numerous highly influential academics such as Noam Chomsky and Daniel Dennett.

Unsurprisingly, the popular press has ignored a number of notable female signatories worthy of explicit mention (hat tip to Mary Wareham): Higgins Professor of Natural Sciences Barbara Grosz of Harvard University, IBM Watson design leader Kathryn McElroy, Martha E. Pollack of the University of Michigan, Carme Torras of the Robotics Institute at CSIC-UPC in Barcelona, Francesca Rossi of Padova University and Harvard University, Sheila McIlraith of the University of Toronto, Allison Okamura of Stanford University, Lucy Suchman of Lancaster University, Bonnie Weber of Edinburgh University, Mary-Anne Williams of the University of Technology Sydney, and Heather Roff of the University of Denver, to name a few.

I, too, am a signatory. I am a Canadian participant in the global Campaign To Stop Killer Robots (co-ordinated by Human Rights Watch in collaboration with eight other national and international NGOs). I am also a member of the International Committee for Robot Arms Control (an NGO committed to the peaceful use of robotics in the service of humanity).

As a technological concept, the killer robot represents a stark shift in military policy: a wilful, intentional and unprecedented removal of humans from the kill decision loop. Just set the robots loose and let them do our dirty work. For this reason and others, the United Nations has dedicated a series of meetings under its Convention on Conventional Weapons, hoping to better understand killer robots and their social implications.

To date, the debate has mostly focused on three issues: How far off are we from developing advanced autonomous weapons? Could such technologies be made to comport with international humanitarian law? Could a ban be effective if some nations do not comply?

On the first issue, the open letter reveals the stunning fact that many technologists believe the robot revolution is “feasible within years, not decades, and the stakes are high.”

Of course, this is largely speculative, and the actual timeline is surely longer once one layers on top of the technology the requirements of the second issue: that killer robots must comport with international humanitarian law. That is, machine systems operating without human intervention must be able to: successfully discriminate between combatants and non-combatants in the moment of conflict; morally assess every possible conflict in order to justify whether a particular use of force is proportional; and comprehend and assess military operations sufficiently well to be able to decide whether the use of force on a particular occasion is of military necessity.

To date, there is no obvious solution to these non-trivial technological challenges.

However, in my view, it is the stance taken on the third issue — whether it would be efficacious to ban killer robots in any event — that makes this open letter profound. This is what made me want to sign the letter.

Although engaged citizens sign petitions every day, it is not often that captains of industry, scientists and technologists call for prohibitions on innovation of any sort — let alone an outright ban. The ban is an important signifier. Even if it is self-serving insofar as it seeks to avoid “creating a major public backlash against AI that curtails its future societal benefits,” by recognizing that starting a military AI arms race is a bad idea, the letter quietly reframes the policy question of whether to ban killer robots on grounds of morality rather than efficacy. This is crucial, as it provokes a fundamental reconceptualization of the many strategic arguments that have been made for and against autonomous weapons.

When one considers the matter from the standpoint of morality rather than efficacy, it is no longer good enough to say, as careful thinkers like Evan Ackerman have said, that “no letter, UN declaration, or even a formal ban ratified by multiple nations is going to prevent people from being able to build autonomous, weaponized robots.”

We know that. But that is not the point.

Delegating life-and-death decisions to machines crosses a fundamenta­l moral line — no matter which side builds or uses them. Playing Russian roulette with the lives of others can never be justified merely on the basis of efficacy. This is not only a fundamenta­l issue of human rights. The decision whether to ban or engage killer robots goes to the core of our humanity.

The Supreme Court of Canada has had occasion to consider the role of efficacy in determining whether to uphold a ban in other contexts. I concur with Justice Charles Gonthier, who astutely opined:

“(T)he actual effect of bans … is increasingly negligible given technological advances which make the bans difficult to enforce. With all due respect, it is wrong to simply throw up our hands in the face of such difficulties. These difficulties simply demonstrate that we live in a rapidly changing global community where regulation in the public interest has not always been able to keep pace with change. Current national and international regulation may be inadequate, but fundamental principles have not changed nor have the value and appropriateness of taking preventive measures in highly exceptional cases.”

Killer robots are a highly exceptional case.

Rather than asking whether we want to be part of the steamroller or part of the road, the open letter challenges our research communities to pave alternative pathways. As the letter states: “Autonomous weapons select and engage targets without human intervention.”

In my view, perhaps the chief virtue of the open letter is its implicit recognition that scientific wisdom posits limits. This is something Einstein learned the hard way, prompting his subsequent humanitarian efforts with the Emergency Committee of Atomic Scientists. Another important scientist, Carl Sagan, articulated this insight with stunning, poetic clarity:

“It might be a familiar progression, transpiring on many worlds – a planet, newly formed, placidly revolves around its star; life slowly forms; a kaleidoscopic procession of creatures evolves; intelligence emerges which, at least up to a point, confers enormous survival value; and then technology is invented. It dawns on them that there are such things as laws of Nature, that these laws can be revealed by experiment, and that knowledge of these laws can be made both to save and to take lives, on unprecedented scales.

Science, they recognize, grants immense powers. In a flash, they create world-altering contrivances. Some planetary civilizations see their way through, place limits on what may and what must not be done, and safely pass through the time of perils. Others, not so lucky or so prudent, perish.”

Recognizing the ethical wisdom of setting limits and living up to the demands of morality is difficult enough. Figuring out the practical means necessary to entrench those limits will be even tougher. But it is our obligation to try.

Ian Kerr

CARL COURT/AFP/GETTY IMAGES — A mock ‘killer robot’ is pictured in central London, England, during the 2013 launch of the Campaign to Stop Killer Robots, which calls for a ban on lethal robot weapons that would be able to select and attack targets without any human intervention.
