National Post

How to ensure AI doesn’t run amok

- Oren Etzioni, The New York Times. Oren Etzioni is the chief executive of the Allen Institute for Artificial Intelligence.

Technology entrepreneur Elon Musk recently urged U.S. governors to regulate artificial intelligence “before it’s too late.” Musk insists that artificial intelligence represents an “existential threat to humanity,” an alarmist view that confuses AI science with science fiction. Nevertheless, even AI researchers like me recognize that there are valid concerns about its impact on weapons, jobs and privacy. It’s natural to ask whether we should develop AI at all.

I believe the answer is yes. But shouldn’t we take steps to at least slow down progress on AI, in the interest of caution?

The problem is that if we do so, then nations like China will overtake us. The AI horse has left the barn, and our best bet is to attempt to steer it. AI should not be weaponized, and any AI must have an impregnable “off switch.” Beyond that, we should regulate the tangible impact of AI systems (for example, the safety of autonomous vehicles) rather than trying to define and rein in the amorphous and rapidly developing field of AI.

I propose three rules for artificial intelligence systems that are inspired by, yet develop further, the “three laws of robotics” that writer Isaac Asimov introduced in 1942: A robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey the orders given it by human beings, except when such orders would conflict with the previous law; and a robot must protect its own existence as long as such protection does not conflict with the previous two laws.

These three laws are elegant but ambiguous: What, exactly, constitutes harm when it comes to AI? I suggest a more concrete basis for avoiding AI harm, based on three rules of my own.

First, an AI system must be subject to the full gamut of laws that apply to its human operator. This rule would cover private, corporate and government systems. We don’t want AI to engage in cyberbullying, stock manipulation or terrorist threats; we don’t want the FBI to release AI systems that entrap people into committing crimes. We don’t want autonomous vehicles that drive through red lights, or worse, AI weapons that violate international treaties.

Our common law should be amended so that we can’t claim that our AI system did something that we couldn’t understand or anticipate. Simply put, “My AI did it” should not excuse illegal behaviour.

My second rule is that an AI system must clearly disclose that it is not human. As we have seen in the case of bots — computer programs that can engage in increasingly sophisticated dialogue with real people — society needs assurances that AI systems are clearly labelled as such. In 2016, a bot known as Jill Watson, which served as a teaching assistant for an online course at Georgia Tech, fooled students into thinking it was human. A more serious example is the widespread use of pro-Trump political bots on social media in the days leading up to the 2016 elections, according to researchers at Oxford.

My rule would ensure that people know when a bot is impersonating someone. We have already seen, for example, @DeepDrumpf — a bot that humorously impersonated Donald Trump on Twitter. AI systems don’t just produce fake tweets; they also produce fake news videos. Researchers at the University of Washington recently released a fake video of former president Barack Obama in which he convincingly appeared to be speaking words that had been grafted onto video of him talking about something entirely different.

My third rule is that an AI system cannot retain or disclose confidential information without explicit approval from the source of that information. Because of their exceptional ability to automatically elicit, record and analyze information, AI systems are in a prime position to acquire confidential information. Think of all the conversations that Amazon Echo — a “smart speaker” present in an increasing number of homes — is privy to, or the information that your child may inadvertently divulge to a toy such as an AI Barbie. That is information you want to make sure you control.

My three AI rules are, I believe, sound but far from complete. I introduce them here as a starting point for discussion. Society needs to get ready.
