Toronto Star

ARE FEARS ABOUT ARTIFICIAL INTELLIGENCE OVERBLOWN?

- Kate Allen

Evil artificial intelligence is a mainstay of science fiction, from Terminator’s Skynet to 2001: A Space Odyssey’s HAL.

But as machine learning has boomed, a chorus of scientists concerned with keeping AI “safe” has emerged. Its loudest voices certainly aren’t crackpots: Stephen Hawking told the BBC artificial intelligence could “spell the end of the human race,” and Elon Musk, the Tesla and SpaceX entrepreneur, has called AI an “existential threat.”

The reaction to these comments from scientists who actually work on AI — Hawking and Musk both have physics backgrounds — ranges from puzzlement to contempt, considering that the best neural nets still mistake concerts for spiders.

“It’s good to have some people considering the ethics and implications of this sort of thing, but it’s not something I’m worried about any time in the next, say, 40 years,” says Google senior fellow Jeff Dean. “We have a lot of work to do to get really important useful capabilities into people’s hands — self-driving cars are going to save an enormous number of lives.”

Stanford University’s Andrew Ng compares worrying about AI destroying humanity to worrying about overpopulation on Mars: it could happen, but it’s such a distant problem there’s no practical way to set about fixing it right now.

The doomsday scenarios also distract from ethical issues that dog AI already.

The National Security Agency in the U.S. has a huge amount of data at its fingertips. It would be shocking if it wasn’t using neural networks to make sense of it. The U.S. Department of Defence continues to fund AI research: how much autonomy can we as a society comfortably transfer to intelligent drones or robots? Appropriate boundaries for lethal autonomous weapons systems are an ongoing international debate. And if you’re already uncomfortable with ads that pick up keywords from your Facebook posts and email correspondence, you might not look forward to those systems getting smarter.

Then there’s the job question. Traditional computing replaced many menial tasks; neural nets are adept at navigating deep reservoirs of knowledge. Startups such as San Francisco-based Enlitic believe that deep learning algorithms can do a better, faster job of reading medical scans than the best-trained human beings. Is that a good or bad thing? Some in the field believe that artificial intelligence will augment, not replace: algorithms will free us from rote tasks like memorizing reams of legal precedents and allow us to pursue the higher-order thinking our massive brains are capable of. Others think the only tasks machines can’t do better are creative ones.

Artificial intelligence, like the Internet or genomic science, is a general-purpose technology. Whether it is used for good or ill is up to us. But it probably won’t turn on us any time soon.
