ARE FEARS ABOUT ARTIFICIAL INTELLIGENCE OVERBLOWN?
Evil artificial intelligence is a mainstay of science fiction, from Terminator’s Skynet to 2001: A Space Odyssey’s HAL.
But as machine learning has boomed, a chorus of scientists concerned with keeping AI “safe” has emerged. Its loudest voices certainly aren’t crackpots: Stephen Hawking told the BBC artificial intelligence could “spell the end of the human race,” and Elon Musk, the Tesla and SpaceX entrepreneur, has called AI an “existential threat.”
The reaction to these comments from scientists who actually work on AI (neither Hawking nor Musk does; both have physics backgrounds) ranges from puzzlement to contempt, considering that the best neural nets still mistake concerts for spiders.
“It’s good to have some people considering the ethics and implications of this sort of thing, but it’s not something I’m worried about any time in the next, say, 40 years,” says Google senior fellow Jeff Dean. “We have a lot of work to do to get really important useful capabilities into people’s hands — self-driving cars are going to save an enormous number of lives.”
Stanford University’s Andrew Ng compares worrying about AI destroying humanity to worrying about overpopulation on Mars: it could happen, but it’s such a distant problem there’s no practical way to set about fixing it right now.
The doomsday scenarios also distract from ethical issues that dog AI already.
The National Security Agency in the U.S. has a huge amount of data at its fingertips; it would be shocking if it weren't using neural networks to make sense of it. The U.S. Department of Defense continues to fund AI research: how much autonomy can we as a society comfortably transfer to intelligent drones or robots? Appropriate boundaries for lethal autonomous weapons systems are the subject of ongoing international debate. And if you're already uncomfortable with ads that pick up keywords from your Facebook posts and email correspondence, you might not look forward to those systems getting smarter.
Then there’s the job question. Traditional computing replaced many menial tasks; neural nets are adept at navigating deep reservoirs of knowledge. Startups such as San Francisco-based Enlitic believe that deep learning algorithms can do a better, faster job of reading medical scans than the best-trained human beings. Is that a good or bad thing? Some in the field believe that artificial intelligence will augment, not replace: algorithms will free us from rote tasks like memorizing reams of legal precedents and allow us to pursue the higher-order thinking our massive brains are capable of. Others think the only tasks machines can’t do better are creative ones.
Artificial intelligence, like the Internet or genomic science, is a general-purpose technology. Whether it is used for good or ill is up to us. But it probably won’t turn on us any time soon.