USA TODAY International Edition
Artificial intelligence
Doomsday scenario or age of wonder?
SAN FRANCISCO – Artificial intelligence. Machine learning. Knowledge engineering.
Call it what you want, but AI by any name had the tech world uniquely divided in 2017, and the new year isn’t likely to bring any quick resolutions.
In case you missed it, the fiery debate over AI’s potential impact on society was encapsulated by the opinions of two boldface Silicon Valley names.
Tesla and SpaceX CEO Elon Musk told the National Governors Association this fall that his exposure to AI technology suggests it poses “a fundamental risk to the existence of human civilization.”
Facebook founder Mark Zuckerberg parried such doomsday talk — which would include cosmologist Stephen Hawking’s view that AI could prove “the worst event in the history of civilization” — with a video post calling such negative talk “pretty irresponsible.”
As the war of words raged, AI continued its creep into our daily lives, from the new facial recognition software in Apple’s iPhone X to the increasingly savvy responses from digital assistants Siri, Alexa and Cortana.
With the amount of often personal information fed by consumers into cloud-based brains compounding exponentially, companies such as Facebook and Google are poised to have unprecedented insights into, and leverage over, our lives.
So which is it — are we heading into a glorious tech-enabled future where many menial tasks will be handled by savant machines, or one where the robots will have taken over for us woefully underpowered humans?
USA TODAY reached out to a number of artificial intelligence stakeholders to get their view on AI, friend or foe.
The conclusion: Excitement over AI’s potentially positive impacts seems, for now, adequately tempered by an acknowledgement that scientists need to stay vigilant about how such technology is developed, to ensure bias is eliminated and control is retained.
AI watchdog groups on the rise
“Innovation has generally liberated humans to be more productive,” says Rep. John Delaney, D-Md. Last fall, along with colleague Pete Olson, R-Texas, Delaney launched the AI Caucus, whose mission is to inform policymakers about the technological, economic and social impacts of AI.
Delaney says there are “a million conversations that can happen between now and the Terminator arriving,” referring to the film in which machines attempt to exterminate humans.
Although he says he is particularly concerned about retraining workers for an AI-rife future, “I don’t subscribe to a doomsday scenario.”
There are in fact a growing number of groups being formed to try to ensure that dismal future never comes to pass.
These include AI Now, which is led by New York University researcher Kate Crawford, who last year warned attendees at SXSW of the possible rise of fascist AI. There’s also OpenAI, a Musk-backed research outfit, and the Partnership on AI, whose members include Google, Facebook, Amazon, IBM and Microsoft, though notably not Apple.
Apple, Facebook and Amazon declined to provide an executive to speak on the record on AI’s pros and cons.
Eric Horvitz, who heads Microsoft Research, says the company last summer created an internal review board called Aether — AI and Ethics in Engineering and Research — that is tasked with closely monitoring progress not just in machine learning but also fields such as object recognition and emotion detection.
Another organization vowing to tackle AI’s dark side is the recently formed DeepMind Ethics & Society research group, which aims to publish papers focused on some of the most vexing issues posed by AI. London-based DeepMind was bought by Google in 2014 to expand its own AI work.
One of the group’s key members is Nick Bostrom, the Swedish-born Oxford University professor whose 2014 book, Superintelligence: Paths, Dangers, Strategies, first caused Musk to caution against AI’s dangers.
Woz: From AI skeptic to fan
Apple co-founder Steve Wozniak initially found himself in the AI-wary camp. He, like Musk and Hawking, was concerned that machines with humanlike consciousness could eventually pose a risk to Homo sapiens.
But then he changed his thinking, largely because humans remain perplexed by how the brain works its magic, which means it would be difficult for scientists to create machines that can think like us.
“We may have machines now that simulate intelligence, but that’s different from truly replicating how the brain works,” says Wozniak. “If we don’t understand things like where memories are stored, what’s the point of worrying about when the Singularity is going to take over and run everything?”
Singularity refers to the moment in which machines become so intelligent they are able to run and upgrade themselves.
Most consumers wary
Results of a Pew Research Center poll released in October found that roughly half to nearly three-quarters of respondents considered themselves “worried” when asked about AI’s impact on doing human jobs (72%), evaluating job candidates (67%), building self-driving cars (54%) and caring for the elderly (47%).
A SurveyMonkey poll on AI conducted for USA TODAY also had overtones of concern, with 73% of respondents saying they would prefer that AI be limited in the rollout of newer tech so that it doesn’t become a threat to humans.
U.S. researchers have no choice but to push forward with AI developments because inaction is not an alternative, says Oren Etzioni, CEO of the Allen Institute for AI, which was started by Microsoft co-founder Paul Allen.
“AI may seem threatening,” he says, “but hitting the pause button is not realistic.”