USA TODAY International Edition

Artificial intelligence

Doomsday scenario or age of wonder?

- Marco della Cava

SAN FRANCISCO – Artificial intelligence. Machine learning. Knowledge engineering.

Call it what you want, but AI by any name had the tech world uniquely divided in 2017, and the new year isn’t likely to bring any quick resolutions.

In case you missed it, the fiery debate over AI’s potential impact on society was encapsulated by the opinions of two boldface Silicon Valley names.

Tesla and SpaceX CEO Elon Musk told the National Governors Association this fall that his exposure to AI technology suggests it poses “a fundamental risk to the existence of human civilization.”

Facebook founder Mark Zuckerberg parried such doomsday talk — which would include cosmologist Stephen Hawking’s view that AI could prove “the worst event in the history of civilization” — with a video post calling such negative talk “pretty irresponsible.”

As the war of words raged, AI continued its creep into our daily lives, from the new facial recognition software in Apple’s iPhone X to the increasingly savvy responses from digital assistants Siri, Alexa and Cortana.

With the amount of often personal information fed by consumers into cloud-based brains compounding exponentially, companies such as Facebook and Google are poised to have unprecedented insights into, and leverage over, our lives.

So which is it — are we heading into a glorious tech-enabled future where many menial tasks will be handled by savant machines, or one where the robots will have taken over for us woefully underpowered humans?

USA TODAY reached out to a number of artificial intelligence stakeholders to get their views on whether AI is friend or foe.

The conclusion: Excitement over AI’s potentially positive impacts seems, for now, adequately tempered by an acknowledgement that scientists need to stay vigilant about how such technology is developed, to ensure bias is eliminated and control is retained.

AI watchdog groups on the rise

“Innovation has generally liberated humans to be more productive,” says Rep. John Delaney, D-Md. Last fall, along with colleague Pete Olson, R-Texas, Delaney launched the AI Caucus, whose mission is to inform policymakers about the technological, economic and social impacts of AI.

Delaney says there are “a million conversations that can happen between now and the Terminator arriving,” referring to the film in which machines attempt to exterminate humans.

Although he says he is particularly concerned about retraining workers for an AI-rife future, “I don’t subscribe to a doomsday scenario.”

In fact, a growing number of groups are being formed to try to ensure that dismal future never comes to pass.

These include AI Now, which is led by New York University researcher Kate Crawford, who last year warned attendees at SXSW of the possible rise of fascist AI. There’s also OpenAI, a Musk-backed research outfit, and the Partnership on AI, whose members include Google, Facebook, Amazon, IBM and Microsoft, though notably not Apple.

Apple, Facebook and Amazon declined to provide an executive to speak on the record on AI’s pros and cons.

Eric Horvitz, who heads Microsoft Research, says the company last summer created an internal review board called Aether — AI and Ethics in Engineering and Research — that is tasked with closely monitoring progress not just in machine learning but also fields such as object recognition and emotion detection.

Another organization vowing to tackle AI’s dark side is the recently formed DeepMind Ethics & Society research group, which aims to publish papers focused on some of the most vexing issues posed by AI. London-based DeepMind was bought by Google in 2014 to expand its own AI work.

One of the group’s key members is Nick Bostrom, the Swedish-born Oxford University professor whose 2014 book, Superintelligence: Paths, Dangers, Strategies, first caused Musk to caution against AI’s dangers.

Woz: From AI skeptic to fan

Apple co-founder Steve Wozniak initially found himself in the AI-wary camp. He, like Musk and Hawking, was concerned that machines with humanlike consciousness could eventually pose a risk to Homo sapiens.

But then he changed his thinking, largely because humans remain perplexed by how the brain works its magic, which in turn means it would be difficult for scientists to create machines that can think like us.

“We may have machines now that simulate intelligence, but that’s different from truly replicating how the brain works,” says Wozniak. “If we don’t understand things like where memories are stored, what’s the point of worrying about when the Singularity is going to take over and run everything?”

The Singularity refers to the moment in which machines become so intelligent that they are able to run and upgrade themselves.

Most consumers wary

Results of a Pew Research Center poll released in October found that roughly half to three-quarters of respondents considered themselves “worried” when asked about AI’s impact on doing human jobs (72%), evaluating job candidates (67%), building self-driving cars (54%) and caring for the elderly (47%).

A SurveyMonkey poll on AI conducted for USA TODAY also had overtones of concern, with 73% of respondents saying they would prefer that AI be limited in the rollout of newer tech so that it doesn’t become a threat to humans.

U.S. researchers have no choice but to push forward with AI developments because inaction is not an alternative, says Oren Etzioni, CEO of the Allen Institute for AI, which was started by Microsoft co-founder Paul Allen.

“AI may seem threatenin­g,” he says, “but hitting the pause button is not realistic.”


GETTY IMAGES/ISTOCKPHOTO: Many are wary of robots performing human jobs.
