The Atlanta Journal-Constitution

Some experts hope for smarter path to AI

They fear AI may hit technical wall, face popular backlash.

- Steve Lohr

For the past five years, the hottest thing in artificial intelligence has been a branch known as deep learning. The grandly named statistical technique, put simply, gives computers a way to learn by processing large amounts of data. Thanks to deep learning, computers can easily identify faces and recognize spoken words, making other forms of humanlike intelligence suddenly seem within reach.

Companies like Google, Facebook and Microsoft have poured money into deep learning. Startups pursuing everything from cancer cures to back-office automation trumpet their deep learning expertise. And the technology’s perception and pattern-matching abilities are being applied to improve progress in fields like drug discovery and self-driving cars.

But now some are asking whether deep learning is really so deep after all.

In recent conversations, online comments and a few lengthy essays, a growing number of AI experts are warning that the infatuation with deep learning may well breed myopia and overinvestment now — and disillusionment later.

“There is no real intelligence there,” said Michael I. Jordan, a professor at the University of California, Berkeley, and the author of an essay published in April intended to temper the lofty expectations surrounding AI. “And I think that trusting these brute force algorithms too much is a faith misplaced.”

The danger, some experts warn, is that AI runs into a technical wall and eventually faces a popular backlash — a familiar pattern in artificial intelligence since that term was coined in the 1950s. With deep learning in particular, researchers said, the concerns are being fueled by the technology’s limits. While deep learning has spawned successes, the results are confined to fields where vast amounts of data are available to train the learning software on well-defined tasks.

Yet deep learning technology struggles in the more open terrains of intelligence — that is, meaning, reasoning and common-sense knowledge. While deep learning software can instantly identify millions of words, it has no understanding of a concept like “justice,” “democracy” or “meddling.”

So even as deep learning’s algorithmic vision is a triumph of powerful pattern matching with big data, researchers have shown it can be easily fooled. Scramble a relative handful of pixels, and the technology can mistake a turtle for a rifle or a parking sign for a refrigerator.

In a widely read article published early this year on arXiv.org, a site for scientific papers, Gary Marcus, a professor at New York University, posed the question: “Is deep learning approaching a wall?” He wrote, “As is so often the case, the patterns extracted by deep learning are more superficial than they initially appear.”

If the reach of deep learning is limited, too much money and too many fine minds may now be devoted to it, said Oren Etzioni, chief executive of the Allen Institute for Artificial Intelligence. “We run the risk of missing other important concepts and paths to advancing AI,” he said.

Amid the debate, some research groups, startups and computer scientists are showing more interest in approaches to artificial intelligen­ce that go beyond deep learning. For one, the Allen Institute, a nonprofit lab in Seattle, announced in February that it would invest $125 million over the next three years largely in research to teach machines to generate common-sense knowledge — an initiative called Project Alexandria.

That program and other efforts vary, but their common goal is a broader and more flexible intelligence than deep learning, and they are typically far less data-hungry. They often use deep learning as just an ingredient in their recipe.

“We’re not anti-deep learning,” said Yejin Choi, a researcher at the Allen Institute and a computer scientist at the University of Washington. “We’re trying to raise the sights of AI, not criticize tools.”

The non-deep learning tools are often old techniques employed in new ways. At Kyndi, a Silicon Valley startup, computer scientists are writing code in Prolog, a programming language that dates to the 1970s. It was designed for the reasoning and knowledge representation side of AI, while deep learning is a turbocharged technique from the statistical side of AI known as machine learning.

Benjamin Grosof, an AI researcher for three decades, joined Kyndi in May as its chief scientist. Grosof said he was impressed by Kyndi’s work on “new ways of bringing together the two branches of AI.”

Kyndi has been able to use very little training data to automate the generation of facts, concepts and inferences, said Ryan Welsh, the chief executive.

The Kyndi system, he said, can train on 10 to 30 scientific documents of 10 to 50 pages each. Once trained, Kyndi’s software can identify concepts and not just words.

In work for three large government agencies that it declined to name, Kyndi has been asking its system to answer this typical question: Has a technology been “demonstrated in a laboratory setting”? The Kyndi program, Welsh said, can accurately infer the answer, even when that phrase does not appear in a document.

And Kyndi’s reading and scoring software is fast. A human analyst, Welsh said, might take two hours on average to read a lengthy scientific document, and perhaps read 1,000 in a year. Kyndi’s technology can read those 1,000 documents in seven hours, he said.

Kyndi serves as a tireless digital assistant, identifying the documents and passages that require human judgment. “The goal is increasing the productivity of the human analysts,” Welsh said.

Kyndi and others are betting that the time is finally right to take on some of the more daunting challenges in AI. That echoes the trajectory of deep learning, which made little progress for decades before the recent explosion of digital data and ever-faster computers fueled leaps in performance of its neural networks, digital layers loosely analogous to biological neurons.

There are other hopeful signs in the beyond-deep-learning camp. Vicarious, a startup developing robots that can quickly switch from task to task like humans, published promising research in the journal Science last fall. Its AI technology learned from relatively few examples to mimic human visual intelligence, using data 300 times more efficiently than deep learning models. The system also broke through the defenses of captchas, the squiggly letter identification tests on websites meant to foil software intruders.

Vicarious, whose investors include Elon Musk, Jeff Bezos and Mark Zuckerberg, is a prominent example of the entrepreneurial pursuit of new paths in AI.

“Deep learning has given us a glimpse of the promised land, but we need to invest in other approaches,” said Dileep George, an AI expert and co-founder of Vicarious, which is based in Union City, California.

The Pentagon’s research arm, the Defense Advanced Research Projects Agency, has proposed a program to seed university research and provide a noncommercial network for sharing ideas on technology to mimic human common-sense reasoning where deep learning falls short. If approved, the program, Machine Common Sense, would start this fall and most likely run for five years, with funding of about $60 million.

“This is a high-risk project, and the problem is bigger than any one company or research group,” said David Gunning, who managed DARPA’s personal assistant program, which produced the technology that became Apple’s Siri.
