Saskatoon StarPhoenix

HITTING A WALL?

Hype over artificial intelligence may be cooling off

- JAMES TITCOMB

In 1964, an American computer scientist named John McCarthy set up a research centre at California’s Stanford University to explore an exciting new discipline: artificial intelligence.

McCarthy helped coin the term several years earlier, and interest in the field was growing fast. By then, the first computer programs that could beat humans at chess had been developed, and thanks to plentiful government grants at the height of the Cold War, AI researchers were making rapid progress in other areas such as algebra and language translation.

When he set up his laboratory, McCarthy told the paymasters who funded it that a fully intelligent machine could be built within a decade. Things did not pan out. Nine years after McCarthy’s promises, and after millions more had been plowed into research around the world, the U.K. government asked the British mathematician Sir James Lighthill to assess whether it was all worth it.

Lighthill’s conclusion, published in 1973, was damning.

“In no part of the field have the discoveries made so far produced the major impact that was then promised,” his report said. “Most workers in AI research and in related fields confess to a pronounced feeling of disappointment.”

Academics criticized Lighthill for his skepticism, but the report triggered a collapse in government funding in the U.K. and elsewhere. It was seen as the catalyst for what became known as the first “AI winter” — a period of disillusionment and funding shortages in the field.

More than 50 years after McCarthy’s bold predictions, technologists are once again awash with optimism about artificial intelligence. Venture capital funding for AI companies doubled in 2017 to US$12 billion, almost a 10th of the total investment, according to KPMG. In Europe alone, more than 1,000 companies have attracted venture funding since 2012, 10 times more than fields such as blockchain or virtual reality, according to the tech investor Atomico.

Giants such as Google and Microsoft are building their companies around AI. Earlier this year, Google chief executive Sundar Pichai called the technology “one of the most important things that humanity is working on,” adding: “It’s more profound than, I don’t know, electricity or fire.”

The rest of the corporate world is getting in on the act, too. An analysis of investor calls by U.S. public companies last year found the term “artificial intelligence” was mentioned 791 times in the third quarter of 2017, up from almost nothing a few years earlier.

Significant breakthroughs are promised. Driverless cars are often predicted within a decade. Rising global tensions are boosting government investment, particularly in China. Elsewhere, economists fret about widespread unemployment. Others, such as the late Stephen Hawking, feared the rise of robot weapons could eradicate humanity.

But another kind of pessimism is also gaining traction. What if, instead of being radically unprepared for the rise of the robots, we have drastically overestimated the disruption caused by the recent excitement? What if, instead of being on the cusp of one of the greatest breakthroughs in history, we are in a similar position to that of the 1970s, at the moment before the bubble bursts?

“The whole idea of making machines intelligent has been a long goal of computer scientists and, as long as we’ve been following it, AI has gone through these waves,” says Ronald Schmelzer of Cognilytica, an analyst firm focused on artificial intelligence. “A lot of the claims (from the ’60s and ’70s) sound very familiar today. It seems to be one of those recurring patterns.”

Many of the recent breakthroughs in AI have been along the same lines as the chess and language breakthroughs of the ’50s and ’60s, if far more advanced versions. Two years ago, Google’s AI subsidiary DeepMind beat the world champion at Go, an ancient Chinese board game many times more complicated than chess.

In March, researchers at Microsoft said they created the first machine that could beat humans when it came to translating Chinese to English.

The excitement about AI owes largely to two trends: the leap in number-crunching power that has been enabled by faster and more advanced processors and remote cloud computing systems, and an explosion in the amount of data available, from the billions of smartphone photos taken every day to the digitization of records.

This combination, as well as the unprecedented budgets at the disposal of Silicon Valley’s giants, has led to what researchers have long seen as the holy grail for AI: machines that learn.

The idea of computer programs that can absorb information and use it to carry out a task, rather than having to be explicitly programmed, goes back decades, but the technology has only recently caught up. And while it has proven adept at certain tasks, from superhuman prowess at video games to reliable voice recognition, some experts are becoming skeptical about machine learning’s wider potential.
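For a rough sense of what “machines that learn” means in practice, the toy Python sketch below derives simple word weights from a handful of labelled messages instead of relying on hand-written rules; the tiny dataset and scoring scheme are invented purely for illustration and are not drawn from any of the systems mentioned here.

```python
# Illustrative sketch only: a toy "learning" program in plain Python.
# Rather than hand-coding rules, it derives word counts from labelled
# examples and reuses them to label new messages. Data is made up.
from collections import Counter

examples = [
    ("win cash prize now", "spam"),
    ("cheap prize offer win", "spam"),
    ("meeting agenda for monday", "ham"),
    ("lunch on monday?", "ham"),
]

# "Training": count how often each word appears under each label.
counts = {"spam": Counter(), "ham": Counter()}
for text, label in examples:
    counts[label].update(text.split())

def classify(text):
    # Score a new message by how many of its words were seen under each label.
    scores = {
        label: sum(counter[word] for word in text.split())
        for label, counter in counts.items()
    }
    return max(scores, key=scores.get)

print(classify("win a cheap prize"))        # expected: spam
print(classify("agenda for monday lunch"))  # expected: ham
```

Real systems replace the word counts with statistical models trained on vast datasets, but the principle is the same: behaviour comes from examples, not from rules written by a programmer.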

“AI is a classic example of the technology hype curve,” says Rob Kniaz, a partner at the investment firm Hoxton Ventures.

“Three or four years ago, people said it was going to solve every problem. The hype has gone down but it’s still way overblown. In most applications it’s not going to put people out of work.”

Schmelzer says funding for AI companies is “a little bit overheated.”

“I can’t see it lasting,” he adds. “The sheer quantity of money is gigantic and in some ways ridiculous.”

Most AI skeptics point out that the breakthroughs achieved so far are in relatively narrow fields, with clearly defined structures and rules, such as games.

The rapid advancement in these areas has led to predictions that computers are ready to surpass humans at all sorts of tasks, from driving to medical diagnosis.

But transposing prowess in games to the real world is another task altogether, something that became clear with fatal consequences this year.

In March, a self-driving car being tested by Uber in Arizona failed to stop in front of Elaine Herzberg when the 49-year-old stepped into the street. She became the first person to be killed by a driverless vehicle, which was travelling at 61 km/h. The car’s systems spotted Herzberg six seconds before the crash, but failed to take action.

The incident was the most striking sign yet that the grand promises made about AI just a few years ago were detached from reality. While driverless cars were once predicted to be widely available by 2020, many experts now believe them to be decades away.

Driverless cars have not been the only setback. AI’s potential to revolutionize health care has been widely touted, and Theresa May said this year that AI would be a “new weapon” in fighting cancer.

The reality has been less promising. IBM’s Watson technology, an AI system that IBM promised would deliver major breakthroughs in diagnosing cancer, has been accused of repeatedly misdiagnosing conditions. Shortly after the Uber crash, the AI researcher Filip Piekniewski wrote that a new AI winter is “well on its way,” arguing breakthroughs in machine learning had slowed down.

Schmelzer says companies have stopped placing blind faith in AI, drawing comparisons with the dotcom bubble, when businesses demanded an internet presence even where it was unnecessary.

“It was technology for technology’s sake and there was a lot of wasted money. I think we started to see that (with AI).”

Kniaz, of Hoxton Ventures, agrees the bubble has started to deflate, saying that while companies once attracted funding merely by mentioning artificial intelligence in investor presentations, they now have to prove the technology works.

However, he says the narrow progress made in recent years still has plenty of real-world uses, even if it is a long way from matching human intelligence.

“We’re now at the point where it’s a little more sane,” Kniaz says. “It’s reaching a nice, stable point now. You’re seeing it applied to better problems.”


THE CANADIAN PRESS/FILES: Back in 2011, Jeopardy! champions Ken Jennings and Brad Rutter took on the supercomputer Watson, an artificial intelligence system from IBM.

ALEX UROSEVIC/FILES: While thinkers such as Stephen Hawking raised fears that robot weapons, such as those in the Terminator movie franchise, could eradicate humanity, others believe that bold predictions about driverless cars and other artificial intelligence are overhyped and setting unrealistic expectations for society.
