Daily Dispatch

BOFFIN’S TAKE

Artificial intelligence gets mankind thinking


In 1964, an American computer scientist named John McCarthy set up a research centre at California’s Stanford University to explore an exciting new discipline: artificial intelligence.

McCarthy had helped coin the term several years earlier, and interest in the field was growing fast. By then, the first computer programmes that could beat humans at chess had been developed, and thanks to plentiful government grants at the height of the Cold War, AI researchers were making rapid progress in other areas such as algebra and language translation.

When he set up his laboratory, McCarthy told the paymasters who had funded it that a fully intelligent machine could be built within a decade. Things did not pan out that way. Nine years after McCarthy’s promises, and after millions more had been ploughed into research around the world, the UK government asked the British mathematician Sir James Lighthill to assess whether it was all worth it.

Lighthill’s conclusion, published in 1973, was damning. “In no part of the field have the discoveries made so far produced the major impact that was then promised,” his report said. “Most workers in AI research and in related fields confess to a pronounced feeling of disappointment.” Academics criticised Lighthill for his scepticism, but the report triggered a collapse in government funding. It was seen as the catalyst for what became known as the first “AI winter”, a period of disillusionment and funding shortages in the field.

More than 50 years after McCarthy’s bold predictions, technologists are once again awash with optimism about artificial intelligence. Venture capital funding for AI companies doubled in 2017 to $12bn, almost a tenth of all venture investment, according to KPMG. In Europe alone, more than 1,000 AI companies have attracted venture funding since 2012, 10 times more than fields such as blockchain or virtual reality, according to the tech investor Atomico.

Giants such as Google and Microsoft are building their companies around AI. Earlier this year, Google chief executive Sundar Pichai called the technology “one of the most important things that humanity is working on”, adding: “It’s more profound than, I don’t know, electricity or fire.”

The rest of the corporate world is getting in on the act too. An analysis of investor calls by US public companies last year found that the term “artificial intelligence” was mentioned 791 times in the third quarter of 2017, up from almost nothing a few years earlier.

Significant breakthroughs are promised. Driverless cars are often predicted within a decade. Rising global tensions are boosting government investment, particularly in China. Elsewhere, economists fret about widespread unemployment. Others, such as the late Stephen Hawking, have feared that the rise of robot weapons could eradicate humanity.

But another kind of pessimism is also gaining traction. What if, instead of being radically unprepared for the rise of the robots, we have drastically overestimated the disruption behind all the recent excitement? What if, instead of being on the cusp of one of the greatest breakthroughs in history, we are in a similar position to that of the Seventies, at the moment before the bubble bursts?

“The whole idea of making machines intelligent has been a long goal of computer scientists and, as long as we’ve been following it, AI has gone through these waves,” says Ronald Schmelzer of Cognilytica, an analyst firm focused on artificial intelligence. “It seems to be one of those recurring patterns.”

Many of the recent breakthroughs in AI have been along the same lines as the chess and language breakthroughs of the Fifties and Sixties, albeit in far more advanced form. Two years ago, Google’s AI subsidiary DeepMind beat the world champion at Go, an ancient Chinese board game many times more complicated than chess. In March, researchers at Microsoft said they had created the first machine that could match humans at translating news from Chinese to English.

The current excitement about AI owes much to two trends: the leap in number-crunching power enabled by faster and more advanced processors and remote cloud computing systems, and an explosion in the amount of data available, from the billions of smartphone photos taken every day to the digitisation of records.

This combination, along with the unprecedented budgets at the disposal of Silicon Valley’s giants, has delivered what researchers have long seen as the holy grail for AI: machines that learn. The idea of computer programmes that can absorb information and use it to carry out a task, instead of having to be explicitly programmed, goes back decades; the technology has only recently caught up. But while it has proven adept at certain tasks, from superhuman prowess at video games to reliable voice recognition, some experts are becoming sceptical about machine learning’s wider potential.

“AI is a classic example of the technology hype curve,” says Rob Kniaz, a partner at the investment firm Hoxton Ventures. “Three or four years ago people said it was going to solve every problem. The hype has gone down but it’s still way overblown. In most applications it’s not going to put people out of work.”

Schmelzer says that funding for AI companies is “a little bit overheated”. “I can’t see it lasting,” he adds. “The sheer quantity of money is gigantic and in some ways ridiculous.”

Most AI sceptics point out that the breakthroughs achieved so far have come in relatively narrow fields with clearly defined structures and rules, such as games. Rapid advancement in these areas has led to predictions that computers are ready to surpass humans at all sorts of tasks, from driving to medical diagnosis.

But transposing prowess in games to the real world is another task altogether, something that became clear with fatal consequences earlier this year. In March, a self-driving car being tested by Uber in Arizona failed to stop when Elaine Herzberg, 49, stepped out into the street. She became the first pedestrian to be killed by a driverless vehicle, which was travelling at 38mph. The car’s systems had spotted Herzberg six seconds before the crash but had failed to take action. The incident was the most striking example yet that the grand promises made about AI just a few years ago were detached from reality. While driverless cars were once predicted to be widely available by 2020, many experts now believe they are decades away.

Driverless cars have not been the only setback. AI’s potential to revolutionise healthcare has been widely touted, and Theresa May said this year that AI would be a “new weapon” in fighting cancer.

The reality, so far at least, has been less promising.

The Sunday Telegraph

TOO MUCH SWAY: The jury is still out on whether the new discipline of artificial intelligence is good or bad for mankind. Some believe that robots may be equipped to match man in every sphere of mental and psychological development, and may even surpass him, much to the cost of future life.
