The Daily Telegraph

This is one of the worst tech implosions of all time

Like uncovering a bogus doctor, we are finding out disturbing things when artificial intelligence is given a flawed education

- Andrew Orlowski tweets at twitter.com/andreworlowski

‘The tech is really dumb – at its core, a statistical word completion algorithm’

Few true crime stories horrify us more than those of bogus doctors. One, Zholia Alemi, who was convicted of fraud earlier this month, was described by the judge as “a most accomplished forger and fraudster [who] has no qualification that would allow her to be called, or in any way to be properly regarded as, a doctor”. Yet she had graced the NHS as a psychiatrist for more than two decades.

The most notorious, Christopher Duntsch, an American neurosurgeon, became the subject of a harrowing podcast and TV series, Dr Death. He maimed or killed patients in 33 of his 38 operations. Each time we ask: how could they get away with it for so long? And we can guess the answer without knowing the specifics: the fraudster has a supremely confident manner that calms doubts, while the doctor’s high social status very effectively rebuts criticism.

So what happens when a machine or a new technology is an imposter? We’re actually in the middle of finding out, and the similarities are spooky.

In just a few weeks, the artificial intelligence software ChatGPT, made by OpenAI, has had a sensational media impact. Given a short prompt, it creates plausible-sounding text: letters, essays, computer code and even poems that rhyme. Seeking to put one over on arch-rival Google, Microsoft rushed to add some ChatGPT smarts to its Bing web search engine. Satya Nadella, Microsoft’s chief executive, even indulged in some very public goading: “I want people to know that we made Google dance, and I think that’ll be a great day,” he said. Nadella’s taunt may be premature.

Errors in Google’s rushed-to-market AI search chatbot led to the market wiping $100bn (£84bn) from the value of its parent company, Alphabet. But those errors seem trivial compared with the performance of Microsoft’s Bing AI Chat over the past fortnight. We’re watching one of the most spectacular technology implosions of all time.

Bing Chat began to hallucinate. It assured us that we could safely eat ground glass and that four US presidents had been women. It would invent citations. It would then deny that an answer it had just given was true. All this was performed with the smoothly reassuring bedside manner of an experienced NHS consultant. Things got even worse as Bing Chat began to throw tantrums, and even threaten its users.

The real story is now being pieced together. It appears that, giddy with enthusiasm, Microsoft hastily bolted an existing chatbot called Sydney on to the OpenAI model, and used different training data. In its desire “to make Google dance”, it had surrendered all caution – the mark of a supremely confident operator, or a psychopath. Yet such confidence is an act of concealment. The technology is really exceptionally dumb – at its core, no more than a statistical word completion algorithm.

“Without understanding what’s in a picture, the AI easily makes false associations, or worse, heads up a blind alley,” economics professor Gary Smith notes. Smith was not speaking this year, however, but describing the limitations to me for The Telegraph in late 2019 – over three years ago.

Other distinguished critics were cited, including Gary Marcus and Melanie Mitchell, who had both just published cautionary books explaining AI. We also reported how OpenAI already had a reputation for showpiece stunts that went wrong.

In short, all the clues were there. This year, one of the more bullish academic forefathers of modern AI, Yann LeCun, has turned sceptical, describing the large language model approach as a dead end on the way to intelligent machines. But few wanted to know, with the sprawling new “AI ethics” community conspicuous by its absence. This is a field dominated by arts graduates, which prefers to speculate on hypothetical future problems rather than the very real ones before us – rather like some negligent professional committee that always gives a bogus doctor the benefit of the doubt before sending them back to the ward.

The other shoe has yet to drop. Artificial intelligence today is very fashionable with high-status opinion, particularly the laptop elites of Davos. Eric Schmidt, the former Google chairman, who has an outsized influence on US science policy, is urging widespread deployment across government and even the military. A policy master plan written for the Tony Blair Institute spoke of AI in Promethean terms, comparing it to “an alien form of intelligence we do not yet fully understand”. Now do you believe Blair and Lord Hague, or what your own lying eyes are telling you when Bing Chat invites you to eat glass?

This wild disconnect is what convinced a talented PhD working on AI problems to give up very lucrative work for a year to write a book, and the result, called Smart Until It’s Dumb, is the most lucid layman’s account of AI and its limitations I have found. Its author, Emmanuel Maggiori, compares the AI boom to the era when string theory dominated physics and all criticism was dismissed. That too was a dead end. “What worries me the most is this fanatical, almost religious behaviour,” he told me, which sees skilled graduates “hired for projects that go nowhere, while people have to fake and lie” about how well it went. While we can be assured that most bogus doctors are eventually uncovered, capturing the rogue bot may take longer: it has some friends in very high places.
