The Daily Telegraph

The artificial intelligence hype is getting out of hand

Many want to believe in its magic, but AI is brittle and, despite becoming bigger, it is not much smarter

- ANDREW ORLOWSKI Andrew Orlowski is on Twitter @andreworlowski

I hope everyone is enjoying the latest breakthrough in artificial intelligence as much as I am. In one of the latest AI developments, a new computer programme – Dall-e 2 – generates images from a text prompt. Give it the phrase “Club Penguin Bin Laden”, and it will go off and draw Osama as a cartoon penguin. For some, this was more than a bit of fun: it was further evidence that we shall soon be ruled by machines.

Sam Altman, chief executive of the now for-profit OpenAI company, which provides the model that underpins Dall-e, suggested that artificial general intelligence (AGI) was close at hand. So too did Elon Musk, who co-founded Altman’s venture. Musk even gave a year for when this would happen: 2029.

Yet when we look more closely, we see that Dall-e really isn’t very clever at all. It’s a crude collage maker, which only works if the instructions are simple and clear, such as “Easter Island Statue giving a TED Talk”. It struggles with more subtle prompts and fails to render everyday objects: fingers, for example, are drawn as grotesque tubers, and it can’t draw a hexagon.

Dall-e is actually a lovely example of what psychologists call priming: because we’re expecting to see a penguin Bin Laden, that’s what we shall see – even if it looks like neither him nor a penguin.

“Impressive at first glance. Less impressive at second. Often, an utterly pointless exercise at the third,” is how Filip Piekniewski, a scientist at Accel Robotics, describes such claims, and Dall-e very much conforms to this general rule.

Today’s AI hyperbole has got completely out of hand, and it would be careless not to contrast the absurdity of the claims with reality, for the two are now seriously diverging. Three years ago, Google chief executive Sundar Pichai told us that AI would be “more profound than fire or electricity”. However, driverless cars are further away than ever and AI has yet to replace a single radiologist.

There have been some small improvements to software processes, such as the wonderful way that old movie footage can be brought back to life by being upscaled to 4K resolution and 60 frames per second. Your smartphone camera now takes slightly better photos than it did five years ago. But as the years go by, the confident predictions that vast swathes of white-collar jobs in finance, media and law would disappear look like a fantasy.

Any economist who confidently extrapolates profound structural economic changes – of the sort of magnitude that affects GDP – from AI ventures such as Dall-e should keep those shower thoughts to themselves. This wild extrapolation was given a name by the philosopher Hubert Dreyfus, who brilliantly debunked the first great AI hype of the 1960s. He called it the “first step fallacy”.

His brother, Stuart, a true AI pioneer, explained it like this: “It was like claiming that the first monkey that climbed a tree was making progress towards landing on the Moon.”

Today’s misleadingly named “deep learning” is simply a brute-force statistical approximation, made possible by computers being able to crunch far more data than they once could, in order to find statistical regularities or patterns.

AI has become good at mimicry and pastiche, but it has no idea what it is drawing or saying. It’s brittle and breaks easily. And over the past decade it has got bigger but not much smarter, meaning the fundamental problems remain unsolved.

Earlier this year, the neuroscientist, entrepreneur and serial critic of AI, Gary Marcus, had had enough. Taking Musk up on his 2029 prediction, Marcus challenged the founder of Tesla to a bet. By 2029, he posited, AI models like GPT – which uses deep learning to produce human-like text – should be able to pass five tests. For example, they should be able to read a book and reliably answer questions about its plot, characters and their motivations.

A foundation agreed to host the wager, and the stake rose to $500,000 (£409,000). Musk didn’t take up the bet. For his pains, Marcus has found himself labelled what the Scientologists call a “suppressive”. This is not a sector that responds well to criticism: when GPT was launched, Marcus and similarly sceptical researchers were promised access to the system. He never got it.

“We need much tighter regulation around AI and even claims about AI,” Marcus told me last week. But that’s only half the picture.

I think the reason we’re so easily fooled by the output of AI models is that, like Agent Mulder in The X-Files, we want to believe. The Google engineer who became convinced his chatbot had developed a soul was one such example, but it is journalists who seem to want to believe in magic more than anyone.

The Economist devoted an extensive 4,000-word feature last week to the claim that “huge foundation models are turbo-charging AI progress”, but ensured the magic spell wasn’t broken by quoting only the faithful, and not critics like Marcus.

In addition, a lot of people are doing rather well as things are – waffling about a hypothetical future that may never arrive. Quangos abound: the UK’s research funding body, for example, recently threw £3.5m of taxpayers’ money at a programme called Enabling a Responsible AI Ecosystem.

It doesn’t pay to say the emperor has no clothes: the courtiers might be out of a job.
