The Guardian (USA)

Misplaced fears of an ‘evil’ ChatGPT obscure the real harm being done

- John Naughton

On 14 February, Kevin Roose, the New York Times tech columnist, had a two-hour conversation with Bing, Microsoft’s ChatGPT-enhanced search engine. He emerged from the experience an apparently changed man, because the chatbot had told him, among other things, that it would like to be human, that it harboured destructive desires and was in love with him.

The transcript of the conversation, together with Roose’s appearance on the paper’s The Daily podcast, immediately ratcheted up the moral panic already raging about the implications of large language models (LLMs) such as GPT-3.5 (which apparently underpins Bing) and other “generative AI” tools that are now loose in the world. These are variously seen as chronically untrustworthy artefacts, as examples of technology that is out of control or as precursors of so-called artificial general intelligence (AGI) – ie human-level intelligence – and therefore posing an existential threat to humanity.

Accompanying this hysteria is a new gold rush, as venture capitalists and other investors strive to get in on the action. It seems that all that money is burning holes in very deep pockets. Mercifully, this has its comical sides. It suggests, for example, that chatbots and LLMs have replaced crypto and web 3.0 as the next big thing, which in turn confirms that the tech industry collectively has the attention span of a newt.

The strangest thing of all, though, is that the pandemonium has been sparked by what one of the field’s leading researchers has called “stochastic parrots” – by which she means that LLM-powered chatbots are machines that continuously predict which word is statistically most likely to follow the previous one. And this is not black magic, but a computational process that is well understood and has been clearly described by Prof Murray Shanahan and elegantly dissected by the computer scientist Stephen Wolfram.
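For readers who want to see the idea concretely, a minimal sketch in Python of that predict-the-next-word loop might look like the following. It uses a toy bigram word-count model with an invented corpus, not a neural network as real LLMs do, but the generation loop is the same shape.

    from collections import Counter, defaultdict

    # Toy corpus; count how often each word follows each other word.
    corpus = ("the cat sat on the mat and the cat slept on the mat "
              "the dog sat on the rug").split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def generate(seed, length=8):
        """Greedily extend `seed` with the statistically most likely next word."""
        words = [seed]
        for _ in range(length):
            candidates = follows.get(words[-1])
            if not candidates:
                break
            words.append(candidates.most_common(1)[0][0])
        return " ".join(words)

    print(generate("the"))  # "the cat sat on the cat sat on the" – fluent-looking, mindless

Real LLMs condition on the whole preceding context with billions of learned parameters rather than raw word counts, but the loop is the same: score the candidates, pick a likely word, append it, repeat. Nothing in it requires, or produces, understanding.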

How can we make sense of all this craziness? A good place to start is to wean people off their incurable desire to interpret machines in anthropocentric ways. Ever since Joe Weizenbaum’s Eliza, humans interacting with chatbots seem to want to humanise the computer. This was absurd with Eliza – which was simply running a script written by its creator – so it’s perhaps understandable that humans now interacting with ChatGPT – which can apparently respond intelligently to human input – should fall into the same trap. But it’s still daft.

The persistent rebadging of LLMs as “AI” doesn’t help, either. These machines are certainly artificial, but to regard them as “intelligent” seems to me to require a pretty impoverished conception of intelligence. Some observers, though, such as the philosopher Benjamin Bratton and the computer scientist Blaise Agüera y Arcas, are less dismissive. “It is possible,” they concede, “that these kinds of AI are ‘intelligent’ – and even ‘conscious’ in some way – depending on how those terms are defined” but “neither of these terms can be very useful if they are defined in strongly anthropocentric ways”. They argue that we should distinguish sentience from intelligence and consciousness and that “the real lesson for philosophy of AI is that reality has outpaced the available language to parse what is already at hand. A more precise vocabulary is essential.”

It is. For the time being, though, we’re stuck with the hysteria. A year is an awfully long time in this industry. Only two years ago, remember, the next big things were going to be crypto/web 3.0 and quantum computing. The former has collapsed under the weight of its own absurdity, while the latter is, like nuclear fusion, still just over the horizon.

With chatbots and LLMs, the most likely outcome is that they will eventually be viewed as a significant augmentation of human capabilities (spreadsheets on steroids, as one cynical colleague put it). If that does happen, then the main beneficiaries (as in all previous gold rushes) will be the providers of the picks and shovels, which in this case are the cloud-computing resources needed by LLM technology and owned by huge corporations.

Given that, isn’t it interesting that the one thing nobody talks about at the moment is the environmental impact of the vast amount of computing needed to train and operate LLMs? A world that is dependent on them might be good for business but it would certainly be bad for the planet. Maybe that’s what Sam Altman, the CEO of OpenAI, the outfit that created ChatGPT, had in mind when he observed that “AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies”.

What I’ve been reading

Profiles of pain
Social Media Is a Major Cause of the Mental Illness Epidemic in Teen Girls is an impressive survey by the psychologist Jonathan Haidt.

Crowd-pleaser
What the Poet, Playboy and Prophet of Bubbles Can Still Teach us is a lovely essay by Tim Harford on the madness of crowds, among other things.

Tech royalty
What Mary, Queen of Scots, Can Teach Today’s Computer Security Geeks is an intriguing post by Rupert Goodwins on the Register.


The media has covered ChatGPT with varying degrees of sobriety. Photograph: NYT/The Star
