The Guardian Australia

Machine-learning systems are problematic. That’s why tech bosses call them ‘AI’

- John Naughton

One of the most useful texts for anyone covering the tech industry is George Orwell’s celebrated essay, Politics and the English Language. Orwell’s focus in the essay was on political use of the language to, as he put it, “make lies sound truthful and murder respectable and to give an appearance of solidity to pure wind”. But the analysis can also be applied to the ways in which contemporary corporations bend the language to distract attention from the sordid realities of what they are up to.

The tech industry has been particularly adept at this kind of linguistic engineering. “Sharing”, for example, is clicking on a link to leave a data trail that can be used to refine the profile the company maintains about you. You give your “consent” to a one-sided proposition: agree to these terms or get lost. Content is “moderated”, not censored. Advertisers “reach out” to you with unsolicited messages. Employees who are fired are “let go”. Defective products are “recalled”. And so on.

At the moment, the most pernicious euphemism in the dictionary of doublespeak is AI, which over the last two or three years has become ubiquitous. In origin, it’s an abbreviation for artificial intelligence, defined by the OED as “the capacity of computers or other machines to exhibit or simulate intelligent behaviour; the field of study concerned with this”. An Ngram tool (which shows patterns of word usage) reveals that until the 1960s AI and artificial intelligence were more or less synonymous, but that thereafter they diverged and now AI is rampant in the tech industry, mass media and academia.

Now why might that be? No doubt laziness has something to do with it; after all, two letters are typographically easier than 22. But that’s a rationalisation, not an explanation. If you look at it through an Orwellian lens you have to ask: what kind of work is this linguistic compression doing? And for whom? And that’s where things get interesting.

As a topic and a concept, intelligence is endlessly fascinating to us humans. We have been arguing about it for centuries – what it is, how to measure it, who has it (and who hasn’t) and so on. And ever since Alan Turing suggested that machines might be capable of thinking, interest in artificial intelligence has grown and is now at fever pitch with speculation about the prospect of super-intelligent machines – sometimes known as AGI (for artificial general intelligence).

All of which is interesting but has little to do with what the tech industry calls AI, which is its name for machine learning, an arcane and carbon-intensive technology that is sometimes good at solving complex but very well-defined problems. For example, machine-learning systems can play world-class Go, predict the way protein molecules will fold and do high-speed analysis of retinal scans to identify cases that require further examination by a human specialist.

All good stuff, but the reason the tech industry is obsessed by the technology is that it enables it to build machines that learn from the behaviour of internet users to predict what they might do next and, in particular, what they are disposed to like, value and might want to buy. This is why tech bosses boast about having “AI everywhere” in their products and services. And it’s why whenever Mark Zuckerberg and co are attacked for their incapacity to keep toxic content off their platforms, they invariably respond that AI will fix the problem real soon now.

But here’s the thing: the industry is now addicted to a technology that has major technical and societal downsides. CO2 emissions from training large machine-learning systems are huge, for example. They are too fragile and error-prone to be relied upon in safety-critical applications, such as autonomous vehicles. They incorporate racial, gender and ethnic biases (partly because they have imbibed the biases implicit in the data on which they were trained). And they are irredeemably opaque – in the sense that even their creators are often unable to explain how their machines arrive at classifications or predictions – and therefore don’t meet democratic requirements of accountability. And that’s just for starters.

So how does the industry address the sordid reality that it’s bet the ranch on a powerful but problematic technology? Answer: by avoiding calling it by its real name and instead wrapping it in a name that implies that, somehow, it’s all part of a bigger, grander romantic project – the quest for artificial intelligence. As Orwell might put it, it’s the industry’s way of giving “an appearance of solidity to pure wind” while getting on with the real business of making fortunes.

What I’ve been reading

Throw them a Bono A fascinating excerpt from the U2 singer’s autobiography, published in the New Yorker.

Twitter ye not? Welcome to hell, Elon is a nice brisk tutorial for the world’s latest media mogul on the Verge website.

A maverick mind Roger Highfield’s lovely profile on the Aeon site of the late great climate scientist James Lovelock.

Are they watching us? A scene from the 1956 film version of George Orwell’s Nineteen Eighty-Four. Photograph: Allstar
