The Guardian Australia

Worried about super-intelligent machines? They are already here

- John Naughton

In the first of his four (stunning) Reith lectures on living with artificial intelligence, Prof Stuart Russell, of the University of California at Berkeley, began with an excerpt from a paper written by Alan Turing in 1950. Its title was Computing Machinery and Intelligence and in it Turing introduced many of the core ideas of what became the academic discipline of artificial intelligence (AI), including the sensation du jour of our own time, so-called machine learning.

From this amazing text, Russell pulled one dramatic quote: “Once the machine thinking method had started, it would not take long to outstrip our feeble powers. At some stage therefore we should have to expect the machines to take control.” This thought was more forcefully articulated by IJ Good, one of Turing’s colleagues at Bletchley Park: “The first ultra-intelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”

Russell was an inspired choice to lecture on AI, because he is simultaneously a world leader in the field (co-author, with Peter Norvig, of its canonical textbook, Artificial Intelligence: A Modern Approach, for example) and someone who believes that the current approach to building “intelligent” machines is profoundly dangerous. This is because he regards the field’s prevailing concept of intelligence – the extent to which actions can be expected to achieve given objectives – as fatally flawed.

AI researchers build machines, give them certain specific objectives and judge them to be more or less intelligent by their success in achieving those objectives. This is probably OK in the laboratory. But, says Russell, “when we start moving out of the lab and into the real world, we find that we are unable to specify these objectives completely and correctly. In fact, defining the other objectives of self-driving cars, such as how to balance speed, passenger safety, sheep safety, legality, comfort, politeness, has turned out to be extraordinarily difficult.”

That’s putting it politely, but it doesn’t seem to bother the giant tech corporations that are driving the development of increasingly capable, remorseless, single-minded machines and their ubiquitous installation at critical points in human society.

This is the dystopian nightmare that Russell fears if his discipline continues on its current path and succeeds in creating super-intelligent machines. It’s the scenario implicit in the philosopher Nick Bostrom’s “paperclip apocalypse” thought experiment and entertainingly simulated in the Universal Paperclips computer game. It is also, of course, heartily derided as implausible and alarmist by both the tech industry and AI researchers. One expert in the field famously joked that he worried about super-intelligent machines in the same way that he fretted about overpopulation on Mars.

But for anyone who thinks that living in a world dominated by super-intelligent machines is a “not in my lifetime” prospect, here’s a salutary thought: we already live in such a world! The AIs in question are called corporations. They are definitely super-intelligent, in that the collective IQ of the humans they employ dwarfs that of ordinary people and, indeed, often of governments. They have immense wealth and resources. Their lifespans greatly exceed that of mere humans. And they exist to achieve one overriding objective: to increase and thereby maximise shareholder value. In order to achieve that they will relentlessly do whatever it takes, regardless of ethical considerations, collateral damage to society, democracy or the planet.

One such super-intelligent machine is called Facebook. And here to illustrate that last point is an unambiguous statement of its overriding objective written by one of its most senior executives, Andrew Bosworth, on 18 June 2016: “We connect people. Period. That’s why all the work we do in growth is justified. All the questionable contact importing practices. All the subtle language that helps people stay searchable by friends. All of the work we have to do to bring more communication in. The work we will likely have to do in China some day. All of it.”

As William Gibson famously observed, the future’s already here – it’s just not evenly distributed.

What I’ve been reading

Pick a side: There Is No “Them” is an entertaining online rant by Antonio García Martínez against the “othering” of west coast tech billionaires by US east coast elites.

Vote of confidence? Can Big Tech Serve Democracy? is a terrific review essay in the Boston Review by Henry Farrell and Glen Weyl about technology and the fate of democracy.

Following the rules: What Parking Tickets Teach Us About Corruption is a lovely post by Tim Harford on his blog.


Hilary Swank in AI thriller I Am Mother (2019). Photograph: Netflix
