The Week

Apocalypse soon: the end of the world as we know it?

Artificial intelligence is already taking over our jobs. Will it free us, enslave us – or exterminate us entirely? The world’s leading expert, Berkeley’s Professor Stuart Russell, offers Danny Fortson a guided tour of the future


Stuart Russell has a rule. “I won’t do an interview until you agree not to put a Terminator on it,” says the renowned British computer scientist, sitting in a spare room at his home in Berkeley, California. “The media is very fond of putting a Terminator on anything to do with artificial intelligence.” The request is a tad ironic. Russell, after all, was the man behind Slaughterbots, a dystopian short film he released in 2017. It depicts swarms of autonomous mini-drones – small enough to fit in the palm of your hand and armed with a lethal explosive charge – hunting down protesters, congressmen, anyone really, and exploding in their faces. It wasn’t exactly Arnold Schwarzenegger – but he would have been proud.

Autonomous weapons are, Russell says breezily, “much more dangerous than nuclear weapons”. And they are possible today. The Swiss defence department built its very own “slaughterbot” after it saw the film, Russell says, just to see if it could. “The fact that you can launch them by the million, that’s a real problem, because it’s a weapon of mass destruction. I think most humans would agree we shouldn’t make machines that can decide to kill people.” The 57-year-old from Portsmouth does this a lot: deliver an alarming warning about the existential threat posed by artificial intelligence (AI), but through a placid smile. “We have to face the fact that we are planning to make entities that are far more powerful than humans,” he says. “How do we ensure that they never have power over us?”

There is no shortage of AI doom-mongers. Elon Musk claims we are “summoning the demon”. Stephen Hawking warned that AI could “spell the end of the human race”. Seemingly every month, a new report predicts mass unemployment as machines replace humans. The bad news? Russell essentially agrees. This is disconcerting because he quite literally wrote the book on AI. His textbook, Artificial Intelligence: A Modern Approach, is the most widely used in the field. Since he wrote it in 1995 with Peter Norvig, Google’s director of research, it has been used to train students in more than 1,000 universities. Now, the University of California, Berkeley professor is penning a new edition in which he admits that they “got it all wrong”. He adds: “We’re sort of in a bus and the bus is going fast, and no one has any plans to stop.” Where’s the bus going? “Off the cliff.” The good news, though, is that we can turn the bus around. All it entails is a fundamental overhaul, not only of how this frighteningly powerful technology is engineered, but also of how the world’s nearly eight billion people organise, value and educate themselves.

From Russell’s vantage point, we have come to a crossroads. In one direction lies “a golden age” where we are freed from drudgery by machines. The other direction is, well, darker. In his new book, Human Compatible, Russell sums it up with what he calls “the gorilla problem”. Apes, our close genetic relatives, were eventually superseded. And now? “Their species has essentially no future beyond that which we deign to allow,” he says. “We do not want to be in a similar situation vis-à-vis super-intelligent machines.” Quite.

Russell came to California in the 1980s to get a PhD after Oxford, and never left. He is an insider but with an outsider’s perspective. Talk to most computer scientists and they scoff at the idea that has him so worried: artificial general intelligence, or AGI. It’s an important distinction. Most of today’s AI involves “machine learning”. These are algorithms that crunch through vast volumes of data, draw out patterns, then use those patterns to make predictions. Today, reductions in the cost of data storage coupled with leaps in processing capability mean that algorithms finally have enough horsepower and raw data to train on. The result is a blossoming of competent tools that can also be wildly powerful. They are, however, usually designed for very limited tasks.

“Autonomous weapons are much more dangerous than nuclear weapons. We shouldn’t make machines that can decide to kill people”

Take, for example, a contest organised by US universities last year between 20 lawyers and an AI designed to read contracts. The goal was to see who was better at picking out loopholes. It was not a great day for Homo sapiens. The AI was not only more accurate – it found 94% of the offending passages, to the humans’ 85% – but also faster. The lawyers averaged 92 minutes to finish the task; the AI took 26 seconds. Ask that algorithm to do literally anything else, however, and it is utterly powerless. Such “tool AI”, Russell says, “couldn’t plan its way out of a paper bag”. This is why the industry, at least outwardly, is rather blasé about the threat, or even the possibility, of general intelligence.

There are still many breakthroughs, Russell admits, that are needed to take AI beyond narrow jobs and create truly superintelligent machines that can handle any task you throw at them. Scott Phoenix, founder of the Silicon Valley AI start-up Vicarious, explains what it might look like when (if?) it arrives: “Imagine a person who has a photographic memory and has read every document that any human has ever written. They can think for 60,000 years for every second that passes. If you have a brain like that, questions that were previously out of our reach – about the nature of the universe, how to build a fusion reactor, how to build a teleporter – are suddenly in reach.”

Fantastical, you might think. But the same was once said of nuclear fission, Russell points out.

The day after Lord Rutherford dismissed it as “moonshine” in 1933, another physicist, Leo Szilard, worked out how to do it. Twelve years later, Hiroshima and Nagasaki were levelled by atom bombs. So, how long do we have before the age of superintelligent machines? Russell reckons they will arrive “in my kid’s lifetime” – but, as he admits, he may be wrong. Trying to predict technological leaps is a mug’s game. And neither will it be a “big bang” event, where one day we wake up and Hal 9000 is running the world. Rather, the rise of the machines will happen gradually, through a steady drumbeat of advances.

Which is why we must start working – now – not just on how we overhaul AI, but society itself. We’ll cover the former first. The way algorithms work today is simple. Specify a clear, limited objective, and the machine figures out the optimal way to achieve it. It turns out this is a very bad way to build AI.

Consider social media. The content-selection algorithms at Facebook, Twitter and the rest populate your feed with posts they think you’ll find interesting, but their ultimate goal is something else entirely: revenue maximisation. The best way to do that is to get you to click on advertisements, and the best way to do that is to disproportionately promote incendiary content that runs alongside them. “These simple machine-learning algorithms are super-powerful because they interact with you for hours a day, and they can manipulate your mind, your preferences, so that you are a different person,” Russell says. And it has worked a treat. As well as stuffing pockets in Silicon Valley, those algorithms have also helped fuel “the resurgence of fascism, the dissolution of the social contract that underpins democracies around the world, and, potentially, the end of the EU and Nato. Not bad for a few lines of code.”

“We are planning to make entities that are far more powerful than humans. But how do we ensure they never have power over us?”

Our inability to see around every corner is what is wrong with AI today, Russell argues, but it’s not a new problem. He points to the myth of King Midas, who got just what he wanted – everything he touched turned to gold – but this included his wine, his food, his family. With AI, it is no different. No matter how hard we try to define objectives, there are always “unknown unknowns”. Imagine, for example, that the era of general AI has arrived, and we ask it to do the heretofore impossible: to cure cancer. Huzzah! You might think this marks the start of a golden age. Not so fast, warns Russell. “Within hours, the AI system has hypothesised millions of untested chemical compounds,” he writes. “Within weeks, it has induced tumours of different kinds in every living human so as to carry out medical trials of these compounds, this being the fastest way to find a cure. Oops.”

How about we ask it to reverse the acidification of the oceans? Also not a top result. “The machine develops a new catalyst that facilitates an incredibly rapid chemical reaction between ocean and atmosphere and restores the oceans’ pH levels. Unfortunately, a quarter of the oxygen in the atmosphere is used up in the process, leaving us to asphyxiate slowly and painfully. Oops.”

You get the idea. But fear not. Russell has come up with another approach. Instead of giving limited, specific objectives, the starting point would be more vague: “Simply define the goal as ‘Be helpful to humans’,” he says. The path to doing so is, obviously, less clear, so the AI would be required to suss out how to do it by constantly asking questions and observing our behaviour. That subtle shift, Russell says, would mean there would be no such thing as killer AI, because its reason for being would be just to serve us.

It sounds implausible, I argue. If AI is so vastly superior to us, can we really expect it to continue happily to work for us? Remember, we’re the gorillas in this scenario. Russell demurs. Assuming that machines will act as we have towards lesser species is, apparently, a leap only a tiny human mind would make. “We have absolutely no idea what consciousness is, or how it functions,” Russell says. “And no one is doing any research on how to make conscious machines, at least none that makes any sense to me. If you gave me a trillion dollars to build a conscious machine, I’d just give it back because I’d have absolutely no idea where to start.”

Of all the scary things that AI heralds, an end to work as we know it is perhaps the most popular concern. Most agree that smart machines are quickening their bloodless march across not just blue-collar jobs, but white-collar areas such as transport, law and medicine too. Accountants PwC predicted recently that nearly a third of British jobs could be automated away within 15 years. Russell reckons that may be underselling it. “If you just continue the status quo, the likely result is that most people would have no economic function,” he says. “We have to engineer a vision of a desirable future where machines are doing most of the work that we currently call work.” Silicon Valley is obsessed with the idea, known as universal basic income (UBI), of a monthly stipend to every person over 18, to help cover the basics of life in a world without work. Elon Musk goes further, of course. His vision is to implant chips into our skulls, jacking us directly into the matrix to “achieve a sort of symbiosis with AI”.

So, what is Russell’s plan to save the human race? He holds up his iPhone. “This represents $1trn of research and development,” he says. “How much have we put into how to make people happy? The fraction going to understanding the mind, into what makes a fulfilling life, which is after all what we really want, has been very small.” He has a point. In a world where work as we know it goes away, where creations far superior to us do all of life’s heavy lifting, what does one do? From where does one derive self-worth? Satisfaction? Money? Russell is calling for a new discipline: happiness engineering. “We have to learn to be better humans,” he says. “People aren’t going to have the wherewithal to really have a high-value occupation if we don’t do that research, and if we don’t then create the education systems around it – the training, the professions, the credentials. If we started now, it would take decades, and we aren’t starting. So…” He trails off.

Just before we met, Russell had been on a call, corralling a group of economists, researchers and science-fiction writers. Their goal was to come up with better ideas of how to cope with the world that is barrelling towards us. “Economists are pretty pessimistic, but economics is not really a synthetic discipline, in the sense that it doesn’t invent new economies on a regular basis. Whereas sci-fi writers, that’s kind of what they do,” he says. “I’m hoping that by putting them together, the economists can bring realism and the writers can imagine ways things could be different.”

A longer version of this article appeared in The Sunday Times Magazine. © The Sunday Times/News Licensing.

Picture caption: Artificial general intelligence: can it crack the nature of the universe?

Picture caption: Russell: tell AI “Be helpful to humans”
