The Guardian Australia

Max Tegmark: ‘Machines taking control doesn’t have to be a bad thing’

- Andrew Anthony

A few years ago the cosmologist Max Tegmark found himself weeping outside the Science Museum in South Kensington. He’d just visited an exhibition that represented the growth in human knowledge, everything from Charles Babbage’s difference engine to a replica of Apollo 11. What moved him to tears wasn’t the spectacle of these iconic technologies but an epiphany they prompted.

“It hit me like a brick,” he recalls, “that every time we understood how something in nature worked, some aspect of ourselves, we made it obsolete. Once we understood how muscles worked we built much better muscles in the form of machines, and maybe when we understand how our brains work we’ll build much better brains and become utterly obsolete.”

Tegmark’s melancholy insight was not some idle hypothesis, but instead an intellectual challenge to himself at the dawn of the age of artificial intelligence. What will become of humanity, he was moved to ask, if we manage to create an intelligence that outstrips our own?

Of course, this is a question that has been asked repeatedly in science fiction. However, it takes on a different kind of meaning and urgency as AI becomes science fact. And Tegmark decided it was time to examine the issues surrounding AI and the possibility, in particular, that it might lead to a so-called superintelligence.

With his friend the Skype cofounder Jaan Tallinn, and funding from the tech billionaire Elon Musk, he set up the Future of Life Institute, which researches the existential risks facing humanity. It’s located in Cambridge, Massachusetts, where Tegmark is a professor at MIT, and it’s not unlike the Future of Humanity Institute in Oxford, the body set up by his fellow Swede, the philosopher Nick Bostrom.

Tegmark also set about writing a book, which he has just published, entitled Life 3.0: Being Human in an Age of Artificial Intelligence. Having previously written about such abstruse and highly theoretical concepts as the multiverse, Tegmark is not a man daunted by the prospect of informed but imaginative speculation.

One of the difficulties in getting a clear perspective on AI is that it is mired in myth and misunderstanding. Tegmark has tried to address this image problem by carefully unpacking the ideas involved in or associated with AI – intelligence, memory, learning, consciousness – and then explaining them in demystifying fashion.

First, though, Tegmark, speaking on the phone from Boston, is eager to make it clear what AI is not about.

“I think Hollywood has got us worrying about the wrong thing,” he says. “This fear of machines turning conscious and evil is a red herring. The real worry with advanced AI is not malevolence but incompetence. If you have superintelligent AI, then by definition it’s very good at attaining its goals, but we need to be sure those goals are aligned with ours. I don’t hate ants, but if you put me in charge of building a green-energy hydroelectric plant in an anthill area, too bad for the ants. We don’t want to put ourselves in the position of those ants.”

Life 3.0 is very far from a jeremiad against AI. In fact it’s much more a celebration of the potential of superintelligence. But what is superintelligence? Indeed, what is intelligence? Tegmark defines it as the “ability to accomplish complex goals”. Therefore computers qualify as intelligent. However, their intelligence is narrow.

At the moment, computers are able to process information in specific areas that go far beyond human capacity. For example, the best chess player in the world stands no chance against a modern computer program. But that program would be useless against a child in a game of noughts and crosses. Humans, even the very young, possess a general intelligence across a broad range of abilities, whereas, for all their processing power, computers are confined to prescribed tasks.

So computers are only as intelligent as we allow them to be, as we program them to be. But as we move into the AI era, that is beginning to change. There are early examples at places such as Google’s AI subsidiary, DeepMind, of computers self-learning, adapting through trial and error. So far this facility has only been demonstrated in the realm of video games and the board game Go, but presumably it will spread into other domains. And if it spreads enough it’s likely to have a profound effect on how we think about ourselves, about life and many other fundamental issues.
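DeepMind’s actual systems pair deep neural networks with reinforcement learning, but the bare trial-and-error idea can be sketched in a few lines of Python. The toy below is an illustration only, not DeepMind’s method, and the payoff numbers are invented: an agent that starts knowing nothing about its three possible actions learns which one pays best purely from the rewards it receives.

    import random

    # Toy "bandit" agent that learns by trial and error alone. The true
    # payoff probabilities are hidden from the agent; it must discover
    # them by acting and observing rewards.
    true_payoffs = {"a": 0.2, "b": 0.5, "c": 0.8}
    estimates = {action: 0.0 for action in true_payoffs}
    counts = {action: 0 for action in true_payoffs}
    epsilon = 0.1  # fraction of the time spent exploring at random

    for step in range(10_000):
        if random.random() < epsilon:
            action = random.choice(list(estimates))     # explore
        else:
            action = max(estimates, key=estimates.get)  # exploit best guess
        reward = 1.0 if random.random() < true_payoffs[action] else 0.0
        counts[action] += 1
        # Nudge the running average reward for this action towards the
        # reward just observed.
        estimates[action] += (reward - estimates[action]) / counts[action]

    print(estimates)  # the estimate for "c" should approach 0.8

Deep reinforcement learning replaces the little table of estimates with a neural network, which is what lets the same recipe scale from a toy like this to Atari games and Go.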

Tegmark sets out to examine these questions by creating a defining context, a grid of developmental stages. He starts out by going back to the most primitive forms of life, such as bacteria, which he calls Life 1.0. This is the simple biological stage in which life is really only about replication, and adaptation is possible only through evolution.

Life 2.0, or the cultural stage, is where humans are: able to learn, adapt to changing environments, and intentionally change those environments. However, we can’t yet change our physical selves, our biological inheritance. Tegmark describes this situation as one of hardware and software. We design our own software – our ability to “walk, read, write, calculate, sing and tell jokes” – but our biological hardware (the nature of our brains and bodies) is subject to evolution and necessarily restricted.

The third stage, Life 3.0, is technological, in which post-humans can redesign not only their software but their hardware too. Life, in this form, Tegmark writes, is “master of its own destiny, finally fully free from its evolutionary shackles”.

This new intelligence would be immortal and able to fan out across the universe. In other words, it would be life, Jim, but not as we know it. But would it be life or something else? It’s fair to say that Tegmark, a physicist by training, is not a biological sentimentalist. He is a materialist who views the world and the universe beyond as being made up of varying arrangements of particles that enable differing levels of activity. He draws no meaningful or moral distinction between a biological, mortal intelligence and that of an intelligent, self-perpetuating machine.

Tegmark describes a future of boundless possibility for Life 3.0, and at times his writing borders on the fantastic, even triumphalist; but then he is a theorist, attempting to envisage what for most of us is either unimaginable or unpalatable.

There is, though, a logic to his projections which even his detractors would allow, although they may argue over the timescale. Put simply, we are in the early phase of AI – self-driving cars, smart-home control units and other automata. But if trends continue apace, then it’s not unreasonable to assume that at some point – 30 years’ time, 50 years, 200 years? – computers will reach a general intelligence equivalent in many ways to that of humans.

And once computers reach this stage their improvement will increase rapidly because they will bring ever more processing capacity to working out how to increase their processing capacity. This is the argument that Bostrom laid out in his 2014 book Superintelligence, and the result of this massive expansion in intelligence – or the ability to accomplish complex goals – is indeed superintelligence, a singularity that we can only guess at.
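The compounding at the heart of that argument is easy to see with a toy calculation (an illustration of the logic only, not a forecast; the numbers are arbitrary): compare a system improved by a fixed amount of outside effort each year with one whose rate of improvement scales with what it can already do.

    # Toy comparison, with arbitrary numbers: steady external progress
    # versus self-improvement that compounds.
    capability_fixed = 1.0      # gains a constant increment per step
    capability_recursive = 1.0  # gains in proportion to what it already has

    for year in range(1, 51):
        capability_fixed += 0.1      # linear growth
        capability_recursive *= 1.1  # exponential growth
        if year % 10 == 0:
            print(year, round(capability_fixed, 1),
                  round(capability_recursive, 1))

    # After 50 steps the fixed improver reaches 6.0, while the
    # self-improver has passed 117, and the gap only keeps widening.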

Superintelligence, however, is not an inevitability. There are many in the field who believe that computers will never match human intelligence, or that if they do, humans themselves will have learned to adapt their own biology by then. But if it’s a possibility, then it’s one Tegmark believes we urgently need to think seriously about.

“When we’re in a situation where something truly dramatic might happen, within decades, to me that’s a really good time to start preparing so that it becomes a force for good. It would have been nice if we’d prepared more for climate change 30 years ago.”

Like Bostrom, Tegmark argues that the development of AI is an even more pressing concern than climate change. Yet if we’re looking at creating an intelligence that we can’t possibly understand, how much will preparation affect what takes place on the other side of the singularity? How can we attempt to confine an intelligence that is beyond our imagining?

Tegmark acknowledges that this is a question no one can answer at the moment, but he argues that there are many other tasks we should prioritise first.

“Before we worry about long-term challenges of superintelligence, there are some very short-term things we need to address. Let’s not make perfect the enemy of good. Everyone agrees that never under any circumstances do we want airplanes to fly into mountains or buildings. When Andreas Lubitz got depressed, he told his autopilot to go down to 100 metres and the computer said OK! The computer was completely clueless about human goals, even though we have the technology today to build airplanes that, whenever the pilot tries to fly into something, go into safe mode, lock the cockpit and land at the nearest airport. This kind of kindergarten ethic we should start putting in our machines today.”
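What such a kindergarten ethic might look like is simple enough to sketch in code. The fragment below is purely hypothetical: the function names and the 300-metre clearance figure are invented, and real avionics work nothing like this. It simply shows the shape of the check Tegmark describes, an autopilot that tests a commanded altitude against terrain data before obeying.

    TERRAIN_CLEARANCE_M = 300  # invented minimum safe clearance

    def terrain_height_m(position):
        """Stand-in for a lookup in an onboard terrain database."""
        return 2500  # e.g. mountains below the current position

    def respond_to_altitude_command(position, requested_altitude_m):
        # Refuse any instruction that would put the aircraft below a
        # safe height above the ground, no matter who issues it.
        floor = terrain_height_m(position) + TERRAIN_CLEARANCE_M
        if requested_altitude_m < floor:
            return "SAFE MODE: ignore command, divert to nearest airport"
        return "accept command"

    print(respond_to_altitude_command("alps", 100))  # -> SAFE MODE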

But before that, there’s even more pressing work to be done, Tegmark says. “How do we transform today’s buggy and hackable computers into robust AI systems that we really trust? This is hugely important. I feel that we as a society have been way too flippant about this. And world governments should include this as a major part of computer science research.”

Preventing the rise of a superintelligence by abandoning research in artificial intelligence is not, he believes, a credible approach. “Every single way that 2017 is better than the stone age is because of technology. And technology is happening. Nobody here is talking about stopping technology. Asking if you’re for or against AI is as ridiculous as asking if you’re for or against fire. We all love fire for keeping our homes warm and we all want to prevent arson.”

Preventing arson, in this case, is a job that’s already upon us. As Tegmark notes, we’re on the cusp of starting an arms race in lethal autonomous weapons. Vladimir Putin said just recently that whoever mastered AI would become the “ruler of the world”. In November there is a UN meeting to look at the viability of an international treaty to ban these weapons in much the same way that biological and chemical weapons have been banned. “The AI community support this very strongly,” says Tegmark. In terms of technology, there’s very little difference, he says, between “an autonomous assassination drone and an Amazon book delivery drone”.

“Another big issue over the next decade is job automation. Many leading economists think that the growing inequality that gave us Brexit and Trump is driven by automation. Here, there’s a huge opportunity to make everyone better off if the government can redistribute some of this great wealth that machines can produce to benefit everybody.”

In this respect Tegmark believes the UK, with its belief in the free market and history of the NHS and welfare state, could play a leading role in harnessing corporate innovation for national benefit. The problem with that analysis is that, aside from the fact that much AI research is led by authoritarian regimes in Russia and China, the lion’s share of advances are coming from America or American companies – and as a society the US has not traditionally been over-concerned with issues of inequality.

In the book, Tegmark hails Google’s Larry Page, one of the wealthiest men on Earth, as someone who might turn out to be the most influential human who has ever lived: “My guess is that if superintelligent digital life engulfs our universe in my lifetime, it will be because of Larry’s decisions.”

He describes Page as he describes Musk – as thoughtful and sincerely concerned about humanity’s plight. No doubt he is, but as a businessman he’s primarily concerned with profit and stealing a march on competitors. And as things stand, far too much decision-making power resides in the hands of unrepresentative tech billionaires.

It seems to me that while the immediate issues of AI are essentially technological or, in the political sense, technical, those waiting along the road are far more philosophical in nature. Tegmark outlines several different outcomes that might prevail, from dystopian totalitarian dictatorship to benign machine control.

“It’s important to realise that intelligence equals power,” he says. “The reason we have power over tigers isn’t because we have bigger muscles or sharper teeth. It’s because we’re smarter. A greater power is likely to ultimately control our planet. It could be either that some people get great power thanks to advanced AI and do things you wouldn’t like them to, or it could be that machines themselves outsmart us and manage to take control. That doesn’t have to be a bad thing, necessarily. Children don’t mind being in the presence of more intelligent beings, named mummy and daddy, because the parents’ goals are in line with theirs. AI could solve all our thorny problems and help humanity flourish like never before.”

But wouldn’t that radically alter humanity’s sense of itself, looking to superior agents to take care of us? We would no longer be the primary force shaping our world.

“That’s right,” he says, with a smile in his voice, “but there are many people in the world today who already believe that’s how it is and feel quite happy about it. Religious people believe there is a being much more powerful and intelligent than them who looks out for them. I feel that what we really need to quit is this hubristic idea of building our self-worth on a misplaced idea of human exceptionalism. We humans are much better off if we can be humble and say maybe there can be beings much smarter than us, but that’s OK, we get our self-worth from other things: having really profound relationships with our fellow humans and wonderfully inspired experiences.”

At such moments Tegmark can sound less like a hardcore materialist physicist than some trippy new-age professor who’s spent too long contemplating the cosmos. But surely, I say, the modernist project that has built these machines was fuelled by a belief that God was an invention we no longer required – wouldn’t it be a bitter historical irony if we ended up inventing new gods to supplant the old one?

Tegmark laughs. “I think one of the things we will need in the age of AI is a good sense of humour and appreciation of irony. We keep gloating about being the smartest on the planet precisely because we’re able to build all this fancy technology which is on track to make us not be the smartest on the planet!”

Having researched and written this book, Tegmark is much more optimistic than he was in that lachrymose moment in South Kensington. But it’s not an optimism built on the assumption that everything will turn out OK in the end. Rather, he believes we must act if we’re to secure a beneficial outcome. People and governments alike, he says, must turn their attention to the oncoming future, prepare appropriate safety engineering, and think deeply about the kind of world we want to create.

So what would he say if he could address that UN meeting in November?

“Fund AI safety research, ban lethal autonomous weapons, and expand social services so that wealth created by AI makes everybody well off.”

As ever, the road ahead will be filled with the unforeseen consequences of today’s action or lack of it, but adopting that three-point plan seems like a firm step in the direction of making the future that much less worrying.

• Life 3.0 by Max Tegmark is published by Allen Lane (£20). To order a copy for £17 go to guardianbookshop.com or call 0330 333 6846. Free UK p&p over £10, online orders only. Phone orders min p&p of £1.99.


[Photograph] ‘We as a society have been way too flippant about this’: Max Tegmark in his lab at MIT. Photograph: The Washington Post/Getty Images
[Photograph] A customer takes a photograph of Toshiba’s humanoid robot Aiko Chihira, who greets customers at the Mitsukoshi department store in Tokyo. Photograph: Bloomberg via Getty Images
