China Daily

AI: How scared should we be about machines taking over? Life 3.0 by Max Tegmark argues questions about artificial intelligence need to be confronted sooner rather than later.

By STEVEN POOLE. Steven Poole's Rethink: The Surprising History of New Ideas is published by Random House.

“Prediction is very difficult,” the great physicist Niels Bohr is supposed to have said, “especially when it’s about the future.” That hasn’t stopped a wave of popular-science books from giving it a go, and attempting, in particular, to sketch the coming takeover of the world by superintelligent machines.

This artificial-intelligence explosion — whereby machines design ever-more-intelligent successors of themselves — might not happen soon, but Max Tegmark, an American physicist and founder of the Future of Life Institute, thinks that questions about AI need to be addressed urgently, before it’s too late. If we can build a “general artificial intelligence” — one that’s good not just at playing chess but at everything — what safeguards do we need to have in place to ensure that we survive?

We are not talking here about movie scenarios featuring killer robots with red eyes. Tegmark finds it annoying when discussions of AI in the media are illustrated like this: the Terminator films, for example, are not very interesting for him because the machines are only a little bit cleverer than the humans. He outlines some subtler doomsday scenarios. Even an AI that is programmed to want nothing but to manufacture as many paper clips as possible could eradicate humanity if not carefully designed. After all, paper clips are made of atoms, and human beings are a handy source of atoms that could more fruitfully be rearranged as paper clips.

What if we programmed our godlike AI to maximise the happiness of all humanity? That sounds like a better idea than making paper clips, but the devil’s in the detail. The AI might decide that the best way to maximise everyone’s happiness is to cut out our brains and connect them to a heavenly virtual reality in perpetuity. Or it could keep the majority entertained and awed by the regular bloody sacrifice of a small minority. This is what Tegmark calls the problem of “value alignment”, a slightly depressing application of business jargon: we need to ensure that the machine’s values are our own.

What, exactly, are our own values? It turns out to be very difficult to define what we would want from a superintelligence in ways that are completely rigorous and admit of no misunderstanding. And besides, millennia of war and moral philosophy show that humans do not share a single set of values in the first place. So, though it is pleasing that Tegmark calls for vigorously renewed work in philosophy and ethics, one may doubt that it will lead to successful consensus.

Even if progress is made on such problems, a deeper difficulty boils down to that of confidently predicting what will be done by a being that, intellectually, will be to us as we are to ants. Even if we can communicate with it, its actions might very well seem to us incomprehensible. As Wittgenstein said: “If a lion could talk, we could not understand it.” The same might well go for a superintelligence. Imagine a mouse creating a human-level AI, Tegmark suggests, “and figuring it will want to build entire cities out of cheese”.

A sceptic might wonder whether any of this talk, though fascinating in itself, is really important right now, what with global warming and numerous other seemingly more urgent problems. Tegmark makes a good fist of arguing that it is, even though he is agnostic about just how soon superintelligence might appear: estimates among modern AI researchers vary from a decade or two to centuries to never, but if there is even a very small chance of something happening soon that could be an extinction-level catastrophe for humanity, it’s definitely worth thinking about.

In this way, superintelligence arguably falls into the same category as a massive asteroid strike such as the one that wiped out the dinosaurs. The “precautionary principle” says that it’s worth expending resources on trying to avert such unlikely but potentially apocalyptic events.

In the meantime, Tegmark’s book, along with Nick Bostrom’s Superintelligence (2014), stands out among the current books about our possible AI futures. It is more scientifically and philosophically reliable than Yuval Noah Harari’s peculiar Homo Deus, and less monotonously eccentric than Robin Hanson’s The Age of Em.

Tegmark explains brilliantly many concepts in fields from computing to cosmology, writes with intellectual modesty and subtlety, does the reader the important service of defining his terms clearly, and rightly pays homage to the creative minds of science-fiction writers who were, of course, addressing these kinds of questions more than half a century ago. It’s often very funny, too: I particularly liked the line about how, if conscious life had not emerged on our planet, then the entire universe would just be “a gigantic waste of space”.

Tegmark emphasises, too, that the future is not all doom and gloom. “It’s a mistake to passively ask ‘what will happen’, as if it were somehow predestined,” he points out. We have a choice about what will happen with technologies, and it is worth doing the groundwork now that will inform our choices when they need to be made.

Do we want to live in a world where we are essentially the tolerated zoo animals of a powerful computer version of Ayn Rand; or will we inadvertently allow the entire universe to be colonised by “unconscious zombie AI”; or would we rather usher in a utopia in which happy machines do all the work and we have infinite leisure?

The last sounds nicest, although even then we’d probably still spend all day looking at our phones.

“It’s a mistake to passively ask ‘what will happen’, as if it were somehow predestined.” Max Tegmark, American physicist and founder of the Future of Life Institute, writing in his new book, Life 3.0

