China Daily (Hong Kong)

Fears of a ‘superman’ race told

- By SIMON HEFFER

Nick Bostrom, like Bertrand Russell, is eminent as a mathematician and as a philosopher. Unlike Russell, he deals predominantly with how our world awaits transformation by artificial intelligence.

When we spoke early last month in the rather intense atmosphere of the Future of Humanity Institute — which he founded and of which he is director — at Oxford University, he made it apparent that the advent of the self-driving car (to give an easily comprehensible example of the use of AI, though one uses it in every Google search or smartphone task) will be but the tiniest part of the revelations, and the revolution, to come.

The institute sits within Oxford’s philosophy faculty but is home to mathematicians, engineers and computer scientists as well as philosophers. Professor Bostrom is a tall, balding Swede of 44, notable for his study of existential risk and his 2014 book Superintelligence: Paths, Dangers, Strategies.

It married the idea of risk with what AI could accomplish and argued that “the creation of a superintelligent being represents a possible means to the extinction of mankind”. If that makes him sound rather intense — and he exudes a nervous energy and restlessness not always apparent among Oxford dons — then it is worth remembering that he once did stand-up on the London comedy circuit.

His interest in artificial intelligence began when he was an undergraduate in Sweden, where he took a course on the subject because he wanted to understand “how does a lump of grey matter break down a task into the specific sub-tasks that you need to do to solve it?”

“It had struck me for a long time that machine intelligence was the sort of thing that could fundamentally transform the human condition. We’re not talking about a cooler iPhone or a more energy-efficient car, but a fundamental transformation. It’s the last invention we would ever have to make.”

He agrees that the pace of development of AI has speeded up more than he expected at the time he wrote his book. As for when machines might be able to take over, “there is huge uncertainty about it: the short answer is, nobody knows”. A survey of machine intelligence specialists asked when they thought there would be a 50 per cent chance of machines matching human intelligence.

“The median answer was 2040 to 2045. But some were convinced it will happen in the next 10 to 15 years. Others were convinced it will never happen.”

What about the prospect of someone being able to upload his or her brain on to a computer, so that even after the body has died the mind could live on? “There is this hypothetical technology of uploading or whole brain emulation. It looks like this is physically possible technology, far beyond what one can do today. It’s one of the possible paths towards machine intelligence. If you could digitise a whole human brain then you would have something in a machine that was intelligent.”

He believes this will happen, “but probably after we have achieved machine intelligence by more synthetic means”. By that, he means that artificial intelligence would be required to develop the uploading of a human brain.

It is one thing to mimic human intelligence: but what about human consciousness? “The word ‘consciousness’ is much more loaded with philosophical ideas. ‘Intelligence’ is much more behaviourally defined — it’s the ability to solve complex problems and puzzles. It’s easy for people to define whether an action constitutes intelligence or not: consciousness remains a more complex question. One aspect of consciousness is the ability to reflect on your own experiences. Consciousness in that sense would I think arise as one makes AI more capable.”

It is what he calls “the functional sense of consciousness” that might allow AI to be turned upon its creators, and to control their world. He says it is happening already, with ads that come up as one browses the internet that are often linked, thanks to previous searches, to the browser’s interests. He suggests that we might be prompted “to read an article, or a headline” because the machine knows what interests us. But if “enough optimisation power” is applied, then he agrees that what comes up may not always be “what is good for you.”

And what of the much-discussed danger that AI will put huge numbers of people out of work? “In the near term, I think some of those concerns are overhyped. But in the end, if you have machines that can do everything humans can do and can do it cheaper and better, then human labour would no longer be needed — including white collar labour. All automation — not just AI — is about being able to do more with less.”

So how would people have an income — how would they survive — if machines did all the work? “If you can manufacture everything without labour, then prices would come down. So even a modest income stream now could be a vast fortune in a world where everything is almost free. There would be some income stream. Some countries have a big pension fund everyone has been paying into. There has been talk of a universal basic income.” So it may require a form of mass state redistribution?

“There may be a millionfold growth in the economy. So a pound now would be worth a million then. You just have to make sure everyone has ten quid, and most people do have that. And as prices fall, real incomes rise.”

He concedes that, until the new wealth has trickled down through society, “there might be disturbances and temporary processes that have to be managed that could be tricky. But in the long run, it looks like a very attractive endpoint, which is a world of abundance.”

I ask him whether he is worried about severing the link between effort and reward, and he says: “I am writing a paper on the remaining part of the question.” It will consider what happens “if AI can be developed without it being used to wage war, or to allow one firm to take over the world, and everybody ends up with more than enough, then what do we do with our lives?”

Isn’t there a danger that governments will want to nationalise this new power and control it? Might it not change the whole potential of the state, and threaten our constitutional arrangements?

If superintelligence arrives, he replies, “then there are a lot of fundamental aspects of the human condition that come up for grabs. We must solve the alignment problem. But we also have to develop norms and shared understandings. If superintelligence happens, it should be for the benefit of all. It’s too big for any one corporation, or even one country, to monopolise it. All of humanity would share the risks of this transition and all should share the upside as well.”

One early development could be “this big surveillance network with cameras that can recognise people’s faces and can keep track of where people are”. I challenge him about the civil liberties aspect of such an infrastructure: he says that will depend on whether people are “skimming off the information in real time” or whether it is just used “after an incident” — “or it could be used when you walk into a shop and just pick up something and walk out with it”, and your account is automatically charged.

And although dictatorships might use AI for nefarious purposes, he thinks people might have virtual reality headsets in which those they meet are evaluated for their “honesty, conscientiousness and loyalty”, and “it might just make it harder for scoundrels and bastards to move on to new victims. It might just shift the whole thing into social equilibrium.”

Has his study made him more optimistic, or more pessimistic, about humanity’s future? “Both more optimistic and more pessimistic. I’m impressed by the magnitude of how good it could be if it goes well, and how bad it could be if it goes poorly. I’m impressed by how big the stakes are.”

He concedes there will have to be some regulation — what he calls “the governance issue” — “but the biggest variable is just how hard the problem turns out to be of making it go well. That is where the greatest uncertaint­y is. We could all succeed. We could all fail. We just don’t know.”

PHOTO PROVIDED TO CHINA DAILY. Caption: “It had struck me for a long time that machine intelligence was the sort of thing that could fundamentally transform the human condition.”
