Computers that are smarter than humans? It’s closer than you think
Johnny Depp’s Transcendence imagines a world of hyper-intelligent computing that outstrips the human brain’s thinking prowess. Paddy Smith explains why it’s neither far-fetched nor far away
Switch your mind to ‘open’ and upload this modicum of data: computers could be more intelligent than us within 30 years. They can already beat us at chess (in 1997 IBM’s Deep Blue famously clobbered world champion Garry Kasparov) and outclass us at complex string calculations (shortly after the turn of the century computers helped us unravel our entire DNA sequence). Now they can drive cars and tell you if you’re smiling, yet the most powerful data processing unit we know remains the human brain. But for how long?
Luckily, we’ve had plenty of time to get used to the idea of inventing our way into second place. It’s nearly 200 years since Mary Shelley dreamt up a mad scientist called Victor Frankenstein, whose creation learns to speak and read before demanding a female counterpart be manufactured to keep him company. Since Frankenstein, the hypothetical superiority of synthetic intelligence has come second only to alien life among the science-fiction canon’s preferred plots.
A few modern notables: HAL in Stanley Kubrick’s 2001: A Space Odyssey (1968); Ash in Ridley Scott’s Alien (1979); GERTY in Duncan Jones’ Moon (2009). All these fictional examples have something in common – like Dr Frankenstein’s monster, they turn against their creators: us.
What none of them does is provide the backstory for this possibility, and that is where Transcendence, the directorial debut from Wally Pfister starring Johnny Depp, takes up the slack. Depp plays Dr Will Caster, an artificial intelligence (AI) evangelist who gets shot but has his brain uploaded to a computer in an attempt to salvage his expansive knowledge. The scientist then becomes the computer (or the computer becomes him, complete with emotional processing), turns megalomaniacal, and so on.
Scared? No, me neither. We’ve all seen it before. But the theory is sound and becoming closer to a practical reality daily. The computers that drive cars and beat you at chess will one day be able to learn better and faster than you. They won’t need to sleep. Instead they will sink their energy into developing ever more powerful data-crunching offspring who, in turn, will build their own superior ‘children’. Unchecked by the glacial mores of evolution or the limitations of mere biological matter, it is unsurprising that sci-fi prophecy foresees mankind very soon at the mercy of his meisterwerk.
The tipping point – the point that Transcendence imagines – is known to computer scientists as the singularity. It is the single moment when AI overtakes human intelligence. That is to say computers will not only be able to beat us at chess, but they will be able to process a suitably victorious emotional response, too. After all, our own brains are simply a biological switching system with synapses for transistors. Why should a computer not feel reward too?
The singularity theory assumes we can reach – and go beyond – such a point, and that isn’t a pie-in-the-sky idea born in the writers’ room at Warner Bros. It is based on established scientific research, and it could come along sooner than you might think.
Ray Kurzweil is an inventor, author and futurist. He’s also Google’s director of engineering. Oh, and he thinks the singularity could happen in the 2040s, thirty-odd years from now. He sees a future in which computers are a billion times more powerful than the human brain. In case you were in any doubt about whether he would stick by those claims, he’s even written them in a book called The Singularity Is Near. If his station at Google isn’t enough to convince you of his sanity, he was granted America’s biggest tech medal by Bill Clinton in 1999. See the photocopier in your office? Yeah, he invented the flatbed scanner, too.
That’s not to say Kurzweil isn’t outspoken. But he’s also not alone. And this is where the fictional plot of Transcendence thickens into reality. Last year in New York scientists gathered for the Global Future 2045 conference, a networking event designed to bring together the minds that hope to bring about human immortality by combining our biological brains with the infallibility of modern computing.
The conference was founded by Dmitry Itskov, a Russian entrepreneur who is reputed to have spent US$3 million on the project to date. The plan is to
develop first a robot that can be controlled via a human brain, then a method of transferring a biological brain into such a robot. Once these milestones have been reached, it will be time to work on transferring data from an organic brain to a synthetic one. Lastly, the scientists will attempt to create holographic beings to replace cumbersome physical robots.
Mad as all this sounds, some of the technologies already exist. Prosthetics can already be controlled using nerve triggers – down to the movement of a single finger. Keeping the brain alive by artificial means is an established medical practice. And Tupac Shakur rose from the dead to perform as a hologram alongside Snoop Dogg and Dr Dre at Coachella, a Californian music and arts festival, in 2012. So while the idea of holograms wandering about with synthetic human brains (memories and all) might seem like something that belongs in the next century, Itskov is aiming to get to that stage by 2045, just over 30 years from now.
At the Global Future 2045 conference, roboticists can rub shoulders with heavyweight humanitarian foundation leaders and the likes of Ray Kurzweil (he attended last
year’s event). From afar, the Dalai Lama has endorsed Itskov’s endeavour, while closer to home the US government is investing US$100 million in science to “better understand how we think, learn and remember” and backing the Defense Advanced Research Projects Agency (DARPA) with a further US$50 million for “understanding the dynamic functions of the brain.” DARPA has already developed a humanoid robot called Atlas, which is expected to be able to drive a car and operate power tools this year.
Are we all sitting comfortably now?
More concerning still are the numerous unanswered ethical questions, some of which are addressed in Transcendence. Does a human brain that has been transferred to a machine qualify for the same rights as any other person? What if it is simply the data that has been transferred – memories, motor skills, knowledge? How can we protect our species against a creation more intelligent than us? What happens, in short, if things start to go wrong? Or should we say when things go wrong?
Unsurprisingly, Transcendence assumes a cautionary position on the possibilities that await us on the other side of the singularity, as do the bulk of sci-fi stories in its vein (yes, even Frankenstein). But that’s just because it makes for a better plot. Right?
Well, yes. And no. There is plenty of good-natured artificial intelligence in the sci-fi genre. In Douglas Adams’ The Hitchhiker’s Guide to the Galaxy, a computer called Deep Thought builds a superior computer called Earth. It is so large that it is often mistaken for a planet and is famously described in the eponymous interplanetary guide as ‘mostly harmless.’ The Culture universe, a sci-fi environment created by Iain M Banks, is a post-human liberal anarchy where artificial intelligence benignly provides a lifestyle of supernatural abundance for its sentient subjects.
But these are the rare exceptions to the usual assumption that once we have created superior robots, they will turn on us. Sci-fi godfather Isaac Asimov famously postulated the Three Laws of Robotics to be programmed into the DNA of advanced AI to prevent robots hurting humans or each other, and to ensure they would obey our instructions. But he also managed to envisage a situation in which a robot with slightly modified coding could justify attacking a human. (The short story, entitled Little Lost Robot, formed the basis for the 2004 film I, Robot.)
Another common scenario in which we find ourselves at war with our artificially intelligent creations sees us
competing for resources in a stark reversal of the future envisaged in Iain M Banks’ Culture universe. But that assumes that technology will be competitive, a trait that is unlikely to cross the artificial mind of something that has not had to endure the gruelling challenges of genetic evolution.
More realistically, artificial intelligence could become the next nuclear warfare, with governments or individuals (presumably the sort of evil overlords depicted in Bond films) misappropriating the technology for their own personal gain. It’s pretty harrowing to think that the world’s most feared weapon of mass destruction might be able to think for itself, even build more of itself.
If all this seems too terrifying to imagine, relax. We have a major defence against our real-life Frankensteins, should they turn against us in our lifetimes. It is the universal saviour of technological rebellion, as any IT helpdesk can already tell you. This safety measure has been installed on almost every electronic device ever made and continues to be an important physical feature of gadgetry, even in our touchscreen-obsessed world. We call it the off switch.
Image captions:
English actor Peter Cushing as Baron Victor Frankenstein in ‘Frankenstein Must Be Destroyed’, 1969
Colin Clive plays the driven doctor and Dwight Frye his deformed assistant Fritz in Frankenstein, 1931, directed by James Whale
Johnny Depp in Transcendence
Director Stanley Kubrick lines up a shot through the camera during filming of 2001: A Space Odyssey
Yaphet Kotto, Sigourney Weaver and Ian Holm on the set of Ridley Scott’s science-fiction classic Alien, 1979
Duncan Jones’ Moon, 2009