Beyond AlphaGo, what’s next for AI?
“Yes, now there is a God.” That was the supercomputer’s answer when asked if there was a God in ‘Answer,’ Fredric Brown’s 1954 short story about AI, before it struck down a fearful man with a bolt of lightning.
AI is a very real threat, according to some of the world’s sharpest minds. In his book Superintelligence: Paths, Dangers, Strategies, Nick Bostrom, a philosopher at Oxford University, warns that true AI might lead to humanity’s extinction.
Leading figures like Elon Musk and Sam Altman concur. Last year, the duo joined other Silicon Valley investors to create OpenAI, a billion-dollar non-profit organization dedicated to open-sourcing AI, and to ensuring that we don’t end up birthing a monster that swallows us whole.
The end of the path
Musk has called AI our “biggest existential threat”, even tweeting that it could potentially be “more dangerous than nukes”. If that sounds alarmist to you, consider the fact that Stephen Hawking, a brilliant theoretical physicist, once told the BBC, “The development of full AI could spell the end of the human race.” These people are hardly Luddites resisting technological change. Instead, they represent some of the brightest minds today, people actively committed to advancing AI, who nonetheless fear that the world is not doing enough to contain the potential risks.
Bostrom, Musk and Hawking are all referring to a speculative event known as the technological singularity. The singularity posits a future where AI is able to improve itself without human help, and rapidly exceeds human intellectual capacity by orders of magnitude, so much so that its superintelligence exceeds our ability to even understand it.
AlphaGo gave us a taste of this mysterious superintelligence in its games with Lee Se-dol. During their second Go match, AlphaGo made a move that made no sense to Lee, or to any of the human experts watching the game. Move 37 was so unprecedented that Lee had to leave the match room for 15 minutes to think of a response. And AlphaGo won.
The worry, then, is that an all-powerful intellect would be entirely beyond our comprehension, and thus our control. After all, history isn’t exactly rife with examples of more powerful beings being subservient to weaker ones. Alternatively, humanity could succumb to a flawed AI: one so intelligent, yet so constrained by its faulty programming, that it does exactly what we ask of it without considering what we really want.
Consider Nick Bostrom’s ‘paperclip maximizer,’ a thought experiment that asks you to imagine a superintelligent AI whose goal is to maximize the number of paperclips in its collection. If taken to its logical conclusion, this entity might just convert most of the matter in the universe – which includes us – into paperclips.