The future is coming, but not as we think
If you hear a scenario about the world in 2050 and it sounds like science fiction, it is probably wrong; but if you hear a scenario about the world in 2050 and it does not sound like science fiction, it is certainly wrong.
Technology is never deterministic: it can be used to create very different kinds of society. In the 20th century, trains, electricity and radio were used to fashion Nazi and communist dictatorships, but also to foster liberal democracies and free markets. In the 21st century, AI will open up an even wider spectrum of possibilities. Deciding which of these to realise may well be the most important choice humankind will have to make in the coming decades. This choice is not a matter of engineering or science. It is a matter of politics. Hence it is not something we can leave to Silicon Valley – it should be among the most important items on our political agenda. Unfortunately, AI has so far hardly registered on our political radar.
Max Tegmark’s Life 3.0 tries to rectify the situation. Written in an accessible and engaging style, the book offers a political and philosophical map of the promises and perils of the AI revolution. Instead of pushing any one agenda or prediction, Tegmark seeks to cover as much ground as possible, reviewing a wide variety of scenarios concerning the impact of AI on the job market, warfare and political systems.
Life 3.0 does a good job of clarifying basic terms and key debates, and of dispelling common myths. While science fiction has caused many people to worry about evil robots, for instance, Tegmark rightly emphasises that the real problem lies in the unforeseen consequences of developing highly competent AI. In Tegmark’s words, “the real risk with artificial general intelligence isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.”
Naturally Tegmark’s map is not complete, and in particular it does not give enough attention to the confluence of AI with biotechnology. The 21st century will be shaped not by infotech alone, but rather by the merger of infotech with biotech. AI will be of crucial importance precisely because it will give us the computing power necessary to hack the human organism. Long before the appearance of superintelligent computers, our society will be completely transformed by rather crude and dumb AI that is nevertheless good enough to hack humans, predict their feelings, make choices on their behalf and manipulate their desires. It might be apocalypse by shopping.
Yet the real problem with Tegmark’s book is that it soon bumps up against the limits of present-day political debates. The AI revolution turns many philosophical problems into practical political questions and forces us to engage in “philosophy with a deadline”, as the philosopher Nick Bostrom has called it. Philosophers are patient people, engineers are impatient, and hedge fund investors are more restless still. When Tesla’s engineers set out to design a self-driving car, they cannot wait while philosophers argue about its ethics.
Consequently, Tegmark soon leaves behind familiar debates about jobs, privacy and weapons of mass destruction, and ventures into realms hitherto associated with philosophy, theology and mythology, taking things beyond our own planet. This can hardly be avoided, but I fear that many of his prospective readers will not follow him there. Our political systems, and indeed our individual minds, are just not built to think on such a scale.