Books

What are the odds?
At the beginning of lockdown, seeking reassurance that the coronavirus, while clearly sub-optimal, was not necessarily an existential risk to humanity, I tuned into Azeem Azhar’s technology podcast Exponential View for his interview with Australian philosopher Toby Ord. A senior research fellow at the University of Oxford’s Future of Humanity Institute, Ord had just released his new book, The Precipice: Existential Risk and the Future of Humanity, into a pandemic for which it might have been tailor-made. He explained that historically, over any given century, the risk of humanity’s existential collapse was about one in 10,000. In the 21st century, this probability has risen to one in six.
These figures failed to provide the consolation I had hoped for. Over the weeks that followed, I could think of little else, introducing these Russian-roulette numbers (unwelcomely) into most of my conversations. I felt I was now regarding my children over a great divide: when they excitedly shared their future plans – I’m going to be a farmer! I’m going to be a classicist! – it was an effort to smile encouragingly. “That statistic is actually quite hopeful,” my mother remarked, batting away Chicken Little over Zoom, but the fact that humanity’s pending extinction was exponentially more likely than, say, my own premature demise in a car accident struck me as anything but. Finally, in an attempt to stop obsessing about the book, I ordered it. For some time it remained on the coffee table, and I avoided it like kryptonite until my 11-year-old son offered to read and summarise it for me, and I was shamed into opening its covers.
As it transpired, it was a better experience to read the book than to dread the book. “Don’t despair,” Ord counsels the reader, reminding us that as these problems are largely of our own making, so too are the solutions. “If we hold our heads high, we can succeed.” The only stumbling block, of course, is our nature. Rarely has the contrast between our best and worst selves been writ so large; never have the stakes been so high. What a piece of work are we. As Don Watson remarked in these pages in May, “It’s as if some freak thing has possessed our heads and made our world a stage to dramatise the great conundrum.”
According to Ord’s definition, existential catastrophe can mean either human extinction or failed continuation, which can in turn take the form of unrecoverable civilisational collapse or unrecoverable dystopia (choose your own adventure). Up until the 20th century, all existential risks for humanity came from the natural world: principally asteroids and comets, super-volcanic eruptions and stellar explosions. Now the meteorite we should fear most comes from ourselves. Ord lists our biggest anthropogenic risks as nuclear war, climate change and environmental damage, engineered pandemics and unaligned artificial intelligence (that is, AI unaligned with human objectives). Our current strategies for managing these possibilities offer little cause for cheer. The Biological Weapons Convention – the international body charged with prohibiting bioweapons – has just four employees, and a smaller annual operating budget than an average McDonald’s restaurant. Ord observes that as a species we spend more on ice-cream per year than we do on the mitigation of existential risk.
My own despair has recently circled around climate change, which Ord acknowledges as a serious existential threat, not least because of the runaway greenhouse effect. But he makes a compelling case to worry even more about unaligned artificial intelligence. He cites a 2016 survey of 300 leading AI researchers, in which half of the respondents predicted that the probability of the long-term impact of AI being “extremely bad (e.g. human extinction)” was at least 5 per cent. It seems extraordinary that these same researchers should proceed blithely with their research, regardless, but the historical precedents are clear. Ord observes that during the development of nuclear weapons “the scientists and military appear to have assumed full responsibility for an act that threatened all life on Earth. Was this a responsibility that was theirs to assume?”
But whose responsibility is technology’s governance? This, of course, is one of the problems: who, exactly, is running the show? Ord says he finds “it useful to consider our predicament from humanity’s point of view: casting humanity as a coherent agent, and considering the strategic choices it would make were it sufficiently rational and wise”. He likens humanity’s current phase to adolescence, in which maturity and wisdom have not yet caught up with our newfound powers. If we manage to dodge the perils of the next few centuries, almost unlimited potential lies ahead of us. Ord is no Luddite and appreciates that we need technology in order to achieve this: without it, we will eventually succumb to that asteroid, or similar. But even as he champions science, he acknowledges its limitations. Existential risk, in particular, is a type of problem that does not graft easily onto the scientific process. In the words of Carl Sagan, “Theories that involve the end of the world are not amenable to experimental verification – or at least, not more than once.”
Over the pages of the book, Ord presents a number of dystopian possibilities, including an unaligned AI system taking over the internet, hoovering up the world’s information to augment its own intelligence, and blackmailing world leaders to use weapons of mass destruction in order to satisfy its own reward goals. Clearly, AI needs to be aligned with “human values”, but how can this be guaranteed when its own evolution is so unpredictable? And is there any real consensus about what “human values” are?
According to Ord, the average expert estimation in that same 2016 survey was of a 50 per cent likelihood that AI systems would be “able to accomplish every task better and more cheaply than human workers” by 2061, and a 10 per cent chance that they would by 2025. Human redundancy is coming faster than we realise; surely our government should take this into account as it attempts its ham-fisted social engineering. It seems clear that the future will demand fewer vocational skills; what it demands urgently is the getting of wisdom. And it demands a particular type of wisdom, encapsulating multiple strands of human knowledge, as demonstrated by Ord: “Understanding the risks requires delving into physics, biology, earth science and computer science; situating this in the larger story of humanity requires history and anthropology; discerning just how much is at stake requires moral philosophy and economics; and finding solutions requires international relations and political science.” The book’s 132 pages of endnotes speak eloquently to this cross-disciplinary dialogue, critical to the governance of technology. And yet our federal government has decided that this, of all times, is the moment to penalise the humanities. Why would we need critical thinking?
In this era of runaway individualism – of the fiercely defended right not to wear a mask – our only hope lies in a massive recalibration of how we see ourselves. Somehow, we need to reconceive of ourselves as Ord’s “coherent agent”: as a humanity that transcends the concerns of personhood or nationhood or even the fascinations of identity politics. The pandemic has provided a foretaste of the crises to come, and it has been instructive to witness the difference between a leadership that fosters cooperation (Ardern) and a leadership that promotes divisiveness (Trump). Among other things, the train wreck of contemporary America speaks of the profound imaginative failure at the heart of individualism.
But our current situation calls for a more expansive version of humanity again. Ord was an instigator of the effective altruism movement, and came to the study of existential risk from a background in global health and poverty, when he realised “the people of the future may be even more powerless to protect themselves from the risk we imposed than the dispossessed of our own time”. It is something of a philosophical conundrum: the amount of value to attribute to beings who do not yet exist. As I read the book, I was struck by the vast debt we owe our ancestors, and – as Ord puts it – our obligation to pay it forward. It is a notion of humanity that can feel almost spiritual: the “coherent agent” is intergenerational as well as international; our actions must be governed by long-termism as well as collectivism.
You might think that the value of protecting humanity was a given, but Ord approaches this assumption with the same forensic rigour he brings to any other. Alongside the obligation to safeguard the “stunning inheritance” of our past and our future potential, we have the particular responsibility of being the only complex life yet discovered in the universe. It sometimes seems like confirmation bias: to think that we, of all things, should be the most complex and intelligent things we have ever observed. It seems equally unlikely that this current batch of humans – you and I, converged for a moment on this page of The Monthly – could be the ones who could preside over the end of all this, snuffing out consciousness in the universe forever. But the mind reels from all sorts of possibilities in this book. Its modus operandi is the quasi-mystical art of probability, another of our spectacular inventions. And it is the fate of our spectacular inventions that grieves me as much as anything. If we vanish, the laws of physics will still pertain – stark, elegant, beautiful – but who will be around to crack their cryptic codes? All those portals to story, to fantasy, to music, to dreamscape: slammed shut forever. Nature will prevail; it is our fictions I fear for.
For the most part, Ord writes with a relentless, almost affectless rationality, as if the book itself were generated by AI. Nobody could accuse him of hysteria, as he sanguinely totes up the probabilities of our premature demise. In fact, his level tone is a source of reassurance: in these overwrought times, it is a relief that some humans, at least, are capable of looking objectively at the stakes. But while his narrative is rooted in scientific rigour, its futurism also lends it a heady, speculative aura. In the final chapter, Ord’s scope becomes cinematic: he pans outward, situating humanity’s future in deep space and time, and painting our future as limitless. It is thrilling stuff, and clearly designed to inspire (rather than induce learned helplessness, an occupational hazard of the study of existential risk).
Ord’s message is clear: “Safeguarding humanity through these dangers should be a central priority of our time.” And it certainly makes one ponder one’s use of time (all those hours practising the piano). But, like Peter Singer, Ord is also a pragmatist, offering global suggestions – such as enhancing the World Health Organization’s capacity to respond to pandemics, and introducing prohibitions against unnecessary extinction risk in international law – as well as suggestions for the individual. The latter include making strategic career choices, donating to relevant charities, and instigating conversations with one’s friends and children (maybe reviewing his book in a magazine?). “Don’t be fanatical,” he advises. “Boring others with endless talk about this cause is counterproductive.”
Still, you should read this book. It may be bracing, but it does offer some hope. And that hope rises incrementally with everybody who opens its covers and continues the conversation.