The Monthly (Australia)

Books

What are the odds?

Anna Goldsworthy on Toby Ord's 'The Precipice'

At the beginning of lockdown, seeking reassurance that the coronavirus, while clearly sub-optimal, was not necessarily an existential risk to humanity, I tuned into Azeem Azhar's technology podcast Exponential View, for his interview with Australian philosopher Toby Ord. A senior research fellow at the University of Oxford's Future of Humanity Institute, Ord had just released his new book, The Precipice: Existential Risk and the Future of Humanity, into a pandemic for which it might have been tailor-made. He explained that historically, over any given century, the risk of humanity's existential collapse was about one in 10,000. In the 21st century, this probability has risen to one in six.

These figures failed to provide the consolation I had hoped for. Over the weeks that followed, I could think of little else, introducing these Russian-roulette numbers (unwelcomely) into most of my conversations. I felt I was now regarding my children over a great divide: when they excitedly shared their future plans – I'm going to be a farmer! I'm going to be a classicist! – it was an effort to smile encouragingly. "That statistic is actually quite hopeful," my mother remarked, batting away Chicken Little over Zoom, but the fact that humanity's pending extinction was exponentially more likely than, say, my own premature demise in a car accident struck me as anything but. Finally, in an attempt to stop obsessing about the book, I ordered it. For some time it remained on the coffee table, and I avoided it like kryptonite until my 11-year-old son offered to read and summarise it for me, and I was shamed into opening its covers.

As it transpired, it was a better experience to read the book than to dread the book. “Don’t despair,” Ord counsels the reader, reminding us that as these problems are largely of our own making, so too are the solutions. “If we hold our heads high, we can succeed.” The only stumbling block, of course, is our nature. Rarely has the contrast between our best and worst selves been writ so large; never have the stakes been so high. What a piece of work are we. As Don Watson remarked in these pages in May, “It’s as if some freak thing has possessed our heads and made our world a stage to dramatise the great conundrum.”

According to Ord's definition, existential catastrophe can mean either human extinction or failed continuation, which can in turn take the form of unrecoverable civilisational collapse or unrecoverable dystopia (choose your own adventure). Up until the 20th century, all existential risks for humanity came from the natural world: principally asteroids and comets, super-volcanic eruptions and stellar explosions. Now the meteorite we should fear most comes from ourselves. Ord lists our biggest anthropogenic risks as nuclear war, climate change and environmental damage, engineered pandemics and unaligned artificial intelligence (that is, AI unaligned with human objectives). Our current strategies for managing these possibilities offer little cause for cheer. The Biological Weapons Convention – the international body charged with prohibiting bioweapons – has just four employees, and a smaller annual operating budget than an average McDonald's restaurant. Ord observes that as a species we spend more on ice-cream per year than we do on the mitigation of existential risk.

My own despair has recently circled around climate change, which Ord acknowledges as a serious existential threat, not least because of the runaway greenhouse effect. But he makes a compelling case to worry even more about unaligned artificial intelligence. He cites a 2016 survey of 300 leading AI researchers, in which half of the respondents predicted that the probability of the long-term impact of AI being "extremely bad (e.g. human extinction)" was at least 5 per cent. It seems extraordinary that these same researchers should proceed blithely with their research, regardless, but the historical precedents are clear. Ord observes that during the development of nuclear weapons "the scientists and military appear to have assumed full responsibility for an act that threatened all life on Earth. Was this a responsibility that was theirs to assume?"

But whose responsibility is technology's governance? This, of course, is one of the problems: who, exactly, is running the show? Ord says he finds "it useful to consider our predicament from humanity's point of view: casting humanity as a coherent agent, and considering the strategic choices it would make were it sufficiently rational and wise". He likens humanity's current phase to adolescence, in which maturity and wisdom have not yet caught up with our newfound powers. If we manage to dodge the perils of the next few centuries, almost unlimited potential lies ahead of us. Ord is no Luddite and appreciates that we need technology in order to achieve this: without it, we will eventually succumb to that asteroid, or similar. But even as he champions science, he acknowledges its limitations. Existential risk, in particular, is a type of problem that does not graft easily onto the scientific process. In the words of Carl Sagan, "Theories that involve the end of the world are not amenable to experimental verification – or at least, not more than once."

Over the pages of the book, Ord presents a number of dystopian possibilities, including an unaligned AI system taking over the internet, hoovering up the world's information to augment its own intelligence, and blackmailing world leaders to use weapons of mass destruction in order to satisfy its own reward goals. Clearly, AI needs to be aligned with "human values", but how can this be guaranteed when its own evolution is so unpredictable? And is there any real consensus about what "human values" are?

According to Ord, the average expert estimation in that same 2016 survey was of a 50 per cent likelihood that AI systems would be "able to accomplish every task better and more cheaply than human workers" by 2061, and a 10 per cent chance that they would by 2025. Human redundancy is coming faster than we realise; surely our government should take this into account as it attempts its ham-fisted social engineering. It seems clear that the future will demand fewer vocational skills; what it demands urgently is the getting of wisdom. And it demands a particular type of wisdom, encapsulating multiple strands of human knowledge, as demonstrated by Ord: "Understanding the risks requires delving into physics, biology, earth science and computer science; situating this in the larger story of humanity requires history and anthropology; discerning just how much is at stake requires moral philosophy and economics; and finding solutions requires international relations and political science." The book's 132 pages of endnotes speak eloquently to this cross-disciplinary dialogue, critical to the governance of technology. And yet our federal government has decided that this, of all times, is the moment to penalise the humanities. Why would we need critical thinking?

In this era of runaway individualism – of the fiercely defended right not to wear a mask – our only hope lies in a massive recalibration of how we see ourselves. Somehow, we need to reconceive of ourselves as Ord's "coherent agent": as a humanity that transcends the concerns of personhood or nationhood or even the fascinations of identity politics. The pandemic has provided a foretaste of the crises to come, and it has been instructive to witness the difference between a leadership that fosters cooperation (Ardern) and a leadership that promotes divisiveness (Trump). Among other things, the train wreck of contemporary America speaks of the profound imaginative failure at the heart of individualism.

But our current situation calls for a more expansive version of humanity again. Ord was an instigator of the effective altruism movement, and came to the study of existential risk from a background in global health and poverty, when he realised "the people of the future may be even more powerless to protect themselves from the risk we imposed than the dispossessed of our own time". It is something of a philosophical conundrum: the amount of value to attribute to beings who do not yet exist. As I read the book, I was struck by the vast debt we owe our ancestors, and – as Ord puts it – our obligation to pay it forward. It is a notion of humanity that can feel almost spiritual: the "coherent agent" is intergenerational as well as international; our actions must be governed by long-termism as well as collectivism.

You might think that the value of protecting humanity was a given, but Ord approaches this assumption with the same forensic rigour he brings to any other. Alongside the obligation to safeguard the "stunning inheritance" of our past and our future potential, we have the particular responsibility of being the only complex life yet discovered in the universe. It sometimes seems like confirmation bias: to think that we, of all things, should be the most complex and intelligent things we have ever observed. It seems equally unlikely that this current batch of humans – you and I, converged for a moment on this page of The Monthly – could be the ones who could preside over the end of all this, snuffing out consciousness in the universe forever. But the mind reels from all sorts of possibilities in this book. Its modus operandi is the quasi-mystical art of probability, another of our spectacular inventions. And it is the fate of our spectacular inventions that grieves me as much as anything. If we vanish, the laws of physics will still pertain – stark, elegant, beautiful – but who will be around to crack their cryptic codes? All those portals to story, to fantasy, to music, to dreamscape: slammed shut forever. Nature will prevail; it is our fictions I fear for.

For the most part, Ord writes with a relentless, almost affectless rationality, as if the book itself were generated by AI. Nobody could accuse him of hysteria, as he sanguinely totes up the probabilities of our premature demise. In fact, his level tone is a source of reassurance: in these overwrought times, it is a relief that some humans, at least, are capable of looking objectively at the stakes. But while his narrative is rooted in scientific rigour, its futurism also lends it a heady, speculative aura. In the final chapter, Ord's scope becomes cinematic: he pans outward, situating humanity's future in deep space and time, and painting our future as limitless. It is thrilling stuff, and clearly designed to inspire (rather than induce learned helplessness, an occupational hazard of the study of existential risk).

Ord's message is clear: "Safeguarding humanity through these dangers should be a central priority of our time." And it certainly makes one ponder one's use of time (all those hours practising the piano). But, like Peter Singer, Ord is also a pragmatist, offering global suggestions – such as enhancing the World Health Organization's capacity to respond to pandemics, and introducing prohibitions against unnecessary extinction risk in international law – as well as suggestions for the individual. The latter include making strategic career choices, donating to relevant charities, and instigating conversations with one's friends and children (maybe reviewing his book in a magazine?). "Don't be fanatical," he advises. "Boring others with endless talk about this cause is counterproductive."

Still, you should read this book. It may be bracing, but it does offer some hope. And that hope rises incrementally with everybody who opens its covers and continues the conversation.
