We can’t compartmentalise away our apocalyptic future
Like most people, I’m a compartmentaliser. For years I went blithely about my business — doing my work, watching movies, celebrating birthdays — while only rarely thinking about the end of the world.
But as I get older and as the threats to people and the planet grow more grave and imminent, I find it increasingly difficult to go too long without a pang of panic. It was not particularly helpful that I recently read a paper from the US National Intelligence Council talking about “existential threats” to mankind. They included “runaway artificial intelligence, engineered pandemics, nanotechnology weapons [and] nuclear war.”
These perils, as the report put it, “could damage life on a global scale.” They could mean humanity’s extinction in the relatively short term. And they’re all dangers to us, created by us.
Once, I might have brushed that realisation off and headed out to lunch. This time, I mentally added climate change to the list of potential calamities, and grew worried. William MacAskill, an Oxford University philosophy professor, recently put threats like these in their proper historical context, noting that for most of mankind’s existence, we humans didn’t have the ability to destroy ourselves, at least not entirely. Of course we were often vicious and violent, and we killed each other to the very best of our abilities. But until the mid-20th century, we didn’t have the technological wherewithal to wipe ourselves out. Then, thanks to the brilliance of our species — the same brilliance that cures diseases, erects skyscrapers and launches moon rockets — we developed the atomic bomb.
I was born in the early years of the nuclear age, only a decade after Hiroshima, when the notion of looming Armageddon was still relatively new. In my childhood, we ducked and covered beneath our school desks. Bob Dylan released “Talkin’ World War III Blues.” During the 1962 Cuban Missile Crisis, even President John F. Kennedy believed the chance of nuclear war was “between one-in-three and even.”
But those days seem almost quaint and comforting now. The apocalyptic hazards have multiplied. “A worrying number of risks conspire to threaten the end of humanity …,” writes MacAskill in the current issue of Foreign Affairs, a staid journal not known for sensationalism. “Advances in weaponry, biology and computing could spell the end of the species, either through deliberate misuse or a large-scale accident.”
“There are deadly risks over the horizon for which we are not prepared,” said Sen. Rob Portman, R-Ohio, recently as he and a Democratic colleague introduced the Global Catastrophic Risk Mitigation Act, to ensure the US is better prepared for “high consequence events, regardless of low probability.”

Shaken, I began to read up. I hadn’t focused on the dangers of runaway artificial intelligence, or worried much when Elon Musk (a known shoot-from-the-hipper) said machines would overtake humans by 2025 and constituted a “fundamental existential risk.” But it seems that plenty of other scientists, chief executives and government officials, including Bill Gates and Stephen Hawking (before he died), have also worried about whether we’re in full control of the technology we’re developing. The nightmare scenario is that machine intelligence could surpass human intelligence and turn destructive, either maliciously or by accident. It doesn’t seem imminent, and AI’s danger is often hyped or conflated with sci-fi, but it is not nonexistent either.
Of more immediate concern is climate change. It’s less dramatic, perhaps, but also harder to stop, because we’ve dithered for so long. The parade of climate horribles if emissions continue to rise unabated goes well beyond hot days, brownouts and lawn-watering restrictions. Ultimately, water scarcity and intensified heat could lead to food shortages and malnutrition, mass migrations of tens of millions of people, conflict and war from heightened competition for minerals and water, and collapsed economies.