The New York Review of Books


Jim Holt


The Precipice: Existential Risk and the Future of Humanity
by Toby Ord.
Hachette, 468 pp., $30.00; $18.99 (paper; to be published in March)

T. S. Eliot, in his 1944 essay “What Is a Classic?,” complained that a new kind of provincialism was becoming apparent in our culture: “a provincialism, not of space, but of time.” What Eliot had in mind was provincialism about the past: a failure to think of dead generations as fully real. But one can also be guilty of provincialism about the future: a failure to imagine the generations that will come after us, to take seriously our responsibilities toward them.

In 1945, not long after Eliot wrote that essay, the first atomic bomb was exploded. This made the matter of provincialism about the future all the more acute. Now, seemingly, humanity had acquired the power to abolish its own future. A decade later Bertrand Russell and Albert Einstein issued a joint manifesto warning that nuclear weaponry posed the risk of imminent human extinction, of “universal death.” (In a letter to Einstein, Russell also predicted that the same threat would eventually be posed by biological warfare.)

By the early 1980s, more precise ideas were being put forward about how this could occur. In 1982 Jonathan Schell, in a much-discussed series of articles in The New Yorker (later published as a book, The Fate of the Earth), argued that nuclear war might well result in the destruction of the ozone layer, making it impossible for human life to survive on earth. In 1983 Carl Sagan and four scientific colleagues introduced the “nuclear winter” hypothesis, according to which firestorms created by a nuclear exchange, even a limited one, would darken the upper atmosphere for years, causing global crop failures, universal famine, and human extinction—an alarming scenario that helped move Ronald Reagan and Mikhail Gorbachev to negotiate reductions in their countries’ nuclear arsenals. Neither Schell nor Sagan was a philosopher. Yet each raised a philosophical point: with the advent of nuclear weapons and other dangerous new technologies, we ran the risk not only of killing off all humans alive today, but also of depriving innumerable generations of the chance to exist. Humanity’s past has been relatively brief: some 300,000 years as a species, a few thousand years of civilization. Its potential future, by contrast, could extend for millions or billions of years, encompassing many trillions of sentient, rational beings yet to be born. It was this future—the adulthood of humanity—that was now in jeopardy. “If our species does destroy itself,” Schell wrote, “it will be a death in the cradle—a case of infant mortality.”

The idea that potential future lives as well as actual ones must be weighed in our moral calculus was soon taken up by professional philosophers. In 1984 Derek Parfit published his immensely influential treatise Reasons and Persons, which, in addition to exploring issues of rationality and personal identity with consummate subtlety, also launched a new (and currently flourishing) field of moral philosophy known as “population ethics.”1 At its core is this question: How ought we to act when the consequences of our actions will affect not only the well-being of future people but their very existence?

It was on the final pages of Reasons and Persons that Parfit posed an arresting hypothetical. Consider, he said, three scenarios:

(1) World peace.

(2) A nuclear war that kills 99 percent of the world’s population.

(3) A nuclear war that kills 100 percent of the world’s population.

Clearly, he observed, (2) is worse than (1), and (3) is worse than (2). But which is the greater of the two moral differences? Most people, Parfit guessed, would say the difference between (1) and (2) is greater than the difference between (2) and (3). He disagreed. “I believe that the difference between (2) and (3) is very much greater,” he wrote. Killing off that last one percent, he observed, would mean destroying the entire future of humanity—an inconceivably vast reduction in the sum of possible human happiness.

Toby Ord, the author of The Precipice, studied at Oxford under Parfit (who died in 2017) and calls him his “mentor.” Today Ord too is a philosopher at Oxford and among the most prominent figures who think deeply and systematically about existential risks to humanity.2 Ord is a model of the engaged thinker. In addition to his academic work in applied ethics, he has advised the World Health Organization, the World Bank, and the British government on issues of global health and poverty. He helped start the “effective altruism” movement and founded the organization Giving What We Can, whose members have pledged more than $2 billion to “effective charities.” (Their donations to charities that distribute malaria nets have already saved more than two thousand lives.) The society’s members are governed by a pledge to dedicate at least a tenth of what they earn to the relief of human suffering, which grew out of a personal commitment that Ord had made. He has now made a further pledge to limit his personal spending to £18,000 a year and give away the rest. And he tells us that he has “signed over the entire advance and royalties from this book to charities helping protect the long-term future of humanity.”

1. This field is sometimes also called “population axiology,” from the Greek word for “value,” axía.

2. Others include Nick Bostrom, who directs the Future of Humanity Institute at Oxford (and who was profiled in The New Yorker in 2015); Martin Rees, Britain’s astronomer royal and the author of Our Final Hour (2003); and John Leslie, a Canadian philosopher whose book The End of the World (1996) furnished the first analytical survey of the full range of human-extinction possibilities.

Ord is, in short, an admirable man. And The Precipice is in many ways an admirable book. In some 250 brisk pages, followed by another 200 or so pages of notes and technical appendices, he gives a comprehensive and highly readable account of the evidence bearing on various human extinction scenarios. He tells harrowing stories of how humanity has courted catastrophe in the past—nuclear close calls, deadly pathogens escaping labs, and so forth. He wields probabilities in a cogent and often counterintuitive manner. He surveys current philosophical thinking about the future of humanity and addresses issues of “cosmic significance” with a light touch. And he lays out an ambitious three-step “grand strategy” for ensuring humanity’s flourishing into the deep future—a future that, he thinks, may see our descendants colonizing entire galaxies and exploring “possible experiences and modes of thought beyond our present understanding.”

These are among the virtues of The Precipice. Against them, however, must be set two weaknesses, one philosophical, the other analytical. The philosophical one has to do with the case Ord makes for why we should care about the long-term future of humanity—a case that strikes me as incomplete. Ord confesses that as a younger man he “sometimes took comfort in the idea that perhaps the outright destruction of humanity would not be bad at all,” since merely possible people cannot suffer if they never come into existence. His reasons for changing his mind—for deciding that safeguarding humanity’s future “could well be our most important duty”—turn out to be a mixture of classical utilitarian and “ideal goods”–based considerations that will be familiar to philosophers. But he fails to take full account of why the future disappearance of humanity should matter to us, the living, in the here and now; why we should be motivated to make sacrifices today for potential future people who, if we don’t make those sacrifices, won’t even exist. From this philosophical weakness, which involves a why question, stems an analytical weakness, which involves a how much question: How much should we be willing to sacrifice today in order to ensure humanity’s long-term future? Ord is ethically opposed to the economic practice of “discounting,” which is a way of quantitatively shrinking the importance of the far future. I’m with him there. But this leaves him with a difficulty that he does not quite acknowledge. If we are obliged to weigh the full (undiscounted) value of humanity’s potential future in making our decisions today, we are threatened with becoming moral slaves to that future. We will find it our duty to make enormous sacrifices for merely potential people who might exist millions of years from now, while scanting the welfare of actual people over the next few centuries. And the mathematics of this, as we shall see, turn out to be perverse: the more we sacrifice, the more we become obliged to sacrifice.

This is not merely a theoretical problem. It leads to a distorted picture of how we should distribute our present moral concerns, suggesting that we should be relatively less worried about real and ongoing developments that will gravely harm humanity without wiping it out completely (like climate change), and relatively more worried about notional threats that, however unlikely, could conceivably result in human extinction (like rogue AI). Ord does not say this explicitly, but it is implied by his way of thinking. And it should give us pause.

What is the likelihood that humanity will survive even the present century? In 1980 Sagan estimated that the chance of human extinction over the next hundred years was 60 percent—meaning that humanity had less than even odds of making it beyond 2080. A careful risk analysis, however, suggests that his estimate was far too pessimistic. Ord’s accounting puts the existential risk faced by humanity in the current century at about one in six: much better, but still the same odds as Russian roulette. He arrives at this estimate by surveying the “risk landscape,” whose hills and peaks represent the probabilities of all the various threats to humanity’s future. This landscape turns out to have some surprising features.

What apocalyptic scenario looms largest in your mind? Do you imagine the world ending as the result of an asteroid impact or a stellar explosion? In a nuclear holocaust or a global plague caused by biowarfare? The former possibilities fall under the category of “natural” risks, the latter under “anthropogenic” (human-caused) risks. Natural risks have always been with us: ask the dinosaurs. Anthropogenic risks, by contrast, are of relatively recent vintage, dating from the beginning of the atomic era in 1945. That, Ord says, was when “our rapidly accelerating technological power finally reached the threshold where we might be able to destroy ourselves,” as Einstein and Russell warned at the time.

Which category, natural or anthropogenic, poses the greater threat to humanity’s future? Here it is not even close. By Ord’s reckoning, the total anthropogenic risk over the next century is a thousand times greater than the total natural risk. In other words, humanity is far more likely to commit suicide than to be killed off by nature. It has thus entered a new age of unsustainably heightened risk, what Ord calls “the Precipice.”

We know that the extinction risk posed by natural causes is relatively low because we have plenty of actuarial data. Humans have been around for about three thousand centuries. If there were a sizable per-century risk of our perishing because of a nearby star exploding, or an asteroid slamming into the earth, or a supervolcanic eruption blackening the sky and freezing the planet, we would have departed the scene a long time ago. So, with a little straightforward math, we can conclude that the total risk of our extinction by natural causes over the next century is no more than one in 10,000. (In fact, nearly all of that risk is posed by the supervolcanic scenario, which is less predictable than an asteroid impact or stellar explosion.) If natural risks were all that we had to worry about, Homo sapiens could expect to survive on earth for another million years—which, not coincidentally, is the longevity of a typical mammalian species.
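The “straightforward math” can be sketched as follows, on the simplifying assumption (mine, for illustration, not spelled out in the text) of a constant per-century natural extinction risk r. Having already survived some 3,000 centuries is easy to square with a risk of one in 10,000 per century, and very hard to square with anything much larger; and a risk of one in 10,000 implies an expected survival time of about a million more years:

\[
\Pr(\text{surviving } n \text{ centuries}) = (1-r)^{n}, \qquad \mathbb{E}[\text{centuries until extinction}] = \frac{1}{r}.
\]
\[
r = \tfrac{1}{10{,}000}: \quad (1-r)^{3000} \approx e^{-0.3} \approx 0.74, \qquad \tfrac{1}{r} = 10{,}000 \text{ centuries} \approx 1 \text{ million years};
\]
\[
r = \tfrac{1}{100}: \quad (1-r)^{3000} \approx e^{-30} \approx 10^{-13}.
\]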

Over, then, to the anthropogenic category. Here, as Ord observes, we have hardly any data for calculating risks. So far, we’ve survived the industrial era for a mere 260 years and the nuclear era for 75. That doesn’t tell us much, from a statistical point of view, about whether we’ll get through even the next century. So we have to rely on scientific reasoning. And such reasoning suggests that the greatest human-made dangers to our survival are not what you might think.

Start with the seemingly most obvious one: nuclear war. How could that result in the absolute extinction of humanity? It is often claimed that there are enough nuclear weapons in the world today to kill off all humans many times over. But this, as Ord observes, is “loose talk.” It arises from naively extrapolating from the destruction visited on Hiroshima. That bomb killed 140,000 people. Today’s nuclear arsenal is equivalent to 200,000 Hiroshima bombs. Multiply these two numbers, and you get a death toll from an all-out nuclear war of 30 billion people—about four times the world’s current population. Hence the “many times over” claim. Ord points out that this calculation makes a couple of big mistakes. First, the world’s population, unlike Hiroshima’s, is not densely concentrated but spread out over a wide land area. There are not nearly enough nuclear weapons to hit every city, town, and village on earth. Second, today’s bigger nuclear bombs are less efficient at killing than the Hiroshima bomb was.3 A reasonable estimate for the death toll arising from the local effects of a full-scale nuclear war—explosions and firestorms in large cities—is 250 million: unspeakable, but a long way from the absolute extinction that is Ord’s primary worry.
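In rough numbers, the naive claim is just a multiplication, and the efficiency point follows from the scaling rule cited in the footnote below: the area a warhead can destroy grows only as the two-thirds power of its yield, so a given total megatonnage kills over less area when packed into fewer, larger bombs. (The particular bomb sizes here are hypothetical, chosen only to illustrate the scaling.)

\[
140{,}000 \text{ deaths} \times 200{,}000 \text{ Hiroshima-equivalents} = 2.8 \times 10^{10}, \text{ i.e., roughly the 30 billion quoted above;}
\]
\[
\text{lethal radius} \propto Y^{1/3} \;\Rightarrow\; \text{lethal area} \propto Y^{2/3}, \qquad \frac{1000^{2/3}}{10 \times 100^{2/3}} \approx \frac{100}{215} \approx 0.47,
\]

so one 1,000-kiloton bomb covers less than half the lethal area of ten 100-kiloton bombs of the same combined yield.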

That leaves the global effects of nuclear war to consider. Fallout? Spreading deadly radiation across the entire surface of the earth would require a nuclear arsenal ten times the size of the current one. Destruction of the ozone layer? This was the danger cited by Schell in The Fate of the Earth, but the underlying theory has not held up. Nuclear winter? Here lies the greatest threat, and it is one that Ord examines in fascinating (if depressing) detail, before coming to the conclusion that “nuclear winter appears unlikely to lead to our extinction.” As for the chance that it would lead merely to the unrecoverable collapse of civilization—another form of “existential catastrophe”—he observes that New Zealand at least, owing to its coastal location, would likely survive nuclear winter “with most of their technology (and institutions) intact.” A cheerful thought.

All told, Ord puts the existential risk posed by nuclear war over the next century at one in one thousand, a relatively small peak in the risk landscape.3

3. The blast damage scales up as the two-thirds power of the bomb’s kilotonnage—a fun fact for those who, like Herman Kahn, enjoy thinking about the unthinkable.

So whence the rest of the one-in-six risk figure he arrives at? Climate change? Could global warming cause unrecoverable collapse or even human extinction? Here too, Ord’s prognosis, though dire, is not so dire as you might expect. On our present course, climate change will wreak global havoc for generations and drive many nonhuman species to extinction. But it is unlikely to wipe out humanity entirely. Even in the extreme case where global temperatures rise by as much as 20 degrees centigrade, there will still be enough habitable land mass, fresh water, and agricultural output to sustain at least a miserable remnant of us.

There is, however, at least one scenario in which climate change might indeed spell the end of human life and civilization. Called the “runaway greenhouse effect,” this could arise—in theory—from an amplifying feedback loop in which heat generates water vapor (a potent greenhouse gas) and water vapor in turn traps heat. Such a feedback loop might raise the earth’s temperature by hundreds of degrees, boiling off all the oceans. (“Something like this probably happened on Venus,” Ord tells us.) The runaway greenhouse effect would be fatal to most life on earth, including humans. But is it likely? Evidence from past geological eras, when the carbon content of the atmosphere was much higher than it is today, suggests not. In Ord’s summation, “It is probably physically impossible for our actions to produce the catastrophe—but we aren’t sure.”

So he puts the chance of existential doom from climate change over the next century at one in one thousand—not quite negligible, but still a comparatively small peak in the risk landscape. He assigns similarly modest odds to our being doomed by other types of environmental damage, like resource depletion or loss of biodiversity. (For me, one of the saddest bits in the book is the claim that humans could survive the extinction of honeybees and other pollinators, whose disappearance “would only create a 3 to 8 percent reduction in global crop production.” What a world.)

If neither nuclear war nor environmental collapse accounts for the Russian roulette–level threat of doom we supposedly face over the next century, then what does? In Ord’s analysis, the tallest peaks in the existential risk landscape turn out to be “unaligned artificial intelligence” and “engineered pandemics.”

Start with the lesser of the two: pandemic risk. Natural pandemics have occurred throughout the existence of the human species, but they have not caused our extinction. The worst of them, at least in recorded history, was the Black Death, which came to Europe in 1347 and killed between one quarter and one half of its inhabitants. (It also ravaged the Middle East and Asia.) The Black Death “may have been the greatest catastrophe humanity has seen,” Ord observes. Yet by the sixteenth century Europe had recovered. In modern times, such “natural” pandemics are, because of human activities, in some ways more dangerous: our unwholesome farming practices make it easy for diseases to jump from animals to humans, and jet travel spreads pathogens across the globe.

Still, the fossil record suggests that there is only a tiny per-century chance that a natural pandemic could result in universal death: about one in 10,000, Ord estimates.

Factor in human mischief, though, and the odds shorten drastically. Thanks to biotechnology, we now have the power to create deadly new pathogens and to resurrect old ones in more lethal and contagious forms. As Ord observes, this power will only grow in the future. What makes biotech especially dangerous is its rapid “democratization.” Today, “online DNA synthesis services allow anyone to upload a DNA sequence of their choice then have it constructed and shipped to their address.” A pandemic that would wipe out all human life might be deliberately engineered by “bad actors” with malign intent (like the Aum Shinrikyo cult in Japan, dedicated to the destruction of humanity). Or it might result from well-intentioned research gone awry (as in 1995 when Australian scientists released a virus that unexpectedly killed 30 million rabbits in just a few weeks). Between bioterror and bio-error, Ord puts the existential risk from an “engineered pandemic” at one in thirty: a major summit in the risk landscape.

That leaves what Ord deems the greatest of all existential threats over the next century: artificial intelligence. And he is hardly eccentric in this judgment. Fears about the destructive potential of AI have been raised by figures like Elon Musk, Bill Gates, Marvin Minsky, and Stephen Hawking.4

4. There are also prominent skeptics—like Mark Zuckerberg, who has called Musk “hysterical” for making so much of the alleged dangers of AI.

How might AI grow potent enough to bring about our doom? It would happen in three stages. First, AI becomes able to learn on its own, without expert programming. This stage has already arrived, as was demonstrated in 2017 when the AI company DeepMind created a neural network that learned to play Kasparov-level chess on its own in just a few hours. Next, AI goes broad as well as deep, rivaling human intelligence not just in specialized skills like chess but in the full range of cognitive domains. Making the transition from specialized AI to AGI—artificial general intelligence—is the focus of much cutting-edge research today. Finally, AI comes not just to rival but to exceed human intelligence—a development that, according to a 2016 survey of three hundred top AI researchers, has a fifty-fifty chance of occurring within four decades, and a 10 percent chance of occurring in the next five years.

But why should we fear that these ultra-intelligent machines, assuming they do emerge, will go rogue on us? Won’t they be programmed to serve our interests? That, as it turns out, is precisely the problem. As Ord puts it, “Our values are too complex and subtle to specify by hand.” No matter how careful we are in drawing up the machine’s “reward function”—the rule-like algorithm that steers its behavior—its actions are bound to diverge from what we really want. Getting AI in sync with human values is called the “alignment problem,” and it may be an insuperable one. Nor have AI researchers figured out how to make a system that, when it notices that it’s misaligned in this way, updates its values to coincide with ours instead of ruthlessly optimizing its existing reward function (and cleverly circumventing any attempt to shut it down). What would you command the superintelligent AI system to do? “Maximize human happiness,” perhaps? The catastrophic result could be something like what Goethe imagined in “The Sorcerer’s Apprentice.” And AI wouldn’t need an army of robots to seize absolute power. It could do so by manipulating humans to do its destructive bidding, the way Hitler, Stalin, and Genghis Khan did.

“The case for existential risk from AI is clearly speculative,” Ord concedes. “Indeed, it is the most speculative case for a major risk in this book.” But the danger that AI, in its coming superintelligent and misaligned form, could wrest control from humanity is taken so seriously by leading researchers that Ord puts the chance of its happening at one in ten: by far the highest peak in his risk landscape.5 Add in some smaller peaks for less well understood risks (nanotechnology, high-energy physics experiments, attempts to signal possibly hostile extraterrestrials) and utterly unforeseen technologies just over the horizon—what might be called the “unknown unknowns”—and the putative risk landscape is complete.

5. As far as I can tell, he arrives at this one-in-ten number by assuming, in broad agreement with the AI community, that the chance of AI surpassing human intelligence in the next century is 50 percent, and then multiplying this number by the probability that the resulting misalignment will prove catastrophic, which he seems to put at one in five.
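Spelled out, the arithmetic in that note is a product of two probabilities, the second of which is inferred rather than stated outright:

\[
\Pr(\text{AI catastrophe this century}) \approx \Pr(\text{superhuman AI}) \times \Pr(\text{catastrophic misalignment} \mid \text{superhuman AI}) = 0.5 \times 0.2 = 0.1 .
\]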

So what is to be done? The practical proposals Ord lays out for mitigating existential risk—greater vigilance, more research into safer technologies, strengthening international institutions—are well thought out and eminently reasonable. Nor would they be terribly expensive to implement. We currently spend less than a thousandth of a percent of gross world product on staving off technological self-destruction—not even a hundredth of what we spend on ice cream. Just raising our expenditure to the ice cream threshold, as Ord suggests, would go far in safeguarding humanity’s long-term potential.

But let’s consider a more theoretical issue: How much should we be willing to pay in principle to ensure humanity’s future? Ord does not explicitly address this question. Yet his way of thinking about the value of humanity’s future puts us on a slippery slope to a preposterous answer.

Start, as he does, with a simplifying assumption: that the value of a century of human civilization can be captured by some number V. To make things easy, we’ll pretend that V is constant from century to century. (V might be taken to quantify a hundred years’ worth of net human happiness, or of cultural achievement, or some such.) Given this assumption, the longer humanity’s future continues, the greater its total value will be. If humanity went on forever, the value of its future would be infinite. But this is unlikely: eventually the universe will come to some sort of end, and our descendants probably won’t be able to survive that. And in each century of humanity’s existence there is some chance that our species will fail to make it to the next. In the present century, as we have seen, Ord puts that chance at one in six. Let’s suppose—again, to simplify—that this risk level remains the same in the future: a one in six risk of doom per century. Then humanity’s expected survival time would be another six centuries, and the value of its future would be V multiplied by six. That is, the expected value of humanity’s future is six times the value of the present century.
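In symbols, with a constant per-century extinction risk p and a constant per-century value V (the simplifying assumptions just made), the expected number of centuries remaining is 1/p, so:

\[
\mathbb{E}[\text{centuries remaining}] = \sum_{k=1}^{\infty} k\,p\,(1-p)^{k-1} = \frac{1}{p}, \qquad \mathbb{E}[\text{value of the future}] = \frac{V}{p} = 6V \;\text{ when } p = \tfrac{1}{6}.
\]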

Now suppose we could take actions today that would enduringly cut this existential risk in half, from one in six down to one in twelve. How would that affect the expected value of humanity’s future? The answer is that the value would double, going from 6V (the old expected value) to 12V (the new expected value). That’s a net gain of six centuries’ worth of value! So we should be willing to pay a lot, if necessary, to reduce risk in this way.
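The general pattern, in this toy model, is that expected future value is inversely proportional to the per-century risk, so each halving of the risk doubles the expected value, and the marginal payoff from further risk reduction grows as the risk shrinks:

\[
\mathbb{E}[\text{value}] = \frac{V}{p}, \qquad \frac{V}{p/2} = 2\,\frac{V}{p}, \qquad \left|\frac{d}{dp}\frac{V}{p}\right| = \frac{V}{p^{2}} \to \infty \;\text{ as } p \to 0,
\]

which is the sense in which, as noted earlier, the more we sacrifice, the more we become obliged to sacrifice.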

And the math gets worse. Suppose that we could somehow eliminate all anthropogenic risk. We might achieve this, say, by going Luddite and stamping out each and every potentially dangerous technology, seeking fulfillment instead in an Arden-like existence of foraging for nuts and berries, writing lyric poems, composing fugues, and proving theorems in pure mathematics. Then the only existential risks remaining would be the relatively tiny natural ones, which come to one in ten thousand per century. So the expected value of humanity’s future would go from 6V to 10,000V—a truly spectacular gain. How could we not be obliged to make whatever sacrifice this might entail, given the expected payoff in the increased value of humanity’s future?

Clearly there is something amiss with this reasoning. Ord would say—indeed, does say—that humanity needs risky technologies like AI if it is to flourish, so “relinquishing further technological progress is not a solution.” But the problem is more general than that. The more we do to mitigate risk, the longer humanity’s expected future becomes. And by Ord’s logic, the longer that future becomes, the more its potential value outweighs the value of the present. As we push the existential risk closer and closer to zero, expected gains in value from the very far future become ever more enormous, obliging us to make still greater expenditures to ensure their ultimate arrival. This combination of increasing marginal costs (to reduce risk) and increasing marginal returns (in future value) has no stable equilibrium point short of bankruptcy. At the limit, we should direct 100 percent of our time and energy toward protecting humanity’s long-term future against even the remotest existential threats—then wrap ourselves in bubble wrap, just to be extra safe.

When a moral theory threatens to make unlimited demands on us in this way, that is often taken by philosophers as a sign there is something wrong with it. (This is sometimes called the “argument from excessive sacrifice.”) What could be wrong with Ord’s theory? Why does it threaten to make the demands of humanity’s future on us unmanageable? Perhaps the answer is to be sought in asking just why we value that future—especially the parts of it that might unfold long after we’re gone. What might go into that hypothetical number V that we were just bandying about?

Philosophers have traditionally taken two views of this matter. On one side, there are the classical utilitarians, who hold that all value ultimately comes down to happiness. For them, we should value humanity’s future because of its potential contribution to the sum of human happiness. All those happy generations to come, spreading throughout the galaxy! Then there are the more Platonic philosophers, who believe in objective values that transcend mere happiness. For them, we should value humanity’s future because of the “ideal goods”—knowledge, beauty, justice—with which future generations might adorn the cosmos. (The term “ideal goods” comes from the nineteenth-century moral philosopher Henry Sidgwick, who had both utilitarian and Platonizing tendencies.)

Ord cites both kinds of reasons for valuing humanity’s future. He acknowledges that there are difficulties with the utilitarian account, particularly when considerations of the quantity of future people are balanced against the quality of their lives. But he seems more comfortable when he doffs his utilitarian hat and puts on a Platonic one instead. What really moves him is humanity’s promise for achievement—for exploring the entire cosmos and suffusing it with value. If we and our potential descendants are the only rational beings in the universe—a distinct possibility, so far as we know—then, he writes, “responsibility for the history of the universe is entirely on us.” Once we have reduced our existential risks enough to back off from the acute danger we’re currently in—the Precipice—he encourages us to undertake what he calls “the Long Reflection” on what is the best kind of future for humanity: a reflection that, he hopes, will “deliver a verdict that stands the test of eternity.”

Ord’s is a very moralizing case for why we should care about humanity’s future. It cites values—both utilitarian happiness and Platonic ideal goods—that might be realized many eons from now, long after we and our immediate descendants are dead. And since values do not diminish because of remoteness in time, we are obligated to take those remote values seriously in our current decision-making. We must not “discount” them just because they lie far over the temporal horizon. That is why the future of humanity weighs so heavily on us today, and why we should make the safeguarding of that future our greatest duty, elevating it in importance above all nonexistential threats—such as world poverty or climate change. Though Ord does not explicitly say that, it is the conclusion to which his reasoning seems to commit him.

As a corrective, let’s try to take a nonmoralizing view of the matter. Let’s consider reasons for caring about humanity’s future that do not depend on value-based considerations, whether of happiness or ideal goods. How would our lives today change if we knew that humanity was doomed to imminent extinction—say, a century from now? That is precisely the question that the philosopher Samuel Scheffler posed in his 2012 Tanner Lectures at Berkeley, later published in his book Death and the Afterlife.6 Suppose we discovered that the world was guaranteed to be wiped out in a hundred years’ time by a nearby supernova. Or suppose that the whole human race was suddenly rendered infertile, so that no new babies could be born.7 How would the certain prospect of humanity’s absolute extinction, not long after your own personal extinction, make you feel?

It would be “profoundly depressing”—so, at least, Scheffler plausibly maintains. And the reason is that the meaning and value of our own lives depend on their being situated in an ongoing flow of generations. Humanity’s extinction soon after we ourselves are gone would render our lives today in great measure pointless. Whether you are searching for a cure for cancer, or pursuing a scholarly or artistic project, or engaged in establishing more just institutions, a threat to the future of humanity is also a threat to the significance of what you do. True, there are some aspects of our lives—friendship, sensual pleasures, games—that would retain their value even in an imminent doomsday scenario. But our long-term, goal-oriented projects would be robbed of their point. “Most of the time, we don’t think much about the question of humanity’s survival one way or the other,” Scheffler observes:

6. Reviewed in these pages by Thomas Nagel, January 9, 2014.

7. Something of the sort threatens to happen in the P. D. James novel The Children of Men (Faber and Faber, 1992).

Toba Khedoori: Untitled (Clouds—Drawing), 2004–2005
