The Guardian Australia

Power-hungry robots, space colonization, cyborgs: inside the bizarre world of ‘longtermism’

- J Oliver Conroy

Most of us don’t think of power-hungry killer robots as an imminent threat to humanity, especially when poverty and the climate crisis are already ravaging the Earth.

This wasn’t the case for Sam Bankman-Fried and his followers, powerful actors who have embraced a school of thought within the effective altruism movement called “longtermism”.

In February, the Future Fund, a philanthropic organization endowed by the now-disgraced cryptocurrency entrepreneur, announced that it would be disbursing more than $100m – and possibly up to $1bn – this year on projects to “improve humanity’s long-term prospects”.

The cryptic phrasing might have puzzled those who think of philanthropy as funding homelessness charities and medical NGOs in the developing world. In fact, the Future Fund’s particular areas of interest include artificial intelligence, biological weapons and “space governance”, a mysterious term referring to settling humans in space as a potential “watershed moment in human history”.

Out-of-control artificial intelligence was another area of concern for Bankman-Fried – so much so that in September the Future Fund announced prizes of up to $1.5m to anyone who could make a persuasive estimate of the threat that unrestrained AI might pose to humanity.

“We think artificial intelligence is the development most likely to dramatically alter the trajectory of humanity this century,” the Future Fund said. “With the help of advanced AI, we could make enormous progress toward ending global poverty, animal suffering, early death and debilitating disease.” But AI could also “acquire undesirable objectives and pursue power in unintended ways, causing humans to lose all or most of their influence over the future”.

Less than two months after the contest was announced, Bankman-Fried’s $32bn cryptocurrency empire had collapsed, much of the Future Fund’s senior leadership had resigned and its AI prizes may never be awarded.

Nor will most of the millions of dollars that Bankman-Fried had promised a constellation of charities and thinktanks affiliated with effective altruism, a once-obscure ethical movement that has become influential in Silicon Valley and the highest echelons of the international business and political worlds.

•••

Longtermists argue that the welfare of future humans is as morally important as – or more important than – the lives of current ones, and that philanthropic resources should be allocated to predicting, and defending against, extinction-level threats to humanity.

But rather than giving out malaria nets or digging wells, longtermists prefer to allocate money to researching existential risk, or “x-risk”.

In his recent book What We Owe the Future, William MacAskill – a 35-year-old moral philosopher at Oxford who has become the public intellectual face of effective altruism – makes a case for longtermism with a thought experiment about a hiker who accidentally shatters a glass bottle on a trail. A conscientious person, he holds, would immediately clean up the glass to avoid injuring the next hiker – whether that person comes in a week or in a century.

Similarly, MacAskill argues that the number of potential future humans, over the many generations the species may last, far exceeds the number currently alive; if we truly believe that all humans are equal, protecting future humans is more important than protecting human lives today.

Some of longtermists’ funding interests, such as nuclear nonproliferation and vaccine development, are fairly uncontroversial. Others are more outlandish: investing in space colonization, preventing the rise of power-hungry AI, cheating death through “life-extension” technology. A bundle of ideas known as “transhumanism” seeks to upgrade humanity by creating digital versions of humans, “bioengineering” human-machine cyborgs and the like.

People like the futurist Ray Kurzweil and his adherents believe that biotechnology will soon “enable a union between humans and genuinely intelligent computers and AI systems”, Robin McKie explained in the Guardian in 2018. “The resulting human-machine mind will become free to roam a universe of its own creation, uploading itself at will onto a ‘suitably powerful computational substrate’,” and thereby creating a kind of immortality.

•••

This feverish techno-utopianism distracts funders from pressing problems that already exist here on Earth, said Luke Kemp, a research associate at the University of Cambridge’s Centre for the Study of Existential Risk who describes himself as an “EA-adjacent” critic of effective altruism. Left on the table, he says, are critical and credible threats that are happening right now, such as the climate crisis, natural pandemics and economic inequality.

“The things they push tend to be things that Silicon Valley likes,” Kemp said. They’re the kinds of speculative, futurist ideas that tech billionaires find intellectually exciting. “And they almost always focus on technological fixes” to human problems “rather than political or social ones”.

There are other objections. For one thing, lavishly expensive, experimental bioengineering would be accessible, especially initially, to “only a tiny sliver of humanity”, Kemp said; it could bring about a future caste system in which inequality is not only economic, but biological.

This thinking is also dangerously undemocratic, he argued. “These big decisions about the future of humanity should be decided by humanity. Not by just a couple of white male philosophers at Oxford funded by billionaires. It is literally the most powerful, and least representative, strata of society imposing a particular vision of the future which suits them.”

Kemp added: “I don’t think EAs – or at least the EA leadership – care very much about democracy.” In its more dogmatic varieties, he said, longtermism is preoccupied with “rationality, hardcore utilitarianism, a pathological obsession with quantification and neoliberal economics”.

Organizations such as 80,000 Hours, a program for early-career professionals, tend to encourage would-be effective altruists into four main areas, Kemp said: AI research, research preparing for human-made pandemics, EA community-building and “global priorities research”, meaning the question of how funding should be allocated.

The first two areas, though worthy of study, are “highly speculative”, Kemp said, and the latter two are “self-serving”, since they channel money and energy back into the movement.

This year, the Future Fund reports having recommended grants to worthy-seeming projects as various as research on “the feasibility of inactivating viruses via electromagnetic radiation” ($140,000); a project connecting children in India with online science, technology, engineering and mathematics education ($200,000); research on “disease-neutralizing therapeutic antibodies” ($1.55m); and research on childhood lead exposure ($400,000).

But much of the Future Fund’s largesse seems to have been invested in longtermism itself. It recommended $1.2m to the Global Priorities Institute; $3.9m to the Long Term Future Fund; $2.9m to create a “longtermist coworking office in London”; $3.9m to create a “longtermist coworking space in Berkeley”; $700,000 to the Legal Priorities Project, a “longtermist legal research and field-building organization”; $13.9m to the Centre for Effective Altruism; and $15m to Longview Philanthropy to execute “independent grantmaking on global priorities research, nuclear weapons policy, and other longtermist issues.”

Kemp argued that effective altruism and longtermism often seem to be working toward a kind of regulatory capture. “The long-term strategy is getting EAs and EA ideas into places like the Pentagon, the White House, the British government and the UN” to influence public policy, he said.

There may be a silver lining in the timing of Bankman-Fried’s downfall. “In a way, it’s good that it happened now rather than later,” Kemp said. “He was planning on spending huge amounts of money on elections. At one stage, he said he was planning to spend up to a billion dollars, which would have made him the biggest donor in US political history. Can you imagine if that amount of money contributed to a Democratic victory – and then turned out to have been based on fraud? In an already fragile and polarized society like the US? That would have been horrendous.”

•••

“The main tension to the movement, as I see it, is one that many movements deal with,” said Benjamin Soskis, a historian of philanthropy and a senior research associate at the Urban Institute. “A movement that was primarily fueled by regular people – and their passions, and interests, and different kinds of provenance – attracted a number of very wealthy funders,” and came to be driven by “the funding decisions, and sometimes just the public identities, of people like SBF and Elon Musk and a few others”. (Soskis noted that he has received funding from Open Philanthropy, an EA-affiliated foundation.)

Effective altruism put Bankman-Fried, who lived in a luxury compound in the Bahamas, “on a pedestal, as this Corolla-driving, beanbag-sleeping, earning-to-give monk, which was clearly false”, Kemp said.

Soskis thinks that effective altruism has a natural appeal to people in tech and finance – who tend to have an analytical and calculating way of thinking about problems – and EA, like all movements, spreads through social and work networks.

Effective altruism is also attractive to wealthy people, Soskis believes, because it offers “a way to understand the marginal value of additional dollars”, particularly when talking of “vast sums that can defy comprehension”. The movement’s focus on numbers (“shut up and multiply”) helps hyper-wealthy people understand more concretely what $500m can do philanthropically versus, say, $500,000 or $50,000.

One positive outcome, he thinks, is that EA-influenced donors publicly discuss their philanthropic commitments and encourage others to make them. Historically, Americans have tended to regard philanthropy as a private matter.

But there’s something “which I think you can’t escape”, Soskis said. Effective altruism “isn’t premised on a strong critique of the way that money has been made. And elements of it were construed as understanding capitalism more generally as a positive force, and through a kind of consequentialist calculus. To some extent, it’s a safer landing spot for folks who want to sequester their philanthropic decisions from a broader political debate about the legitimacy of certain industries or ways of making money.”

Kemp said that it is rare to hear EAs, especially longtermists, discuss issues such as democracy and inequality. “Honestly, I think that’s because it is something the donors don’t want us talking about.” Cracking down on tax avoidance, for example, would lead to major donors “losing both power and wealth”.

The downfall of Bankman-Fried’s crypto empire, which has jeopardized the Future Fund and countless other longtermist organizations, may be revealing. Longtermists believe that future existential risks to humanity can be accurately calculated – yet, as the economist Tyler Cowen recently pointed out, they couldn’t even predict the existential threat to their own flagship philanthropic organization.

There must be “soul-searching”, Soskis said. “Longtermism has a stain on it and I’m not sure when or if it will be fully removed.”

“A billionaire is a billionaire,” the journalist Anand Giridharadas wrote recently on Twitter. His 2018 book Winners Take All sharply criticized the idea that private philanthropy will solve human problems. “Stop believing in good billionaires. Start organizing toward a good society.”

Samuel Bankman-Fried, the founder and CEO of FTX, and an adopter of ‘longtermism’. Photograph: Saul Loeb/AFP/Getty Images

SpaceX’s Elon Musk gives an update on the company’s Mars rocket Starship. Musk is a proponent of longtermism. Photograph: Callaghan O’Hare/Reuters
