AI’s recent advances could change everything
In 2018, Sundar Pichai, the CEO of Google — and not one of the tech executives known for overstatement — said, “AI is probably the most important thing humanity has ever worked on. I think of it as something more profound than electricity or fire”.
Try to live, for a few minutes, in the possibility that he’s right. There is no more profound human bias than the expectation that tomorrow will be like today. It is a powerful heuristic because it is almost always correct. Tomorrow probably will be like today. Next year probably will be like this year. But cast your gaze 10 or 20 years out. For most of human history, you could still expect the world at that distance to look familiar. I don’t think you can now.
Artificial intelligence is a loose term, and I mean it loosely. I am describing not the soul of intelligence, but the texture of a world populated by ChatGPT-like programs that feel to us as if they were intelligent, and that shape or govern much of our lives. Such systems are, to a large extent, already here. But what’s coming will make them look like toys. What is hardest to appreciate in AI is the improvement curve.
“The broader intellectual world seems to wildly overestimate how long it will take AI systems to go from ‘large impact on the world’ to ‘unrecognisably transformed world,’” Paul Christiano, who led alignment research at OpenAI before leaving to found the Alignment Research Center, wrote last year. “This is more likely to be years than decades, and there’s a real chance that it’s months.”
Perhaps the developers will hit a wall they do not expect. But what if they don’t?
Since moving to the Bay Area in 2018, I have tried to spend time regularly with the people working on AI. I don’t know that I can convey just how weird that culture is. And I don’t mean that dismissively; I mean it descriptively. It is a community that is living with an altered sense of time and consequence. They are creating a power that they do not understand at a pace they often cannot believe.
In a 2022 survey, AI experts were asked, “What probability do you put on human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species?” The median reply was 10%.
I find that hard to fathom, even though I have spoken to many who put that probability even higher. Would you work on a technology you thought had a 10% chance of wiping out humanity?
We typically reach for science fiction stories when thinking about AI. I’ve come to believe the apt metaphors lurk in fantasy novels and occult texts. As my colleague Ross Douthat wrote, this is an act of summoning. The coders casting these spells have no idea what will stumble through the portal. What is oddest, in my conversations with them, is that they speak of this freely. These are not naifs who believe their call can be heard only by angels. They believe they might summon demons. They are calling anyway.
I often ask them the same question: If you think calamity is so possible, why do this at all? Different people have different things to say, but after a few pushes, I find they often answer from something that sounds like the AI’s perspective. Many — not all, but enough that I feel comfortable in this characterisation — feel that they have a responsibility to usher this new form of intelligence into the world.
Could these systems usher in a new era of scientific progress? In 2021, a system built by DeepMind managed to predict the 3D structure of tens of thousands of proteins, an advance so remarkable that the editors of the journal Science named it their breakthrough of the year. Will AI populate our world with nonhuman companions and personalities that become our friends and our enemies and our assistants and our gurus and perhaps even our lovers? “Within two months of downloading Replika, Denise Valenciano, a 30-year-old woman in San Diego, left her boyfriend and is now ‘happily retired from human relationships’”, New York Magazine reports.
Could AI put millions out of work? Automation already has, again and again. Could it help terrorists or antagonistic states develop lethal weapons and crippling cyberattacks? These systems will already offer guidance on building biological weapons if you ask them cleverly enough. Could it end up controlling critical social processes or public infrastructure in ways we don’t understand and may not like? AI is already being used for predictive policing and judicial sentencing.
But I don’t think these laundry lists of the obvious do much to prepare us. We can plan for what we can predict (though it is telling that, for the most part, we haven’t). What’s coming will be weirder. I use that term here in a specific way. In his book High Weirdness, Erik Davis, the historian of Californian counterculture, describes weird things as “anomalous — they deviate from the norms of informed expectation and challenge established explanations, sometimes quite radically”. That is the world we’re building.
If we had eons to adjust, perhaps we could do so cleanly. But we do not. The major tech companies are in a race for AI dominance. The US and China are in a race for AI dominance. Money is gushing toward companies with AI expertise. To suggest we go slower, or even stop entirely, has come to seem childish. If one company slows down, another will speed up. If one country hits pause, the others will push harder. Fatalism becomes the handmaiden of inevitability, and inevitability becomes the justification for acceleration.
Katja Grace, an AI safety researcher, summed up this illogic pithily. Slowing down “would involve coordinating numerous people — we may be arrogant enough to think that we might build a god-machine that can take over the world and remake it as a paradise, but we aren’t delusional”.
One of two things must happen. Either humanity accelerates its adaptation to these technologies, or a collective, enforceable decision is made to slow their development. Even doing both may not be enough.
What we cannot do is put these systems out of our mind, mistaking the feeling of normalcy for the fact of it. I recognise that entertaining these possibilities feels a little, yes, weird. It feels that way to me, too. Scepticism is more comfortable. But something Davis writes rings true to me: “In the court of the mind, scepticism makes a great grand vizier, but a lousy lord.”