Bangkok Post

AI’s recent advances could change everything

By Ezra Klein ©2023 The New York Times. Ezra Klein is a columnist at The New York Times.

In 2018, Sundar Pichai, the CEO of Google — and not one of the tech executives known for overstatement — said, “AI is probably the most important thing humanity has ever worked on. I think of it as something more profound than electricity or fire”.

Try to live, for a few minutes, in the possibility that he’s right. There is no more profound human bias than the expectation that tomorrow will be like today. It is a powerful heuristic tool because it is almost always correct. Tomorrow probably will be like today. Next year probably will be like this year. But cast your gaze 10 or 20 years out. Typically, such a forecast has been possible in human history. I don’t think it is now.

Artificial intelligence is a loose term, and I mean it loosely. I am describing not the soul of intelligence, but the texture of a world populated by ChatGPT-like programs that feel to us as if they were intelligent, and that shape or govern much of our lives. Such systems are, to a large extent, already here. But what’s coming will make them look like toys. What is hardest to appreciate in AI is the improvement curve.

“The broader intellectual world seems to wildly overestimate how long it will take AI systems to go from ‘large impact on the world’ to ‘unrecognisably transformed world,’” Paul Christiano, a key member of OpenAI who left to found the Alignment Research Center, wrote last year. “This is more likely to be years than decades, and there’s a real chance that it’s months.”

Perhaps the developers will hit a wall they do not expect. But what if they don’t?

Since moving to the Bay Area in 2018, I have tried to spend time regularly with the people working on AI. I don’t know that I can convey just how weird that culture is. And I don’t mean that dismissively; I mean it descriptively. It is a community that is living with an altered sense of time and consequence. They are creating a power that they do not understand at a pace they often cannot believe.

In a 2022 survey, AI experts were asked, “What probability do you put on human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species?” The median reply was 10%.

I find that hard to fathom, even though I have spoken to many who put that probability even higher. Would you work on a technology you thought had a 10% chance of wiping out humanity?

We typically reach for science fiction stories when thinking about AI. I’ve come to believe the apt metaphors lurk in fantasy novels and occult texts. As my colleague Ross Douthat wrote, this is an act of summoning. The coders casting these spells have no idea what will stumble through the portal. What is oddest, in my conversations with them, is that they speak of this freely. These are not naifs who believe their call can be heard only by angels. They believe they might summon demons. They are calling anyway.

I often ask them the same question: If you think calamity is so possible, why do this at all? Different people have different things to say, but after a few pushes, I find they often answer from something that sounds like the AI’s perspective. Many — not all, but enough that I feel comfortable in this characterisation — feel that they have a responsibility to usher this new form of intelligence into the world.

Could these systems usher in a new era of scientific progress? In 2021, a system built by DeepMind managed to predict the 3D structure of tens of thousands of proteins, an advance so remarkable that the editors of the journal Science named it their breakthrough of the year. Will AI populate our world with nonhuman companions and personalities that become our friends and our enemies and our assistants and our gurus and perhaps even our lovers? “Within two months of downloading Replika, Denise Valenciano, a 30-year-old woman in San Diego, left her boyfriend and is now ‘happily retired from human relationships’”, New York Magazine reports.

Could AI put millions out of work? Automation already has, again and again. Could it help terrorists or antagonistic states develop lethal weapons and crippling cyberattacks? These systems will already offer guidance on building biological weapons if you ask them cleverly enough. Could it end up controlling critical social processes or public infrastructure in ways we don’t understand and may not like? AI is already being used for predictive policing and judicial sentencing.

But I don’t think these laundry lists of the obvious do much to prepare us. We can plan for what we can predict (though it is telling that, for the most part, we haven’t). What’s coming will be weirder. I use that term here in a specific way. In his book High Weirdness, Erik Davis, the historian of Californian counterculture, describes weird things as “anomalous — they deviate from the norms of informed expectation and challenge established explanations, sometimes quite radically”. That is the world we’re building.

If we had eons to adjust, perhaps we could do so cleanly. But we do not. The major tech companies are in a race for AI dominance. The US and China are in a race for AI dominance. Money is gushing toward companies with AI expertise. To suggest we go slower, or even stop entirely, has come to seem childish. If one company slows down, another will speed up. If one country hits pause, the others will push harder. Fatalism becomes the handmaiden of inevitability, and inevitability becomes the justification for acceleration.

Katja Grace, an AI safety researcher, summed up this illogic pithily. Slowing down “would involve coordinating numerous people — we may be arrogant enough to think that we might build a god-machine that can take over the world and remake it as a paradise, but we aren’t delusional”.

One of two things must happen. Humanity needs to accelerate its adaptation to these technologies, or a collective, enforceable decision must be made to slow the development of these technologies. Even doing both may not be enough.

What we cannot do is put these systems out of our mind, mistaking the feeling of normalcy for the fact of it. I recognise that entertaining these possibilities feels a little, yes, weird. It feels that way to me, too. Scepticism is more comfortable. But something Davis writes rings true to me: “In the court of the mind, scepticism makes a great grand vizier, but a lousy lord.”

‘Spambots’ by Neil Mendoza, an art exhibition at the Misalignment Museum in San Francisco, California. The show is supposed to help visitors think about the potential dangers of artificial intelligence. (Photo: AFP)
