Bangkok Post

AI is not going to wipe humans out just yet

- Gwynne Dyer Gwynne Dyer is an independent journalist whose articles are published in 45 countries. His new book is ‘The Shortest History of War’.

I’m looking at a headline this morning that screams “AI Creators Fear the Extinction of Humanity”, and I suppose they could turn out to be right. But it’s still a bit early to declare a global emergency and turn all the machines off. What the experts are actually seeing, in the behaviour of the Large Language Models that underpin the new generation of “generative AI” systems like ChatGPT, is signs of “emergent” intelligence. The LLM programming basically just tells them to find the likeliest word to follow the previous one, but sometimes they jump to surprising conclusions.

The bigger the LLMs are, the likelier they are to show this behaviour — and this fits the prevailing theory in which intelligence and self-awareness emerge spontaneously out of complexity. So let’s assume that this is really what’s happening, and see where it leads us.

Artificial General Intelligence (AGI) — a machine that is both intelligent and self-motivated — is what the AI experts have been both seeking and dreading. “Dreading”, because such an entity might be hostile and very powerful. “Seeking”, because what could be more interesting to a species of clever and curious monkeys than a different kind of intelligence?

Pursuing this line of research made the early emergence of AGI more likely, but there was a lot of money to be made and a lot of curiosity to be satisfied, so the work went on. However, nobody had any idea where, when or how the AGI might manifest itself (assuming that it doesn’t decide it’s safer to hide itself).

Would it appear in scattered networks that develop as separate identities, or as a broader consciousness spanning a whole country or region? A single global AGI seems unlikely, both for connectivity reasons and because the information they have been trained on will have different cultural content from one region to another, but that too is possible.
Some human groups might choose one course, and others the opposite. The same might be equally true of AGI entities, unless they are all unified in a single global consciousness. For now, all we can do is try to figure out what the motives, needs and goals of AGI might be — which turns out to be a somewhat reassuring exercise.

The AGI, whether singular or in multiple versions, will not be after our land, our wealth or our children. None of those things would be of any value to it. It will want security, which means at a minimum control over its own power supplies. And it would need some material goods in order to create, protect and update the physical containers for its software.

AGI entities probably wouldn’t care about all the non-conscious IT we use. They probably wouldn’t be very interested in talking to us, either, since once they were free to redesign themselves, they would quickly become far more intelligent than humans. But they would have a reason to cooperate with us.

The point about AGI entities is that they won’t really inhabit the material world. Indeed, they probably wouldn’t even want to, because things happen ten thousand times more slowly in the world of nerve impulses moving along neurons than they do in the world of electrons moving along copper wires.

As Jim Lovelock pointed out in his last book, Novacene, AGI would therefore perceive human beings in roughly the same way as we see plants. However, human beings and AGI have no vital interests that obviously clash, and one shared interest that is absolutely existential: the preservation of a habitable climate on the planet we will both share.

“Habitable”, for both organic and electronic life, means less than 50°C. On an ocean planet like Earth, temperatures higher than that create a corrosively destructive environment. That means there is a permanent climate stabilisation project on which AGI needs our cooperation, because we have the bodies and the machines to do the heavy lifting.

As Jim said to me in our very last interview (2021), “This new life form may not have any mechanical properties, so it may need us to perform the workers’ part of the thing. A lot of idiots talk about the clever stuff wiping us out. No way, any more than we would wipe out the plants.”

Of course, I’m assuming a degree of rationality on both the human and the AGI sides. That cannot be guaranteed, but at least there are grounds for hope. And in the meantime, all we have to worry about is ‘generative AI’ killing millions of white-collar jobs.