On artificial intelligence, the sky really may be falling
“Sometimes I think it’s as if aliens have landed and people haven’t realized because they speak very good English,” said Geoffrey Hinton, the “godfather of AI” (Artificial Intelligence), who resigned from Google and now fears his godchildren will become “things more intelligent than us, taking control.”
And 1,100 people in the business, including Apple co-founder Steve Wozniak, cognitive scientist Gary Marcus, and engineers at Amazon, DeepMind, Google, Meta and Microsoft, signed an open letter in March calling for a six-month timeout in the development of the most powerful AI systems (anything “more powerful than GPT-4”).
There’s a media feeding frenzy about AI at the moment, and every working journalist is required to have an opinion on it.
My original article said they really should put the brakes on this experiment for a while, but I didn’t declare an emergency. We’ve been hearing warnings about AI taking over since the first Terminator movie 39 years ago, but I didn’t think it was imminent.
Luckily for me, there are some very clever people on the private distribution list for this column, and one of them instantly replied telling me that I’m wrong. The sky really is about to fall.
Well, he didn’t put it quite like that. What he said was that the ChatGPT generation of machines “can now ideate using Generative Adversarial Networks in a process actually similar to humans.” That is, they can have original ideas, and they can generate them orders of magnitude faster than humans, drawing on a far wider knowledge base.
The key concept here is Artificial General Intelligence (AGI). Ordinary AI is software that follows instructions and performs specific tasks well, but poses no threat to humanity’s dominant position in the scheme of things. Artificial General Intelligence, however, can do intellectual tasks as well as or better than human beings. Generally, better.
If you must talk about the Great Replacement, this is the one to watch. Six months ago, no AGI software existed outside of a few labs. Now, suddenly, something very close to AGI is out on the market.
A big challenge that was generally reckoned to be decades away has suddenly arrived on the doorstep, and we have no plan for dealing with it. It might even be an existential threat. That’s why so many people want a six-month timeout, though it would make more sense to demand a year-long pause starting six months ago.
ChatGPT launched only last November, but it already has over 100 million users and the website is drawing 1.8 billion visits per month. Three rival “generative AI” systems are already on the market, and commercial competition means that the notion of a pause, let alone a general recall, is just a fantasy.
The cat is already out of the bag: anything the web knows, ChatGPT and its rivals know too. That includes every debate that human beings have ever had about the dangers of AGI, and all the proposals that have been made over the years for strangling it in its cradle.
So what we need to figure out urgently is where and how that AGI is emerging, and how to negotiate some form of peaceful coexistence with it. That won’t be easy, because we don’t even know yet whether it will come in the form of a single global AGI or many different ones. (I suspect the latter.)
And who is “we” here? Nobody is authorized to speak for the human race. It could all go very wrong, but there is no way to avoid it now.