Imperial Valley Press

AI raises risk of extinction, experts say in new warning

BY MATT O’BRIEN, AP Technology Writer

Scientists and tech industry leaders, including high-level executives at Microsoft and Google, issued a new warning Tuesday about the perils that artificial intelligence poses to humankind.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement said.

Sam Altman, CEO of ChatGPT maker OpenAI, and Geoffrey Hinton, a computer scientist known as the godfather of artificial intelligence, were among the hundreds of leading figures who signed the statement, which was posted on the Center for AI Safety’s website.

Worries about artificial intelligence systems outsmarting humans and running wild have intensified with the rise of a new generation of highly capable AI chatbots such as ChatGPT. That has sent countries around the world scrambling to come up with regulations for the developing technology, with the European Union blazing the trail with its AI Act, expected to be approved later this year.

The latest warning was intentionally succinct – just a single sentence – to encompass a broad coalition of scientists who might not agree on the most likely risks or the best solutions to prevent them, said Dan Hendrycks, executive director of the San Francisco-based nonprofit Center for AI Safety, which organized the move.

“There’s a variety of people from all top universities in various different fields who are concerned by this and think that this is a global priority,” Hendrycks said. “So we had to get people to sort of come out of the closet, so to speak, on this issue because many were sort of silently speaking among each other.”

More than 1,000 researchers and technologists, including Elon Musk, had signed a much longer letter earlier this year calling for a six-month pause on AI development, saying it poses “profound risks to society and humanity.”

That letter was a response to OpenAI’s release of a new AI model, GPT-4, but leaders at OpenAI, its partner Microsoft and rival Google didn’t sign on and rejected the call for a voluntary industry pause.

By contrast, the latest statement was endorsed by Microsoft’s chief technology and science officers, as well as Demis Hassabis, CEO of Google’s AI research lab DeepMind, and two Google executives who lead its AI policy efforts. The statement doesn’t propose specific remedies, but some, including Altman, have proposed an international regulator along the lines of the U.N. nuclear agency.

Some critics have complained that dire warnings about existential risks voiced by makers of AI have contributed to hyping up the capabilities of their products and distracting from calls for more immediate regulations to rein in their real-world problems.

Hendrycks said there’s no reason why society can’t manage the “urgent, ongoing harms” of products that generate new text or images, while also starting to address the “potential catastrophes around the corner.”

He compared it to nuclear scientists in the 1930s warning people to be careful even though “we haven’t quite developed the bomb yet.”

“Nobody is saying that GPT-4 or ChatGPT today is causing these sorts of concerns,” Hendrycks said. “We’re trying to address these risks before they happen rather than try and address catastrophes after the fact.”

The letter also was signed by experts in nuclear science, pandemics and climate change. Among the signatories is the writer Bill McKibben, who sounded the alarm on global warming in his 1989 book “The End of Nature” and warned about AI and companion technologies two decades ago in another book.

“Given our failure to heed the early warnings about climate change 35 years ago, it feels to me as if it would be smart to actually think this one through before it’s all a done deal,” he said by email Tuesday.

An academic who helped push for the letter said he used to be mocked for his concerns about AI existential risk, even as rapid advancements in machine-learning research over the past decade have exceeded many people’s expectations.

David Krueger, an assistant computer science professor at the University of Cambridge, said some of the hesitation in speaking out is that scientists don’t want to be seen as suggesting AI “consciousness or AI doing something magic,” but he said AI systems don’t need to be self-aware or setting their own goals to pose a threat to humanity.

“I’m not wedded to some particular kind of risk. I think there’s a lot of different ways for things to go badly,” Krueger said. “But I think the one that is historically the most controversial is risk of extinction, specifically by AI systems that get out of control.”

OpenAI CEO Sam Altman speaks at University College London on Wednesday as part of his world tour of speaking engagements. (AP PHOTO/ALASTAIR GRANT)
