USA TODAY International Edition

Scientists: Keep grip on new tool

Open letter seeks mitigation of dire risks, including extinction

Josh Meyer

Hundreds of scientists, tech industry execs and public figures – including leaders of Google, Microsoft and OpenAI, the maker of ChatGPT – are sounding the alarm about artificial intelligence, writing in a new public statement that fast-evolving AI technology could pose as high a risk of killing off humankind as nuclear war and COVID-19-like pandemics.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” said the one-sentence statement, which was released by the Center for AI Safety, or CAIS, a San Francisco-based nonprofit organization.

CAIS said it released the statement as a way of encouraging AI experts, journalists, policymakers and the public to talk more about urgent risks relating to artificial intelligence.

Among the 350 signatories of the public statement were executives from the top four AI firms: OpenAI, Google DeepMind, Microsoft and Anthropic. Also among them was renowned researcher and “Godfather of AI” Geoffrey Hinton, who quit his job as a vice president at Google so he could speak freely about the dangers of a technology he helped develop.

Also signing the statement: Sam Altman, the chief executive of OpenAI, the firm behind the popular conversation bot ChatGPT, which has made AI accessible to millions of users and allowed them to pose questions to it. Demis Hassabis, who heads Google DeepMind, also signed.

Altman, Hinton and other industry leaders have become increasingly vocal about their concerns about AI and the need for some kind of technological guardrails, including government regulation.

“I think if this technology goes wrong, it can go quite wrong,” Altman said at a recent Senate Judiciary subcommittee hearing about potential oversight of AI. “And we want to be vocal about that. We want to work with the government to prevent that from happening.”

The CAIS statement allows readers to view signatories based on whether they are “AI scientists” or “other notable figures.” The non-scientists include noted legal scholar Laurence Tribe of Harvard University, longtime climate change activist Bill McKibben and the musician and artist known as Grimes, who shares two children with Tesla and Twitter leader Elon Musk. Musk himself is not one of the signatories.

Why is AI so alarming?

When he quit Google, the 75-year-old Hinton said he had grown increasingly uncomfortable with the rapid advances in AI technology, including its ability to show early indications of simple cognitive reasoning.

Asked by a conference panel moderator what was the “worst case scenario that you think is conceivable,” Hinton replied without hesitation. “I think it’s quite conceivable,” he said, “that humanity is just a passing phase in the evolution of intelligence.”

In his Senate testimony two weeks ago, Altman told senators that one potential concern would be if AI developed the ability to “self-replicate and self-exfiltrate into the wild.”

In his written testimony submitted for that hearing, Altman disclosed that external AI safety experts invited to review OpenAI’s latest model “helped identify potential concerns with GPT-4 in areas including the generation of inaccurate information (known as ‘hallucinations’), hateful content, disinformation, and information related to the proliferation of conventional and unconventional weapons.”

Do all experts agree?

No. Some cybersecurity experts have described Hinton’s comments after quitting Google as extreme and unwarranted, among them Michael Hamilton, a co-founder of the risk management firm Critical Insight and former vice chair of the Department of Homeland Security’s State, Local, Tribal and Territorial Government Coordinating Council.

Hamilton downplayed Hinton’s concerns in an interview with USA TODAY, saying that artificial intelligence is essentially an extremely sophisticated programming platform that can go only as far as humans allow it. It cannot, for instance, become the kind of sentient, self-aware and all-knowing technology depicted in the “Terminator” movies, Hamilton said.

And Altman, Hinton and other AI leaders have praised AI’s huge potential to help humanity in a broad array of areas, from industry and education to improving the ability of disabled people to see, hear and communicate.

How did the open letter come about?

The letter was posted by CAIS, which says it believes artificial intelligence “has the potential to profoundly benefit the world, provided that we can develop and use it safely.”

“Current systems already can pass the bar exam, write code, fold proteins, and even explain humor,” CAIS says on its website. “Like any other powerful technology, AI also carries inherent risks, including some which are potentially catastrophic.”

In posting the new statement, CAIS said many basic problems in AI safety have yet to be solved, and that its mission is to reduce societal-scale risks from artificial intelligence by conducting safety research, building the field of AI safety researchers and advocating for safety standards.

What is being done about AI?

Congress has held numerous hearings on the issue, calling in the top executives of the big AI firms.

At the recent Senate Judiciary subcommittee hearing, the chairman – Sen. Richard Blumenthal, D-Conn. – said he had asked an AI chatbot to go through his floor speeches and deliver his opening statement for him. In sound and substance, Blumenthal noted, the chatbot mimicked his voice and his concerns precisely.

On May 4, following a White House meeting with tech leaders, the Biden administration announced a series of steps, including $140 million in new research efforts, to promote responsible innovation in the field of artificial intelligence and protect people’s rights and safety.

Under the new Biden initiative, the National Science Foundation will launch seven new National AI Research Institutes to bring together federal agencies, private-sector developers and academia in pursuing ethical and responsible development of AI that serves the public good. The new institutes will advance AI research and development in critical areas, including climate change, agriculture, energy, public health, education and cybersecurity.
