Marysville Appeal-Democrat

Experts say AI poses ‘extinction’ level of risk

- Tribune News Service / CQ-Roll Call


WASHINGTON — Lawmakers and regulators gearing up to address risks from artificial intelligence technology got another boost this week from experts warning of potential “extinction” and calling on governments to step up regulations.

Senate Majority Leader Charles E. Schumer, D-N.Y., has said he and his staff have met with more than 100 CEOs, scientists and other experts to figure out how to draw up legislation.

The National Telecommunications and Information Administration, or NTIA, is gathering comments from industry groups and tech experts on how to design audits that can examine AI systems and ensure they’re safe for public use. And former Federal Trade Commission officials are urging the agency to use its authority over antitrust and consumer protection to regulate the sector.

More than 350 researchers, executives and engineers working on AI systems added to the urgency Tuesday in a statement released by the Center for AI Safety, a nonprofit group.

Among those who signed are Geoffrey Hinton, a top Google AI scientist until he recently resigned to warn about risks of the technology; Sam Altman, CEO of OpenAI, the company that has developed ChatGPT; and Dario Amodei, the CEO of Anthropic, a company that focuses on AI safety.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the group said.

The experts listed eight broad categories of risk posed by AI systems that digest vast quantities of information. Those systems can create text, images and video that are difficult to distinguish from human-created content.

The Center for AI Safety says AI systems can help criminals and malicious actors create chemical weapons and spread misinformation, perpetuate inequalities by helping small groups of people gain a lot of power, and deceive human overseers and seek power for themselves.

Schumer appears to have heard the message.

“We can’t move so fast that we do flawed legislation, but there’s no time for waste or delay or sitting back,” he said on the Senate floor on May 18. “We’ve got to move fast.”

Schumer’s office didn’t respond to a question about when legislation would be unveiled.

Altman himself appeared before the Senate Judiciary Committee only two days before Schumer’s floor remarks, telling lawmakers that “regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.”

But Altman, who called on U.S. lawmakers to regulate the technology, balked a week later at the European Union’s effort to do so. According to the Financial Times, he told reporters in London that he had “many concerns” about the EU’s proposed AI Act, which is still being debated.

The EU proposal would categorize AI systems into three buckets: systems that pose unacceptable risk and violate “fundamental rights,” such as government-run social scoring applications found in China and elsewhere and predictive policing tools, which would be banned; high-risk applications, such as those that scan resumes and rank job applicants, which would have to meet legal and transparency requirements; and a third category of systems that would be left unregulated.

The Financial Times reported that Altman said OpenAI’s ChatGPT could end up being classified as high-risk.

“The details really matter,” Altman said. “We will try to comply, but if we can’t comply we will cease operating,” he said, referring to potentially removing ChatGPT from the European Union, according to the newspaper.

The EU rules have yet to be finalized but are expected to go into effect in 2025.

Regulators everywhere are trying to figure out how to write rules that wouldn’t be too stifling while also ensuring that people’s privacy and safety aren’t violated, said Ken Kumayama, a partner who focuses on technology issues at the law firm of Skadden, Arps in California. The technology is rapidly changing, and people’s understanding of risks and benefits is also evolving, he said.

“It’s more art than science,” Kumayama said in an interview. “I think it really is anyone’s guess what’s ultimately going to happen.”

The U.S. is “behind in our thinking and our drafting” of rules on artificial intelligence and other aspects, Kumayama said. “We are playing catch-up.”

Lack of clear vision

While lawmakers, industry executives and experts in the U.S. agree that “we need some regulation, some guardrails, some guidelines, some rules of the road … lawmakers in the U.S. don’t seem to have any clear vision regarding what that should look like,” Kumayama said.

One proposal from the 117th Congress, known as the Algorithmic Accountability Act and sponsored by Rep. Yvette D. Clarke, D-N.Y., was backed by 39 other Democrats but failed to advance in the House.

That measure, which would have empowered the FTC to assess the impact of AI systems, was a good start, but it applied only to large companies and left out smaller companies that often are the ones that drive innovation in artificial intelligence, Kumayama said.

President Joe Biden in April called on tech companies to ensure that their AI systems are “safe before making them public.” His comments led the NTIA to ask industry groups and tech experts to weigh in.

“Much as financial audits create trust in the accuracy of financial statements, accountability mechanisms for AI can help assure that an AI system is trustworthy,” Alan Davidson, the assistant secretary of communications and information at the NTIA, said at the time.

The agency has since received more than 500 comments but plans to make them public only after the comment period ends June 12, Zahir Rasheed, an agency spokesman, said in an email.

Amba Kak and Sarah Myers West, two former FTC officials, said in a report published last month that the FTC has existing authority relating to antitrust and consumer harm to regulate the fast-growing artificial intelligence tech sector even without new legislation.

The report from the nonprofit AI Now Institute, titled “Confronting Tech Power,” said the FTC could “enforce existing law on the books to create public accountability in the rollout of generative AI systems and prevent harm to consumers and competition.”

ChatGPT is one example of generative AI, which refers to systems that can generate text, images or video in response to prompts.

Since the FTC is focused on stopping deceptive or unfair acts or practices, some experts argue that irrespective of the underlying technologies, if the result or outcome is illegal or harmful to consumers, “you need to stop,” Kumayama said. “I don’t disagree with them. The law is technology-agnostic.”
