
Tech firms agree to AI safeguards

By MATT O’BRIEN and ZEKE MILLER

Washington — President Joe Biden said Friday that new commitments by Amazon, Google, Meta, Microsoft and other companies leading the development of artificial intelligence technology to meet a set of AI safeguards brokered by his White House are an important step toward managing the “enormous” promise and risks posed by the technology.

Biden announced that his administration has secured voluntary commitments from seven U.S. companies meant to ensure that their AI products are safe before they release them. Some of the commitments call for third-party oversight of the workings of the next generation of AI systems, though they don’t detail who will audit the technology or hold the companies accountable.

“We must be clear-eyed and vigilant about the threats emerging technologies can pose,” Biden said, adding that the companies have a “fundamental obligation” to ensure their products are safe.

“Social media has shown us the harm that powerful technology can do without the right safeguards in place,” Biden added. “These commitments are a promising step, but we have a lot more work to do together.”

A surge of commercial investment in generative AI tools that can write convincingly human-like text and churn out new images and other media has brought public fascination as well as concern about their ability to trick people and spread disinformation, among other dangers.

The four tech giants, along with ChatGPT-maker OpenAI and startups Anthropic and Inflection, have committed to security testing “carried out in part by independent experts” to guard against major risks, such as to biosecurity and cybersecurity, the White House said in a statement.

That testing will also examine the potential for societal harms, such as bias and discrimination, and more theoretical dangers of advanced AI systems that could gain control of physical systems or “self-replicate” by making copies of themselves.

The companies have also committed to methods for reporting vulnerabilities in their systems and to using digital watermarking to help distinguish real images and audio from AI-generated ones, known as deepfakes.

Executives from the seven companies met behind closed doors with Biden and other officials Friday as they pledged to follow the standards.

“He was very firm and clear” that he wanted the companies to continue to be innovative, but at the same time “felt that this needed a lot of attention,” Inflection CEO Mustafa Suleyman said in an interview after the White House gathering.

“It’s a big deal to bring all the labs together, all the companies,” said Suleyman, whose Palo Alto, California-based startup is the youngest and smallest of the firms. “This is supercompetitive and we wouldn’t come together under other circumstances.”

The companies will also publicly report flaws and risks in their technology, including effects on fairness and bias, according to the pledge.

The voluntary commitments are meant to be an immediate way of addressing risks ahead of a longer-term push to get Congress to pass laws regulating the technology.

Some advocates for AI regulations said Biden’s move is a start but more needs to be done to hold the companies and their products accountable.

The White House said Friday that it has consulted on the voluntary commitments with a number of countries.

