The Week (US)

AI: U.S. pushes tech companies for guardrails


If AI is going to be regulated, the companies creating it will have to adopt their own rules—at least for now, said Cat Zakrzewski in The Washington Post. With legislation bogged down on Capitol Hill, the White House last week secured pledges from seven leading AI companies to address the risks of AI. The seven companies—including Google, Meta, Amazon, Microsoft, and OpenAI—promised to put some guardrails on the technology, such as researching bias and having independent security experts test their systems. The pledges, however, don’t include “deadlines or reporting requirements.” That will “complicate regulators’ efforts to hold the companies to their promises,” making the deal largely a “stopgap measure” while the White House “throws its weight behind bipartisan efforts in Congress to craft AI rules.”

It’s hard to believe anything in these voluntary agreements will lead to big changes, said Kevin Roose in The New York Times. “There are some types of AI risk—such as the danger that AI models could be used to develop bioweapons—that government and military officials are probably better suited than companies to evaluate.” Getting a public commitment to bring in such experts is a good idea, and a commitment to invest in cybersecurity also “feels like a no-brainer.” However, other pledges made by the AI leaders are fuzzy and vague, making this White House deal seem “more symbolic than substantive.”

“These commitments don’t go nearly as far as provisions in a bevy of draft regulatory bills,” said Ryan Heath and Ashley Gold in Axios. That’s exactly what’s intended: The voluntary pledges may end up helping the industry “slow-walk” legislation. The voluntary approach is similar to “the soft touch deployed by the Obama administration toward social media companies a decade ago.” We all know how that “turned sour.” And it’s notable that companies are making these voluntary commitments even as they are already under investigation for antitrust and privacy violations.

Federal regulators, too, like to compare this period in AI with “the dawn of social media,” said Jessica Melugin in National Review. They see the rise of social media as a “tale of regulatory failure,” and Federal Trade Commission chair Lina Khan “laments the lack of government intervention.” Never mind that a “light touch” regulatory approach gave us a $2.4 trillion digital economy. The FTC, exploiting “scary science fiction scenarios,” has already tried to bully its way into the AI debate by sending OpenAI an investigative letter with 20 pages of demands. Some “trade-offs between gains and risks” are unavoidable with new technology. The White House and regulators believe they’re best equipped to decide what those trade-offs should be. Too bad the government’s track record says otherwise.

Will government or industry lead on AI rules?
