San Diego Union-Tribune

Tech companies have lobbied against policies

- Kang and Satariano write for The New York Times.

The problem is that most lawmakers do not even know what AI is, said Rep. Jay Obernolte, R-Calif., the only member of Congress with a master’s degree in artificial intelligence.

“Before regulation, there needs to be agreement on what the dangers are, and that requires a deep understanding of what AI is,” he said. “You’d be surprised how much time I spend explaining to my colleagues that the chief dangers of AI will not come from evil robots with red lasers coming out of their eyes.”

The inaction over AI is part of a familiar pattern, in which technology is again outstripping U.S. rule-making and regulation. Lawmakers have long struggled to understand new innovations, once describing the Internet as a “series of tubes.” For just as long, companies have worked to slow down regulations, saying the industry needs few roadblocks as the United States competes with China for tech leadership.

That means Washington is taking a hands-off stance as an AI boom has gripped Silicon Valley, with Microsoft, Google, Amazon and Meta racing one another to develop the technology. The spread of AI, which has spawned chatbots that can write poetry and cars that drive themselves, has provoked a debate over its limits, with some fearing that the technology can eventually replace humans in jobs or even become sentient.

Carly Kind, director of the Ada Lovelace Institute, a London organization focused on the responsible use of technology, said a lack of regulation encouraged companies to put a priority on financial and commercial interests at the expense of safety.

“By failing to establish such guardrails, policymakers are creating the conditions for a race to the bottom in irresponsible AI,” she said.

In the regulatory vacuum, the European Union has taken a leadership role. In 2021, EU policymakers proposed a law focused on regulating the AI technologies that might create the most harm, such as facial recognition and applications linked to critical public infrastructure like the water supply. The measure, which is expected to be passed as soon as this year, would require makers of AI to conduct risk assessments of how their applications could affect health, safety and individual rights, including freedom of expression.

Companies that violated the law could be fined up to 6 percent of their global revenue, which could total billions of dollars for the world’s largest tech platforms. EU policymakers said the law was needed to maximize artificial intelligence’s benefits while minimizing its societal risks.

“We’re at the beginning of understanding this technology and weighing its great benefits and potential dangers,” said Rep. Donald S. Beyer Jr., D-Va., who recently began taking evening college classes on AI.

Beyer said U.S. lawmakers would examine the European bill for ideas on regulation and added, “This will take time.”

Warnings about AI’s dangers intensified in 2021 as the Vatican, IBM and Microsoft pledged to develop “ethical AI,” which means organizations are transparent about how the technology works, respect privacy and minimize biases. The group called for regulation of facial recognition software, which uses large databases of photos to pinpoint people’s identity. In Washington, some lawmakers tried creating rules for facial recognition technology and for company audits to prevent discriminatory algorithms. The bills went nowhere.

“It’s not a priority and doesn’t feel urgent for members,” said Beyer, who failed to get enough support last year to pass a bill on audits of AI algorithms, sponsored with Rep. Yvette D. Clarke, D-N.Y.

More recently, some government officials have tried bridging the knowledge gap around AI. In January, about 150 lawmakers and their staffs packed a meeting, hosted by the usually sleepy AI Caucus, that featured Jack Clark, a founder of the AI company Anthropic.

Some action around AI is taking place in federal agencies, which are enforcing laws already on the books. The Federal Trade Commission has brought enforcement orders against companies that used AI in violation of its consumer protection rules. The Consumer Financial Protection Bureau has also warned that opaque AI systems used by credit agencies could run afoul of antidiscrimination laws.

The FTC has also proposed commercial surveillance regulations to curb the collection of data used in AI technology, and the Food and Drug Administration issued a list of AI technologies in medical devices that come under its purview.

In October, the White House issued a blueprint for rules on AI, stressing the rights of individuals to privacy and safe automated systems, protection from algorithmic discrimination and meaningful human alternatives.

But none of the efforts have amounted to laws.

“The picture in Congress is bleak,” said Amba Kak, executive director of the AI Now Institute, a nonprofit research center, who recently advised the FTC. “The stakes are high because these tools are used in very sensitive social domains like in hiring, housing and credit, and there is real evidence that over the years, AI tools have been flawed and biased.”

Tech companies have lobbied against policies that would limit how they used AI and have called for mostly voluntary regulations.

In January, Sam Altman, CEO of OpenAI, which created ChatGPT, visited several members of Congress to demonstrate GPT-4, a new AI model that can write essays, solve complex coding problems and more, according to Beyer and Rep. Ted Lieu, D-Calif. Altman, who has said he supports regulation, showed how GPT-4 would have greater security controls than previous AI models, the lawmakers said.

Lieu, who met with Altman, said the government couldn’t rely on individual companies to protect users. He plans to introduce a bill this year for a commission to study AI and for a new agency to regulate it.

“OpenAI decided to put controls into its technology, but what is to guarantee another company will do the same?” he asked.

ALYSSA SCHUKAR / NYT — Rep. Ted Lieu, D-Calif., plans to introduce a bill this year for a commission to study artificial intelligence and for a new agency to regulate it.
