Las Vegas Review-Journal (Sunday)

As artificial intelligence booms, lawmakers struggle to understand the technology

By Cecilia Kang and Adam Satariano

WASHINGTON — In recent weeks, two members of Congress have sounded the alarm over the dangers of artificial intelligence.

Rep. Ted Lieu, D-Calif., wrote in a guest essay in The New York Times in January that he was “freaked out” by the ability of the ChatGPT chatbot to mimic human writers. Rep. Jake Auchincloss, D-Mass., gave a one-minute speech — written by a chatbot — calling for regulation of AI.

But even as lawmakers put a spotlight on the technology, few are taking action on it. No bill has been proposed to protect individuals or thwart the development of AI’s potentially dangerous aspects. And legislation introduced in recent years to curb AI applications like facial recognition has withered in Congress.

The problem is that most lawmakers do not even know what AI is, said Rep. Jay Obernolte, R-Calif., the only member of Congress with a master’s degree in artificial intelligence.

“Before regulation, there needs to be agreement on what the dangers are, and that requires a deep understanding of what AI is,” he said. “You’d be surprised how much time I spend explaining to my colleagues that the chief dangers of AI will not come from evil robots with red lasers coming out of their eyes.”

The inaction over AI is part of a familiar pattern, in which technology is again outstripping U.S. rule-making and regulation. Lawmakers have long struggled to understand new innovations, once describing the internet as a “series of tubes.” For just as long, companies have worked to slow down regulations, saying the industry needs few roadblocks as the United States competes with China for tech leadership.

That means Washington is taking a hands-off stance as an AI boom has gripped Silicon Valley, with Microsoft, Google, Amazon and Meta racing one another to develop the technology. The spread of AI, which has spawned chatbots that can write poetry and cars that drive themselves, has provoked a debate over its limits, with some fearing that the technology can eventually replace humans in jobs or even become sentient.

Carly Kind, director of the Ada Lovelace Institute, a London organization focused on the responsible use of technology, said a lack of regulation encouraged companies to put a priority on financial and commercial interests at the expense of safety.

“By failing to establish such guardrails, policymakers are creating the conditions for a race to the bottom in irresponsible AI,” she said.

In the regulatory vacuum, the European Union has taken a leadership role. In 2021, EU policymakers proposed a law focused on regulating the AI technologies that might create the most harm, such as facial recognition and applications linked to critical public infrastructure such as the water supply. The measure, which is expected to be passed as soon as this year, would require makers of AI to conduct risk assessments of how their applications could affect health, safety and individual rights, including freedom of expression.

Companies that violated the law could be fined up to 6% of their global revenue, which could total billions of dollars for the world’s largest tech platforms. EU policymakers said the law was needed to maximize artificial intelligence’s benefits while minimizing its societal risks.

“We’re at the beginning of understanding this technology and weighing its great benefits and potential dangers,” said Rep. Donald S. Beyer Jr., D-Va., who recently began taking evening college classes on AI.

Beyer said U.S. lawmakers would examine the European bill for ideas on regulation and added, “This will take time.”

In fact, the federal government has been deeply involved in AI for more than six decades. In the 1960s, the Defense Advanced Research Projects Agency, known as DARPA, began funding research and development of the technology. The support helped lead to military applications, including drones and cybersecurity tools.

Criticism of AI was largely muted in Washington until January 2015, when physicist Stephen Hawking and Elon Musk, CEO of Tesla and now the owner of Twitter, warned that AI was becoming dangerously intelligent and could lead to the end of the human race. They called for regulations.

In November 2016, the Senate Subcommittee on Space, Science and Competitiveness held the first congressional hearing on AI, with Musk’s warnings cited twice by lawmakers. During the hearing, academics and the CEO of OpenAI, a San Francisco lab, batted down Musk’s predictions or said they were at least many years away.

Some lawmakers stressed the importance of the nation’s leadership in AI development. Congress must “ensure that the United States remains a global leader throughout the 21st century,” Sen. Ted Cruz, R-Texas, chair of the subcommittee, said at the time.

DARPA subsequently announced that it was earmarking $2 billion for AI research projects.

Warnings about AI’s dangers intensified in 2021 as the Vatican, IBM and Microsoft pledged to develop “ethical AI,” which means organizations are transparent about how the technology works, respect privacy and minimize biases. The group called for regulation of facial recognition software, which uses large databases of photos to pinpoint people’s identity. In Washington, some lawmakers tried creating rules for facial recognition technology and for company audits to prevent discriminatory algorithms. The bills went nowhere.

“It’s not a priority and doesn’t feel urgent for members,” said Beyer, who failed to get enough support last year to pass a bill on audits of AI algorithms, sponsored with Rep. Yvette D. Clarke, D-N.Y.

More recently, some government officials have tried bridging the knowledge gap around AI. In January, about 150 lawmakers and their staffs packed a meeting, hosted by the usually sleepy AI Caucus, that featured Jack Clark, a founder of the AI company Anthropic.

Some action around AI is taking place in federal agencies, which are enforcing laws already on the books. The Federal Trade Commission has brought enforcement orders against companies that used AI in violation of its consumer protection rules. The Consumer Financial Protection Bureau has also warned that opaque AI systems used by credit agencies could run afoul of anti-discrimination laws.

The FTC has also proposed commercial surveillance regulations to curb the collection of data used in AI technology, and the Food and Drug Administration issued a list of AI technology in medical devices that come under its purview.

In October, the White House issued a blueprint for rules on AI, stressing the rights of individuals to privacy and safe automated systems, protection from algorithmic discrimination and meaningful human alternatives.

But none of the efforts have amounted to laws.

“The picture in Congress is bleak,” said Amba Kak, executive director of the AI Now Institute, a nonprofit research center, who recently advised the FTC. “The stakes are high because these tools are used in very sensitive social domains like in hiring, housing and credit, and there is real evidence that over the years, AI tools have been flawed and biased.”

Tech companies have lobbied against policies that would limit how they used AI and have called for mostly voluntary regulations.

In 2020, Sundar Pichai, CEO of Alphabet, the parent of Google, visited Brussels to argue for “sensible regulation” that would not hold back the technology’s potential benefits. That same year, the U.S. Chamber of Commerce and more than 30 companies, including Amazon and Meta, lobbied against facial recognition bills, according to OpenSecrets.org.

“We aren’t anti-regulation, but we’d want smart regulation,” said Jordan Crenshaw, a vice president of the Chamber of Commerce, which has argued that the draft EU law is overly broad and could hamper tech developmen­t.

In January, Sam Altman, CEO of OpenAI, which created ChatGPT, visited several members of Congress to demonstrate GPT-4, a new AI model that can write essays, solve complex coding problems and more, according to Beyer and Lieu. Altman, who has said he supports regulation, showed how GPT-4 would have greater security controls than previous AI models, the lawmakers said.

Lieu, who met with Altman, said the government couldn’t rely on individual companies to protect users. He plans to introduce a bill this year for a commission to study AI and for a new agency to regulate it.

“OpenAI decided to put controls into its technology, but what is to guarantee another company will do the same?” he asked.

ALYSSA SCHUKAR / THE NEW YORK TIMES: Rep. Ted Lieu, D-Calif., shown Wednesday at his office in the Rayburn House Office Building in Washington, plans to introduce a bill this year for a commission to study artificial intelligence and for a new agency to regulate it.
