Las Vegas Review-Journal (Sunday)
As artificial intelligence booms, lawmakers struggle to understand the technology
WASHINGTON — In recent weeks, two members of Congress have sounded the alarm over the dangers of artificial intelligence.
Rep. Ted Lieu, D-Calif., wrote in a guest essay in The New York Times in January that he was “freaked out” by the ability of the ChatGPT chatbot to mimic human writers. Rep. Jake Auchincloss, D-Mass., gave a one-minute speech — written by a chatbot — calling for regulation of AI.
But even as lawmakers put a spotlight on the technology, few are taking action on it. No bill has been proposed to protect individuals or thwart the development of AI’s potentially dangerous aspects. And legislation introduced in recent years to curb AI applications like facial recognition has withered in Congress.
The problem is that most lawmakers do not even know what AI is, said Rep. Jay Obernolte, R-Calif., the only member of Congress with a master’s degree in artificial intelligence.
“Before regulation, there needs to be agreement on what the dangers are, and that requires a deep understanding of what AI is,” he said. “You’d be surprised how much time I spend explaining to my colleagues that the chief dangers of AI will not come from evil robots with red lasers coming out of their eyes.”
The inaction over AI is part of a familiar pattern, in which technology is again outstripping U.S. rule-making and regulation. Lawmakers have long struggled to understand new innovations, once describing the internet as a “series of tubes.” For just as long, companies have worked to slow down regulations, saying the industry needs few roadblocks as the United States competes with China for tech leadership.
That means Washington is taking a hands-off stance as an AI boom has gripped Silicon Valley, with Microsoft, Google, Amazon and Meta racing one another to develop the technology. The spread of AI, which has spawned chatbots that can write poetry and cars that drive themselves, has provoked a debate over its limits, with some fearing that the technology can eventually replace humans in jobs or even become sentient.
Carly Kind, director of the Ada Lovelace Institute, a London organization focused on the responsible use of technology, said a lack of regulation encouraged companies to put a priority on financial and commercial interests at the expense of safety.
“By failing to establish such guardrails, policymakers are creating the conditions for a race to the bottom in irresponsible AI,” she said.
In the regulatory vacuum, the European Union has taken a leadership role. In 2021, EU policymakers proposed a law focused on regulating the AI technologies that might create the most harm, such as facial recognition and applications linked to critical public infrastructure such as the water supply. The measure, which is expected to be passed as soon as this year, would require makers of AI to conduct risk assessments of how their applications could affect health, safety and individual rights, including freedom of expression.
Companies that violated the law could be fined up to 6% of their global revenue, which could total billions of dollars for the world’s largest tech platforms. EU policymakers said the law was needed to maximize artificial intelligence’s benefits while minimizing its societal risks.
“We’re at the beginning of understanding this technology and weighing its great benefits and potential dangers,” said Rep. Donald S. Beyer Jr., D-Va., who recently began taking evening college classes on AI.
Beyer said U.S. lawmakers would examine the European bill for ideas on regulation and added, “This will take time.”
In fact, the federal government has been deeply involved in AI for more than six decades. In the 1960s, the Defense Advanced Research Projects Agency, known as DARPA, began funding research and development of the technology. The support helped lead to military applications, including drones and cybersecurity tools.
Criticism of AI was largely muted in Washington until January 2015, when physicist Stephen Hawking and Elon Musk, CEO of Tesla and now the owner of Twitter, warned that AI was becoming dangerously intelligent and could lead to the end of the human race. They called for regulations.
In November 2016, the Senate Subcommittee on Space, Science and Competitiveness held the first congressional hearing on AI, with Musk’s warnings cited twice by lawmakers. During the hearing, academics and the CEO of OpenAI, a San Francisco lab, batted down Musk’s predictions or said they were at least many years away.
Some lawmakers stressed the importance of the nation’s leadership in AI development. Congress must “ensure that the United States remains a global leader throughout the 21st century,” Sen. Ted Cruz, R-Texas, chair of the subcommittee, said at the time.
DARPA subsequently announced that it was earmarking $2 billion for AI research projects.
Warnings about AI’s dangers intensified in 2021 as the Vatican, IBM and Microsoft pledged to develop “ethical AI” — committing organizations to be transparent about how the technology works, respect privacy and minimize biases. The group called for regulation of facial recognition software, which uses large databases of photos to pinpoint people’s identities. In Washington, some lawmakers tried creating rules for facial recognition technology and for company audits to prevent discriminatory algorithms. The bills went nowhere.
“It’s not a priority and doesn’t feel urgent for members,” said Beyer, who failed to get enough support last year to pass a bill on audits of AI algorithms, sponsored with Rep. Yvette D. Clarke, D-N.Y.
More recently, some government officials have tried bridging the knowledge gap around AI. In January, about 150 lawmakers and their staffs packed a meeting, hosted by the usually sleepy AI Caucus, that featured Jack Clark, a founder of the AI company Anthropic.
Some action around AI is taking place in federal agencies, which are enforcing laws already on the books. The Federal Trade Commission has brought enforcement orders against companies that used AI in violation of its consumer protection rules. The Consumer Financial Protection Bureau has also warned that opaque AI systems used by credit agencies could run afoul of anti-discrimination laws.
The FTC has also proposed commercial surveillance regulations to curb the collection of data used in AI technology, and the Food and Drug Administration issued a list of AI technology in medical devices that come under its purview.
In October, the White House issued a blueprint for rules on AI, stressing the rights of individuals to privacy and safe automated systems, protection from algorithmic discrimination and meaningful human alternatives.
But none of the efforts have amounted to laws.
“The picture in Congress is bleak,” said Amba Kak, executive director of the AI Now Institute, a nonprofit research center, who recently advised the FTC. “The stakes are high because these tools are used in very sensitive social domains like in hiring, housing and credit, and there is real evidence that over the years, AI tools have been flawed and biased.”
Tech companies have lobbied against policies that would limit how they used AI and have called for mostly voluntary regulations.
In 2020, Sundar Pichai, CEO of Alphabet, the parent of Google, visited Brussels to argue for “sensible regulation” that would not hold back the technology’s potential benefits. That same year, the U.S. Chamber of Commerce and more than 30 companies, including Amazon and Meta, lobbied against facial recognition bills, according to OpenSecrets.org.
“We aren’t anti-regulation, but we’d want smart regulation,” said Jordan Crenshaw, a vice president of the Chamber of Commerce, which has argued that the draft EU law is overly broad and could hamper tech development.
In January, Sam Altman, CEO of OpenAI, which created ChatGPT, visited several members of Congress to demonstrate GPT-4, a new AI model that can write essays, solve complex coding problems and more, according to Beyer and Lieu. Altman, who has said he supports regulation, showed how GPT-4 will have greater security controls than previous AI models, the lawmakers said.
Lieu, who met with Altman, said the government couldn’t rely on individual companies to protect users. He plans to introduce a bill this year for a commission to study AI and for a new agency to regulate it.
“OpenAI decided to put controls into its technology, but what is to guarantee another company will do the same?” he asked.