Boston Sunday Globe

US regulates cars, radio, and TV. When will it regulate AI?

- By Ian Prasad Philbrick

As increasingly sophisticated artificial intelligence systems with the potential to reshape society come online, many experts, lawmakers, and even executives of top AI companies want the US government to regulate the technology, and fast.

“We should move quickly,” Brad Smith, the president of Microsoft, which launched an AI-powered version of its search engine this year, said in May. “There’s no time for waste or delay,” Senator Chuck Schumer, the majority leader, has said. “Let’s get ahead of this,” said Senator Mike Rounds, Republican of South Dakota.

Yet history suggests that comprehensive federal regulation of advanced AI systems probably won’t happen soon. Congress and federal agencies have often taken decades to enact rules governing revolutionary technologies, from electricity to cars. “The general pattern is, it takes a while,” said Matthew Mittelsteadt, a technologist who studies AI at George Mason University’s Mercatus Center.

In the 1800s, it took Congress more than half a century after the introduction of the first public, steam-powered train to give the government the power to set price rules for railroads, the first US industry subject to federal regulation. In the 20th century, the bureaucracy slowly expanded to regulate radio, television, and other technologies. And in the 21st century, lawmakers have struggled to safeguard digital data privacy.

It’s possible that policy makers will defy history. Members of Congress have worked furiously in recent months to understand and imagine ways to regulate AI, holding hearings and meeting privately with industry leaders and experts. Last month, President Biden announced voluntary safeguards agreed to by seven leading AI companies.

But AI also presents challenges that could make it even harder — and slower — to regulate than past technologies.

The hurdles

To regulate a new technology, Washington first has to try to understand it. “We need to get up to speed very quickly,” Senator Martin Heinrich, Democrat of New Mexico, who is part of a bipartisan working group on AI, said in a statement.

That typically happens faster when new technologies resemble older ones. Congress created the Federal Communications Commission in 1934, when television was still a nascent industry, and the FCC regulated it based on earlier rules for radio and telephones.

But AI, advocates for regulation argue, combines the potential for privacy invasion, misinformation, hiring discrimination, labor disruptions, copyright infringement, electoral manipulation, and weaponization by unfriendly governments in ways that have little precedent. That’s on top of some AI experts’ fears that a superintelligent machine might one day end humanity.

While many want fast action, it’s hard to regulate technology that’s evolving as quickly as AI. “I have no idea where we’ll be in two years,” said Dewey Murdick, who leads Georgetown University’s center for security and emerging technology.

Regulation also means minimizing potential risks while harnessing potential benefits, which for AI can range from drafting emails to advancing medicine. That’s a tricky balance to strike with a new technology. “Often, the benefits are just unanticipated,” said Susan Dudley, who directs George Washington University’s regulatory studies center. “And, of course, risks also can be unanticipated.”

Overregulation can quash innovation, Dudley added, driving industries overseas. It can also become a means for larger companies with the resources to lobby Congress to squeeze out less-established competitors.

Historically, regulation often happens gradually as a technology improves or an industry grows, as with cars and television. Sometimes it happens only after tragedy. When Congress passed, in 1906, the law that led to the creation of the Food and Drug Administration, it didn’t require safety studies before companies marketed new drugs. In 1937, an untested and poisonous liquid version of sulfanilamide, meant to treat bacterial infections, killed more than 100 people across 15 states. Congress strengthened the FDA’s regulatory powers the following year.

“Generally speaking, Congress is a more reactive institution,” said Jonathan Lewallen, a University of Tampa political scientist. The counterexamples tend to involve technologies that the government effectively built itself, such as nuclear power, which Congress regulated in 1946, one year after the first atomic bombs were detonated.

“Before we seek to regulate, we have to understand why we are regulating,” said Representative Jay Obernolte, Republican of California, who has a master’s degree in AI. “Only when you understand that purpose can you craft a regulatory framework that achieves that purpose.”

Brain drain

Even so, lawmakers say they’re making strides. “I actually have been very impressed with my colleagues’ efforts to educate themselves,” Obernolte said. “Things are moving, by congressional standards, extremely quickly.”

Regulation advocates broadly agree. “Congress is taking the issue really seriously,” said Camille Carlton of the Center for Humane Technology, a nonprofit that regularly meets with lawmakers.

For now, AI policy remains notably bipartisan. “These regulatory issues we’re grappling with are not partisan issues, by and large,” said Obernolte, who helped draft a bipartisan bill that would give researchers tools to experiment with AI technologies.

A department of information?

If federal regulation of AI did emerge, what might it look like?

Some experts say a range of federal agencies already have regulatory powers that cover aspects of AI. The Federal Trade Commission could use its existing antitrust powers to prevent larger AI companies from dominating smaller ones. The FDA has already authorized hundreds of AI-enabled medical devices. And piecemeal, AI-specific regulations could trickle out from such agencies within a year or two, experts said.

Still, drawing up rules agency by agency has downsides. Mittelsteadt called it “the too-many-cooks-in-the-kitchen problem, where every regulator is trying to regulate the same thing.” Similarly, state and local governments sometimes regulate technologies before the federal government, such as with cars and digital privacy. The result can be contradictions for companies and headaches for courts.

But some aspects of AI may not fall under any existing federal agency’s jurisdiction — so some advocates want Congress to create a new one. One possibility is an FDA-like agency: Outside experts would test AI models under development, and companies would need federal approval before releasing them. Call it a “Department of Information,” Murdick said.

But creating a new agency would take time — perhaps a decade or more, experts guessed. And there’s no guarantee it would work. Miserly funding could render it toothless. AI companies could claim its powers were unconstitutionally overbroad, or consumer advocates could deem them insufficient. The result could be a prolonged court fight or even a push to deregulate the industry.

Rather than a one-agency-fits-all approach, Obernolte envisions rules that accrete as Congress enacts successive laws in coming years. “It would be naive to believe that Congress is going to be able to pass one bill — the AI Act, or whatever you want to call it — and have the problem be completely solved,” he said.

Heinrich said in his statement, “This will need to be a continuous process as these technologies evolve.” Last month, the House and Senate separately passed several provisions about how the Defense Department should approach AI technology. But it is not yet clear which provisions will become law, and none would regulate the industry itself.

Some experts aren’t opposed to regulating AI one bill at a time. But they’re anxious about any delays in passing them. “There is, I think, a greater hurdle the longer that we wait,” Carlton said. “We’re concerned that the momentum might fizzle.”

HAIYUN JIANG/NEW YORK TIMES — Samuel Altman, CEO of OpenAI, testified at a congressional hearing on artificial intelligence in May.
