Sun Sentinel Palm Beach Edition

To regulate AI, first understand it

US representative returns to college classroom to learn

By David Klepper

WASHINGTON — Don Beyer’s car dealerships were among the first in the U.S. to set up a website. As a representative, the Virginia Democrat leads a bipartisan group focused on promoting fusion energy. He reads books about geometry for fun.

So when questions about regulating artificial intelligence emerged, the 73-year-old Beyer took what for him seemed like an obvious step, enrolling at George Mason University to get a master’s degree in machine learning.

In an era when lawmakers and Supreme Court justices sometimes concede they don’t understand emerging technology, Beyer’s journey is an outlier, but it highlights a broader effort by members of Congress to educate themselves about artificial intelligence as they consider laws that would shape its development.

Frightening to some, thrilling to others, baffling to many: Artificial intelligence has been called a transformative technology, a threat to democracy or even an existential risk for humanity. It will fall to members of Congress to figure out how to regulate the industry in a way that encourages its potential benefits while mitigating the worst risks.

But first they have to understand what AI is, and what it isn’t.

“I tend to be an AI optimist,” Beyer said after a recent afternoon class on George Mason’s campus in suburban Virginia. “We can’t even imagine how different our lives will be in five years, 10 years, 20 years, because of AI. ... There won’t be robots with red eyes coming after us any time soon. But there are other, deeper existential risks that we need to pay attention to.”

Risks like massive job losses in industries made obsolete by AI, programs that retrieve biased or inaccurate results, or deepfake images, video and audio that could be leveraged for political disinformation, scams or sexual exploitation.

On the other side of the equation, onerous regulations could stymie innovation, leaving the U.S. at a disadvantage as other nations look to harness the power of AI.

Striking the right balance will require input not only from tech companies but also from the industry’s critics, as well as from the industries that AI may transform. While many Americans may have formed their ideas about AI from science fiction movies such as “The Terminator” or “The Matrix,” it’s important that lawmakers have a clear-eyed understanding of the technology, said Rep. Jay Obernolte, R-Calif., the chairman of the House’s AI Task Force.

When lawmakers have questions about AI, Obernolte is one of the people they seek out. He studied engineering and applied science at the California Institute of Technology and earned an M.S. in artificial intelligence at UCLA. The California Republican also started his own video game company.

Obernolte said he has been “very pleasantly impressed” with how seriously his colleagues on both sides of the aisle are taking their responsibility to understand AI.

That shouldn’t be surprising, Obernolte said. After all, lawmakers regularly vote on bills that touch on complicated legal, financial, health and scientific subjects. If you think computers are complicated, check out the rules governing Medicaid and Medicare.

Keeping up with the pace of technology has challenged Congress since the steam engine and the cotton gin transformed the nation’s industrial and agricultural sectors. Nuclear power and weaponry is another example of a highly technical subject that lawmakers have had to contend with in recent decades, according to Kenneth Lowande, a University of Michigan political scientist who has studied expertise and how it relates to policy-making in Congress.

Federal lawmakers have created several offices — the Library of Congress, the Congressional Budget Office and so on — to provide resources and specialized input when necessary. They also rely on staff with expertise in specific subjects, including technology.

Then there’s another, more informal form of education that many members of Congress receive.

“They have interest groups and lobbyists banging down their door to give them briefings,” Lowande said.

Beyer said he has had a lifelong interest in computers and that when AI emerged as a topic of public interest he wanted to know more. A lot more. Almost all of his fellow students are decades younger; most don’t seem that fazed when they discover their classmate is a congressman, Beyer said.

He said the classes, which he fits in around his busy congressional schedule, are already paying off. He has learned about the development of AI and the challenges facing the field — biases, unreliable data — as well as the possibilities, like improved cancer diagnoses and more efficient supply chains.

Beyer is also learning how to write computer code.

“I’m finding that learning to code — which is thinking in this sort of mathematical, algorithmic step-by-step — is helping me think differently about a lot of other things — how you put together an office, how you work a piece of legislation,” Beyer said.

The work of safeguarding AI has already begun, though it’s the executive branch leading the way so far. Last month, the White House unveiled new rules that require federal agencies to show their use of AI isn’t harming the public. Under an executive order issued last year, AI developers must provide information on the safety of their products.

When it comes to more substantive action, America is playing catch-up to the European Union, which recently enacted the world’s first significant rules governing the development and use of AI.

The rules prohibit some uses — routine AI-enabled facial recognition by law enforcement, for one — while requiring other programs to submit information about safety and public risks. The landmark law is expected to serve as a blueprint for other nations as they contemplate their own AI laws.

To help guide the conversation in the U.S., lawmakers created a new AI task force — Obernolte is co-chairman — as well as an AI Caucus made up of lawmakers with particular expertise or interest in the topic. They’ve invited experts to brief lawmakers on the technology and its impacts.

Rep. Anna Eshoo is the Democratic chairwoman of the caucus. She represents part of California’s Silicon Valley and recently introduced legislation that would require tech companies and social media platforms such as Meta, Google and TikTok to identify and label AI-generated deepfakes to ensure the public isn’t misled.

She said the caucus has already proved its worth as a “safe place” where lawmakers can ask questions, share resources and begin to craft consensus.

Rep. Don Beyer, D-Va., speaks at the Capitol in Washington. Beyer is learning about artificial intelligence by enrolling in college to get a master’s degree in machine learning. “I tend to be an AI optimist,” Beyer said. (J. SCOTT APPLEWHITE/AP 2021)
