BusinessMirror

A US congressman wanted to understand AI. So he went back to a college classroom to learn

- By David Klepper

WASHINGTON—DON Beyer’s car dealerships were among the first in the US to set up a website. As a representative, the Virginia Democrat leads a bipartisan group focused on promoting fusion energy. He reads books about geometry for fun.

So when questions about regulating artificial intelligence emerged, the 73-year-old Beyer took what for him seemed like an obvious step, enrolling at George Mason University to get a master’s degree in machine learning. In an era when lawmakers and Supreme Court justices sometimes concede they don’t understand emerging technology, Beyer’s journey is an outlier, but it highlights a broader effort by members of Congress to educate themselves about artificial intelligence as they consider laws that would shape its development.

Frightening to some, thrilling to others, baffling to many: Artificial intelligence has been called a transformative technology, a threat to democracy or even an existential risk for humanity. It will fall to members of Congress to figure out how to regulate the industry in a way that encourages its potential benefits while mitigating the worst risks.

But first they have to understand what AI is, and what it isn’t.

“I tend to be an AI optimist,” Beyer told The Associated Press following a recent afternoon class on George Mason’s campus in suburban Virginia. “We can’t even imagine how different our lives will be in five years, 10 years, 20 years, because of AI. ... There won’t be robots with red eyes coming after us any time soon. But there are other deeper existential risks that we need to pay attention to.”

Risks like massive job losses in industries made obsolete by AI, programs that retrieve biased or inaccurate results, or deepfake images, video and audio that could be leveraged for political disinformation, scams or sexual exploitation. On the other side of the equation, onerous regulations could stymie innovation, leaving the US at a disadvantage as other nations look to harness the power of AI.

Striking the right balance will require input not only from tech companies but also from the industry’s critics, as well as from the industries that AI may transform. While many Americans may have formed their ideas about AI from science fiction movies like The Terminator or The Matrix, it’s important that lawmakers have a clear-eyed understanding of the technology, said Rep. Jay Obernolte, R-Calif., chairman of the House’s AI Task Force.

When lawmakers have questions about AI, Obernolte is one of the people they seek out. He studied engineering and applied science at the California Institute of Technology and earned an M.S. in artificial intelligence at UCLA. The California Republican also started his own video game company. Obernolte said he’s been “very pleasantly impressed” with how seriously his colleagues on both sides of the aisle are taking their responsibility to understand AI.

That shouldn’t be surprising, Obernolte said. After all, lawmakers regularly vote on bills that touch on complicated legal, financial, health and scientific subjects. If you think computers are complicated, check out the rules governing Medicaid and Medicare.

Keeping up with the pace of technology has challenged Congress since the steam engine and the cotton gin transformed the nation’s industrial and agricultural sectors. Nuclear power and weaponry offer another example of a highly technical subject that lawmakers have had to contend with in recent decades, according to Kenneth Lowande, a University of Michigan political scientist who has studied expertise and how it relates to policy-making in Congress.

Federal lawmakers have created several offices—the Library of Congress, the Congressional Budget Office, etc.—to provide resources and specialized input when necessary. They also rely on staff with expertise in specific subjects, including technology.

Then there’s another, more informal form of education that many members of Congress receive.

“They have interest groups and lobbyists banging down their door to give them briefings,” Lowande said.

Beyer said he’s had a lifelong interest in computers and that when AI emerged as a topic of public interest he wanted to know more. A lot more. Almost all of his fellow students are decades younger; most don’t seem that fazed when they discover their classmate is a congressman, Beyer said.

He said the classes, which he fits in around his busy congressional schedule, are already paying off. He’s learned about the development of AI and the challenges facing the field. He said it’s helped him understand the challenges—biases, unreliable data—and the possibilities, like improved cancer diagnoses and more efficient supply chains.

Beyer is also learning how to write computer code.

“I’m finding that learning to code—which is thinking in this sort of mathematical, algorithmic step-by-step—is helping me think differently about a lot of other things—how you put together an office, how you work a piece of legislation,” Beyer said.

While a computer science degree isn’t required, it’s imperative that lawmakers understand AI’s implications for the economy, national defense, health care, education, personal privacy and intellectual property rights, according to Chris Pierson, CEO of the cybersecurity firm BlackCloak.

“AI is not good or bad,” said Pierson, who formerly worked in Washington for the Department of Homeland Security. “It’s how you use it.”

The work of safeguarding AI has already begun, though it’s the executive branch leading the way so far. Last month, the White House unveiled new rules that require federal agencies to show their use of AI isn’t harming the public. Under an executive order issued last year, AI developers must provide information on the safety of their products.

When it comes to more substantive action, America is playing catch-up to the European Union, which recently enacted the world’s first significant rules governing the development and use of AI. The rules prohibit some uses—routine AI-enabled facial recognition by law enforcement, for one—while requiring other programs to submit information about safety and public risks. The landmark law is expected to serve as a blueprint for other nations as they contemplate their own AI laws.

As the US Congress begins that process, the focus must be on “mitigating potential harm,” said Obernolte, who said he’s optimistic that lawmakers from both parties can find common ground on ways to prevent the worst AI risks.

“Nothing substantive is going to get done that isn’t bipartisan,” he said.

To help guide the conversation, lawmakers created a new AI task force (Obernolte is co-chairman), as well as an AI Caucus made up of lawmakers with a particular expertise or interest in the topic. They’ve invited experts to brief lawmakers on the technology and its impacts—and not just computer scientists and tech gurus either, but also representatives from different sectors that see their own risks and rewards in AI.

Rep. Anna Eshoo is the Democratic chairwoman of the caucus. She represents part of California’s Silicon Valley and recently introduced legislation that would require tech companies and social media platforms like Meta, Google or TikTok to identify and label AI-generated deepfakes to ensure the public isn’t misled. She said the caucus has already proved its worth as a “safe place” where lawmakers can ask questions, share resources and begin to craft consensus.

“There isn’t a bad or silly question,” she said. “You have to understand something before you can accept or reject it.”
PHOTO: AP/J. Scott Applewhite. Rep. Don Beyer, D-Va., speaks at the Capitol in Washington on September 9, 2021. To educate themselves on artificial intelligence, lawmakers have created a task force and invited experts to explain how AI could transform our lives. Beyer is taking it even further by enrolling in college to get a master’s degree in machine learning.
