Los Gatos Weekly Times

Stop AI disinformation before the 2024 election campaigns

- Ed Clendaniel is the editor of The Mercury News Editorial Pages. Email him at eclendaniel@bayareanewsgroup.com.

As an editorial page editor, I encounter a wide range of issues capable of keeping me up at night.

Climate change. Homelessness. The economy.

But nothing causes me as much angst as the potential for misinformation and disinformation fueled by artificial intelligence to disrupt the 2024 election.

We may not be doing enough to fight climate change, homelessness and the economic downturn, but at least local, state and federal lawmakers have passed legislation designed to make things better.

I have little hope that anything will be done to combat the onslaught of fake news created by generative AI that is about to flood social media, where an ever-increasing number of Americans will get their news during the next election cycle.

It will be a black mark on Silicon Valley, the center of the technology industry. Who else would be to blame if political disinformation creates more distrust in our elections and further erodes our democracy?

Artificial intelligence is clearly the next big thing in tech. How can AI reach its full potential if it is seen as causing more harm than benefit, if it makes it impossible to determine what is true and what is false?

I know, I know. People in the industry are aware of the challenges and have urged lawmakers to regulate AI before it gets out of hand.

The Biden administration calls addressing the effects and future of AI a “top priority” for the president.

“We need to manage the risks to our society, our economy and national security,” Biden said June 20 in a Bay Area visit. “My administration is committed to safeguarding American rights and safety, to protecting privacy, to addressing bias and misinformation, to making sure the systems are safe before they are released.”

Rep. Ro Khanna, the Bay Area's leading congressmember on tech issues, agrees. “There is an urgent risk of disinformation with AI,” he said June 21.

Khanna backs the effort by Democrats in the House of Representatives to appoint a 15-person commission of ethicists, technologists, journalists and civic leaders and task them with issuing recommendations for regulations in the next six months.

“I believe those recommendations can inform decisive and substantive regulations,” Khanna said.

But Republicans, while having their own set of issues with AI, are nowhere near coming to a bipartisan agreement with Democrats.

Congress has a shoddy record of regulating tech. When it comes to actually reining in misinformation and disinformation campaigns on social media, no action has been taken, despite more than a decade of abuses.

Congress should have acted after the 2018 Cambridge Analytica scandal, in which Facebook allowed the harvesting of data from 87 million people without their consent. Cambridge Analytica worked for the Trump presidential campaign and used the data to try to influence elections. That was the moment when laws should have been enacted making social media platforms liable for spreading misinformation and disinformation during political campaigns.

I have long held that companies hosting platforms should be responsible for the material published on their sites in the same way that newspapers are held liable for the material they publish. It's the price companies pay for being in the news business. In the same vein, if an AI product creates content, then the company that hosts the platform should be held responsible for that information.

I am a strong defender of the First Amendment, but it's not absolute. We have already seen that misinformation and disinformation during political campaigns can be a dangerous threat to the foundation of our nation's democracy. If regulations to address those threats limit artificial intelligence's ability to reach its full potential, so be it.
