Los Gatos Weekly Times
Stop AI disinformation before the 2024 election campaigns
As an editorial page editor, I encounter a wide range of issues capable of keeping me up at night.
Climate change. Homelessness. The economy.
But nothing causes me as much angst as the potential for misinformation and disinformation fueled by artificial intelligence to disrupt the 2024 election.
We may not be doing enough to fight climate change, homelessness and the economic downturn, but at least local, state and federal lawmakers have passed legislation designed to make things better.
I have little hope that anything will be done to combat the onslaught of fake news created by generative AI that is about to flood social media, where an ever-increasing number of Americans will get their news during the next election cycle.
It will be a black mark on Silicon Valley, the center of the technology industry. Who else is to blame if political disinformation creates more distrust in our elections and further erodes our democracy? Artificial intelligence is clearly the next big thing in tech. How can AI reach its full potential if it is seen as causing more harm than benefit, if it makes it impossible to determine what is true and what is false?
I know, I know. People in the industry are aware of the challenges and have urged lawmakers to regulate AI before it gets out of hand.
The Biden administration calls addressing the effects and future of AI a “top priority” for the president.
“We need to manage the risks to our society, our economy and national security,” Biden said June 20 in a Bay Area visit. “My administration is committed to safeguarding American rights and safety, to protecting privacy, to addressing bias and misinformation, to making sure the systems are safe before they are released.”
Rep. Ro Khanna, the Bay Area's leading congressmember on tech issues, agrees. “There is an urgent risk of disinformation with AI,” he said June 21.
Khanna backs the effort by Democrats in the House of Representatives to appoint a 15-person commission of ethicists, technologists, journalists and civic leaders and task them with issuing recommendations for regulations in the next six months.
“I believe those recommendations can inform decisive and substantive regulations,” Khanna said.
But Republicans, while having their own set of issues with AI, are nowhere near coming to a bipartisan agreement with Democrats.
Congress has a shoddy record of regulating tech. When it comes to actually reining in misinformation and disinformation campaigns on social media, no action has been taken, despite more than a decade of abuses.
Congress should have acted after the 2018 Cambridge Analytica scandal, in which Facebook allowed the harvesting of data from 87 million people without their consent. Cambridge Analytica worked for the Trump presidential campaign and used the data to try to influence elections. That was the moment when laws should have been enacted making social media platforms liable for spreading misinformation and disinformation during political campaigns.
I have long held that companies hosting platforms should be responsible for the material published on their sites in the same way that newspapers are held liable for the material they publish. It's the price companies pay for being in the news business. In the same vein, if an AI product creates content, then the company that hosts the platform should be held responsible for that information.
I am a strong defender of the First Amendment, but it's not absolute. We have already seen that misinformation and disinformation during political campaigns can be a dangerous threat to the foundation of our nation's democracy. If regulations to address those threats limit artificial intelligence's ability to reach its full potential, so be it.