Los Angeles Times

Preventing a voter disinformation crisis

AI is turbocharging disinformation, especially in communities of color. The state must step in.

- By Bill Wong and Mindy Romero

As the general election campaign begins in earnest, we can expect disinformation attacks to target voters, especially in communities of color. This has happened before: In 2016, for example, Russia’s disinformation programs zeroed in on Black Americans, creating Instagram and Twitter accounts that masqueraded as Black voices and producing fake news websites such as blacktivist.info, blacktolive.org and blacksoul.us.

Advances in technology will make these efforts harder to recognize. Envision those same fake accounts and websites featuring hyper-realistic videos and images intended to sow racial division and mislead people about their voting rights. With the advent of generative artificial intelligence, that is possible at little to no cost, turbocharging the kind of disinformation that has always targeted communities of color.

It’s a problem for candidates, election offices and voter outreach groups in the months ahead. But voters themselves will ultimately have to figure out what is real and what is fake, what is authentic and what is AI-generated.

For immigrants and communities of color — who often face language barriers, distrust of democratic systems and limited access to technology — the challenge is likely to be more significant. Across the nation, and especially in states such as California with large communities of immigrants and people with limited knowledge of English, the government needs to help these groups identify and avoid disinformation.

Asian Americans and Latinos are particularly vulnerable. About two-thirds of the Asian American and Pacific Islander population are immigrants, and a Pew Research Center report states that “[86%] of Asian immigrants 5 and older say they speak a language other than English at home.” The same dynamics hold true for Latinos: Only 38% of the U.S. foreign-born Latino population reports being proficient in English.

Targeting non-English-speaking communities has several advantages for those who would spread disinformation. These groups are often cut off from mainstream news sources that have the greatest resources to debunk deepfakes and other disinformation, preferring online engagement in their native languages, where moderation and fact-checking are less prevalent.

Forty-six percent of Latinos in the U.S. use WhatsApp, while many Asian Americans prefer WeChat. Wired magazine reported that the platform “is used by millions of Chinese Americans and people with friends, family, or business in China, including as a political organizing tool.”

Disinformation aimed at immigrant communities is poorly understood and difficult to track and counteract, yet it is getting easier and easier to create. In the past, producing false content in non-English languages required intensive work from humans and was often low in quality. Now, AI tools can create hard-to-track, in-language disinformation at lightning speed and without the vulnerabilities and scaling problems posed by human limitations. Despite this, much research on misinformation and disinformation concentrates on English-language uses.

Attempts to target communities of color and non-English speakers with disinformation are aided by many immigrants’ heavy reliance on their mobile phones for internet access. Mobile user interfaces are particularly vulnerable to disinformation because many desktop design and branding elements are minimized in favor of content on smaller screens. With 13% of Latinos and 12% of African Americans dependent on mobile devices for broadband access, in contrast to 4% of white smartphone owners, they are more likely to receive — and share — false information.

Social media companies’ past efforts to counter voter disinformation have fallen short. Meta’s February announcement that it would flag AI-generated images on Facebook, Instagram and Threads is a positive but minor step toward stemming AI-generated disinformation, especially for ethnic and immigrant communities who may know little about its effects. Clearly, a stronger government response is needed.

The California Initiative for Technology and Democracy, or CITED, where we serve on the board of directors, will soon unveil a legislative package that would require broader transparency for generative AI content, making sure users of social media know what video, audio and images were made by AI tools. The bills would also require labeling of AI-assisted political disinformation on social media, prohibit campaign ads close to an election from using the technology and restrict anonymous trolls and bots.

In addition, CITED plans to hold a series of community forums around California with partner organizations rooted in their regions. The groups will speak directly to leaders in communities of color, labor leaders, local elected officials and other trusted messengers about the dangers of false AI-generated information likely to be circulating this election season.

The hope is that this information will be relayed at the community level, making voters in the state more aware and skeptical of false or misleading content and building trust in the election process, election results and our democracy.

Bill Wong is a campaign strategist and the author of “Better to Win: Hardball Lessons in Leadership, Influence, & the Craft of Politics.” Mindy Romero is a political sociologist and the director of the Center for Inclusive Democracy at the USC Price School of Public Policy.
