AI turbocharges disinformation on voters
As the general election campaign begins in earnest, we can expect disinformation attacks to target voters, especially in communities of color.
This has happened before: In 2016, for example, Russia’s disinformation programs zeroed in on Black Americans, creating Instagram and Twitter accounts that masqueraded as Black voices and producing fake news websites such as blacktivist.info, blacktolive.org and blacksoul.us.
Advances in technology will make these efforts harder to recognize. Envision those same fake accounts and websites featuring hyper-realistic videos and images intended to sow racial division and mislead people about their voting rights.
With the advent of generative artificial intelligence, that is possible at little to no cost, turbocharging the kind of disinformation that has always targeted communities of color.
It’s a problem for candidates, election offices and voter outreach groups in the months ahead. But voters themselves will ultimately have to figure out what is real and what is fake, what is authentic and what is AI-generated.
For immigrants and communities of color — who often face language barriers, distrust democratic systems and lack technology access — the challenge is likely to be more significant.
Across the nation, and especially in states such as California with large communities of immigrants and people with limited knowledge of English, the government needs to help these groups identify and avoid disinformation.
Asian Americans and Latinos are particularly vulnerable. About two-thirds of the Asian American and Pacific Islander population are immigrants, and a Pew Research Center report states that 86 percent "of Asian immigrants 5 and older say they speak a language other than English at home."
The same dynamics hold true for Latinos: Only 38 percent of the U.S. foreign-born Latino population reports being proficient in English.
Targeting non-English-speaking communities has several advantages for those who would spread disinformation.
These groups are often cut off from mainstream news sources that have the greatest resources to debunk deepfakes and other disinformation, preferring online engagement in their native languages, where moderation and fact-checking are less prevalent.
Forty-six percent of Latinos in the U.S. use WhatsApp, while many Asian Americans prefer WeChat. Wired magazine reported that the platform “is used by millions of Chinese Americans and people with friends, family, or business in China, including as a political organizing tool.”
Disinformation aimed at immigrant communities is poorly understood and difficult to track and counteract, yet it is getting easier and easier to create.
In the past, producing false content in non-English languages required intensive work from humans and was often low in quality.
Now, AI tools can create hard-to-track, in-language disinformation at lightning speed and without the vulnerabilities and scaling problems posed by human limitations.
Despite this, much research on misinformation and disinformation concentrates on English-language uses.
Attempts to target communities of color and non-English speakers with disinformation are aided by many immigrants’ heavy reliance on their mobile phones for internet access.
Mobile user interfaces are particularly vulnerable to disinformation because many desktop design and branding elements are minimized in favor of content on smaller screens.
Bill Wong is a campaign strategist and the author of “Better to Win: Hardball Lessons in Leadership, Influence, & the Craft of Politics.” Mindy Romero is a political sociologist and the director of the Center for Inclusive Democracy at the USC Price School of Public Policy. This article was published in the Los Angeles Times and distributed by Tribune Content Agency.