The Korea Times

AI turbocharges disinformation on voters

By Bill Wong and Mindy Romero

As the general election campaign begins in earnest, we can expect disinformation attacks to target voters, especially in communities of color.

This has happened before: In 2016, for example, Russia's disinformation programs zeroed in on Black Americans, creating Instagram and Twitter accounts that masqueraded as Black voices and producing fake news websites such as blacktivist.info, blacktolive.org and blacksoul.us.

Advances in technology will make these efforts harder to recognize. Envision those same fake accounts and websites featuring hyper-realistic videos and images intended to sow racial division and mislead people about their voting rights.

With the advent of generative artificial intelligence, that is possible at little to no cost, turbocharging the kind of disinformation that has always targeted communities of color.

It's a problem for candidates, election offices and voter outreach groups in the months ahead. But voters themselves will ultimately have to figure out what is real and what is fake, what is authentic and what is AI-generated.

For immigrants and communities of color — who often face language barriers, distrust democratic systems and lack technology access — the challenge is likely to be more significant.

Across the nation, and especially in states such as California with large communities of immigrants and people with limited knowledge of English, the government needs to help these groups identify and avoid disinformation.

Asian Americans and Latinos are particularly vulnerable. About two-thirds of the Asian American and Pacific Islander population are immigrants, and a Pew Research Center report states that "(86 percent) of Asian immigrants 5 and older say they speak a language other than English at home."

The same dynamics hold true for Latinos: Only 38 percent of the U.S. foreign-born Latino population reports being proficient in English.

Targeting non-English-speaking communities has several advantages for those who would spread disinformation.

These groups are often cut off from mainstream news sources that have the greatest resources to debunk deepfakes and other disinformation, preferring online engagement in their native languages, where moderation and fact-checking are less prevalent.

Forty-six percent of Latinos in the U.S. use WhatsApp, while many Asian Americans prefer WeChat. Wired magazine reported that the platform “is used by millions of Chinese Americans and people with friends, family, or business in China, including as a political organizing tool.”

Disinformation aimed at immigrant communities is poorly understood and difficult to track and counteract, yet it is getting easier and easier to create.

In the past, producing false content in non-English languages required intensive work from humans and was often low in quality.

Now, AI tools can create hard-to-track, in-language disinformation at lightning speed and without the vulnerabilities and scaling problems posed by human limitations.

Despite this, much research on misinformation and disinformation concentrates on English-language uses.

Attempts to target communities of color and non-English speakers with disinformation are aided by many immigrants' heavy reliance on their mobile phones for internet access.

Mobile user interfaces are particularly vulnerable to disinformation because many desktop design and branding elements are minimized in favor of content on smaller screens.

Bill Wong is a campaign strategist and the author of "Better to Win: Hardball Lessons in Leadership, Influence, & the Craft of Politics." Mindy Romero is a political sociologist and the director of the Center for Inclusive Democracy at the USC Price School of Public Policy. This article was published in the Los Angeles Times and distributed by Tribune Content Agency.
