Business Day

Global South most at risk from fake news in 2024’s election deluge

• Observers say social media companies lack the teams, policies and local knowledge they need to contend with rapidly evolving technology

- Rina Chandran / Thomson Reuters Foundation

From deepfake videos of Indonesia's presidential contenders to online hate speech directed at India's Muslims, social media misinformation has been rising ahead of a bumper election year, and experts say tech platforms are not ready for the challenge.

Voters in Bangladesh, Indonesia, Pakistan and India go to the polls this year as more than 50 nations hold elections, including the US, where former president Donald Trump is looking to make a comeback.

Despite the high stakes and evidence from previous polls of how fake online content can influence voters, digital rights experts say social media platforms are ill-prepared for the inevitable rise in misinformation and hate speech.

Recent layoffs at big tech firms, new laws to police online content that have tied up moderators, and artificial intelligence (AI) tools that make it easier to spread misinformation could hurt poorer countries more, said Sabhanaz Rashid Diya, an expert in platform safety.

"Things have actually got worse since the last election cycle for many countries. The actors who abuse the platforms have got more sophisticated but the resources to tackle them haven't increased," said Diya, founder of Tech Global Institute.

"Because of the mass layoffs, priorities have shifted. Added to that is the large volume of new regulations ... platforms have to comply, so they don't have resources to proactively address the broader content ecosystem [and] the election integrity ecosystem," she told the Thomson Reuters Foundation.

"That will disproportionately impact the Global South," which generally gets fewer resources from tech firms, she said.

As generative AI tools, such as Midjourney, Stable Diffusion and DALL-E, make it cheap and easy to create convincing deepfakes, concern is growing about how such material could be used to mislead or confuse voters.

POLITICAL ADVERTS

AI-generated deepfakes have already been used to deceive voters from New Zealand to Argentina and the US, and authorities are scrambling to keep up with the tech even as they pledge to crack down on misinformation.

The EU — where elections for the European parliament will take place in June — requires tech firms to clearly label political advertising and say who paid for it, while India's IT Rules "explicitly prohibit the dissemination of misinformation", the ministry of electronics & information technology noted in December.

Alphabet's Google has said it plans to attach labels to AI-generated content and political ads that use digitally altered material on its platforms, including on YouTube, and also limit the election queries its Bard chatbot and AI-based search can answer.

YouTube's "elections-focused teams are monitoring real-time developments ... including by detecting and monitoring trends in risky forms of content and addressing them appropriately before they become larger issues," a spokesperson for YouTube said.

Facebook's owner, Meta Platforms — which also owns WhatsApp and Instagram — has said it will bar political campaigns and advertisers from using its generative AI products in advertisements.

Meta has a "comprehensive strategy in place for elections, which includes detecting and removing hate speech and content that incites violence, reducing the spread of misinformation, making political advertising more transparent [and] partnering with authorities to action content that violates local law," a spokesperson said.

X did not respond to a request for comment on its measures to tackle election-related misinformation. TikTok, which is banned in India, also did not respond.

Misinformation on social media has had devastating consequences ahead of, and after, previous elections in many of the nations where voters are going to the polls in 2024.

In Indonesia, which votes on February 14, hoaxes and calls for violence on social media networks spiked after the 2019 election result. At least six people were killed in subsequent unrest.

In Pakistan, where a national vote is scheduled for February 8, hate speech and misinformation were rife on social media ahead of the 2018 general election, which was marred by a series of bombings that killed scores across the country.

RESOURCES

In 2023, violent clashes following the arrests of supporters of jailed former prime minister Imran Khan led to internet shutdowns and the blocking of social media platforms. Former cricket hero Khan was arrested on corruption charges in 2023 and given a three-year prison sentence.

While social media firms have developed advanced algorithms to tackle misinformation and disinformation, "the effectiveness of these tools can be limited by local nuances and the intricacies of languages other than English", said Nuurrianti Jalli, an assistant professor at Oklahoma State University.

In addition, the critical US election and global events such as the Israel-Hamas conflict and the Russia-Ukraine war could “sap resources and focus that might otherwise be dedicated to preparing for elections in other locales”, she added.

In Bangladesh, violent protests erupted in the months ahead of the January 7 election. The vote was boycotted by the main opposition party and Prime Minister Sheikh Hasina won a fourth straight term.

Political ads on Facebook — the biggest social media platform in the country, with more than 44-million users — are routinely mislabelled or lack disclaimers and key details, revealing gaps in the platform's verification process, according to a recent study by tech research firm Digitally Right.

Separately, a report published in December by Diya's Tech Global Institute showed how difficult it was to determine the affiliation between Facebook pages and groups and Bangladesh's two leading political parties, or to figure out what constituted "authoritative information" from either party.

Facebook has not commented on the studies.

In the past year, Meta, X and Alphabet have rolled back at least 17 major policies designed to curb hate speech and misinformation, and laid off more than 40,000 people, including teams that maintained platform integrity, the US nonprofit Free Press said in a December report.

"With dozens of national elections happening around the world in 2024, platform-integrity commitments are more important than ever. However, major social media companies are not remotely prepared for the upcoming election cycle," civil rights lawyer Nora Benavidez wrote in the report.

"Without the policies and teams they need to moderate violative content, platforms risk amplifying confusion, discouraging voter engagement and creating opportunities for network manipulation to erode democratic institutions."

Some governments have responded to this perceived lack of control by introducing restrictive laws on online speech and expression, and these could lead social media platforms to over-enforce content moderation, tech experts said.

India, where Prime Minister Narendra Modi is widely expected to win a third term, has stepped up content removal demands, introduced individual liability provisions for firms, and warned companies that they could lose safe harbour safeguards that protect them from liability for third-party content if they do not comply.

WORRYING

"The legal obligation puts additional strains on platforms ... if safe harbour is at risk, the platform will inadvertently over-enforce, so it will end up taking down a lot more content," said Diya.

For Raman Jit Singh Chima, Asia policy director at nonprofit Access Now, the issue is preparation. He says big tech firms have failed to engage with civil society ahead of elections and have not provided enough information in local languages.

"Digital platforms are even more important for this election cycle but they are not set up to handle the problems around elections, and they are not being transparent about their measures to mitigate harms," he said. "It's very worrying."

False security: Digital rights experts say social media platforms are ill-prepared for the inevitable rise in misinformation and hate speech. /123RF/wrightstudio
