Sunday Independent (Ireland)

Steve Dempsey: When the news just confirms your opinions, you must take extra care


It’s a big year for democracy. Around two billion people are eligible to vote in a host of elections across the globe in 2024. These days, elections mean misinformation, plus handwringing by social media companies, legislators and media outlets about how this misinformation spreads.

Thankfully, we’re no longer in denial about the scale of the problem.

Meta CEO Mark Zuckerberg initially dismissed as “crazy” the notion that misinformation on Facebook affected voters, though he later admitted his mistake.

“Calling that ‘crazy’ was dismissive and I regret it,” he subsequently said.

There was plenty of evidence pointing to the dangers of misinformation on social media around elections. Russia’s Internet Research Agency was found to have been advocating for Donald Trump as a presidential candidate since December 2015.

There were plenty of bad actors in it for the money.

For example, Macedonian teenagers realised in 2016 they could post sensationalist fake stories on social media – and make a fortune via pay-per-click advertising.

Highlights include the Pope endorsing Trump and (ironically, in retrospect) a tale outlining how Barack Obama would refuse to leave the White House when his term ended.

But this year’s hand-wringing has gone up a notch.

Why? Because AI now has the potential to supercharge ideological and mercenary misinformation campaigns.

There are new concerns about deepfakes. A recent AI-generated audio clip of Joe Biden dissuading people from voting in the New Hampshire primaries turned out to be fake. And Rishi Sunak has been impersonated in a range of fake ads on Facebook.

A video of Volodymyr Zelensky calling for his troops to surrender, days into the Russian invasion of Ukraine, was also shown to be fake.

To make matters more confusing, deepfakes aren’t just being used by political opponents to sow confusion.

In India, political parties themselves are using deepfakes and AI to bolster their own campaigns. A video of a member of parliament for the ruling BJP talking to an audience in Hindi was doctored using AI to make three videos – the original in Hindi, and others in Haryanvi and English, for audiences who speak those languages.

But perhaps the biggest risk around AI isn’t the doctoring of videos and images; rather, it is the creation of bogus political stories at a scale previously unimaginable. A handful of Macedonian teenagers were dangerous: a handful of Macedonian teenagers with ChatGPT could be lethal.

How will we combat such a technology risk? With technology of course!

NewsGuard, a company that identifies misinformation sources and narratives, has just announced a suite of services designed to spot election-related AI-generated false information.

The company’s data and tools are publicly available to be licensed by large language model companies and others building their own AI tools who want to avoid creating or spreading misinformation.

“Elections in democracies were already targeted by misinformation, including from Russia and China, in the pre-AI era,” says Gordon Crovitz, co-CEO of NewsGuard and former publisher of the Wall Street Journal.

“We now have the AI-enhanced internet, empowering malign actors on behalf of Russia, China and Iran to use AI to create persuasive and entirely false claims. Voters and news consumers, beware!”

Crovitz is right. We need to be cautious. But perhaps the risks around misinforma­tion and elections are getting dragged into the AI hype cycle.

There’s a fortune to be made making frothy promises about the economic potential AI can unlock, or dire predictions about the risks it could pose. Every second business story I read these days seems to fall into one of these categories.

Yes, an AI-enhanced internet allows malign actors to produce more content that’s designed to be more divisive, not to mention more confusing in nature.

AI is rocket fuel, but the primary problem remains the distribution mechanisms of social media services, which are designed to increase users’ engagement on the platform, not their engagement in civic society.

Audiences need to exercise healthy suspicion when using the AI-powered internet, especially when they encounter ‘news’ that entrenches their own prejudices. And there’s no shortage of that type of story.

Rachel Coldicutt, a British technology expert, identified the problem. In a tweet she said: “Kate Middletongate happening at the start of the year of more than 60 elections is, if nothing else, a real-time testbed for the public’s media literacy.

“As long as platforms keep serving up content that reinforces people’s interests, we have a problem.”

