Sunday Times

Meta takes on election fraudsters

Social media giant briefs SA’s Electoral Commission about co-ordinated battle to stem misinformation

By ARTHUR GOLDSTUCK

Meta, the owner of the world’s biggest social media platforms, has engaged directly with the Electoral Commission (IEC) and parliament to ensure Facebook, Instagram and WhatsApp aren’t misused to manipulate this year’s elections.

Nick Clegg, former deputy prime minister of the UK and now president of global affairs at Meta, was in South Africa this week for a briefing and told Business Times in an exclusive interview that Meta had given the IEC extensive guidance on using its social media tools most effectively.

“We’ve done a significant amount of training with the Electoral Commission, including how they should use their WhatsApp bot to communicate with South Africans and give reliable information about the elections. But also with the parties and with committees in parliament. We’ve done a number of briefings on our election preparedness, and explained to them how our tools work,” Clegg said.

“They [the IEC and parliament] found it very helpful to understand exactly what tools we have in place, what teams we have in place, and how those teams draw on multiple domains in the company: legal, policy, engineering, and product.”

The biggest challenge for Meta is that 2024 will see the largest number of national elections in history.

“The thing that is new this year isn’t only the scale of the elections taking place but the nature of the technology which now might be brought to bear,” Clegg said.

“In other words, generative AI. On that, we’ve done a considerable amount of work. I spend probably the bulk of my time at the moment on exactly that: how do we make sure that we have the right guardrails in place, given this technology is so new.”

He said Meta had entered into a voluntary agreement with other major social and content platforms to tackle misinformation. “We’ve invested a vast amount of resources and a very significant amount of time. We have teams working around the clock. We analyse how our platforms are used in each election, and then what kind of vector of abuse — disinformation, misinformation and so on — there can be. And then we allocate resources accordingly.

“We do an extensive amount of analysis on what role our apps play because countries use our apps in slightly different ways. In some countries, most people use WhatsApp, but don’t use Messenger. In other countries, lots of people use Messenger and none use WhatsApp. Other countries use Instagram more than Facebook, and so on.”

A key to putting checks in place and covering as many bases as possible is that Meta is not trying to build these safeguards on its own.

“We realised we just can’t do it on our own. One of the things that we have done, especially for these election cycles, we leaned in to cross-industry co-operation, to make sure that we’re ready for all the major elections,” Clegg said.

The co-operation will be focused on technology tools designed to spot fake content produced by artificial intelligence (AI) as well as misinformation. “You can’t control or regulate something you can’t identify in the first place. Identifying the origin, the provenance, and being able to detect the genesis of synthetic content is really quite important,” Clegg said.

“Here’s the dilemma: If you use our AI image generation tool, Imagine, and produce a synthetic image, because it’s ours we will put a visible watermark on the bottom left-hand corner to make it very clear that it’s AI. Any user can see that it’s been synthetically produced. However ... in the relatively recent past, Stability AI, for instance, didn’t have any visible or, indeed, invisible watermarks.

“Let’s say you use their tools to generate the image, and then share it on Instagram and Facebook. In technical terms, we are ingesting it. What happens if there’s no detector, no invisible watermark that allows our system to say, ‘Aha, that’s a synthetic piece of content, we want to flag that for our users’?”
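The gap Clegg describes, content arriving on a platform with no marker its systems can detect, is easier to see with a toy sketch. The scheme below is purely illustrative and invented for this example (a single signature hidden in pixel low bits); real provenance systems rely on far more robust approaches, such as C2PA-style metadata and perceptual watermarks that survive editing:

```python
# Toy "invisible watermark": hide a fixed signature in the least-significant
# bits of the first few pixel values. Illustrative only; not a real scheme.

MARK = 0b1010110  # hypothetical 7-bit signature marking synthetic content
BITS = 7          # number of pixels used to carry the signature

def embed_mark(pixels, mark=MARK, bits=BITS):
    """Return a copy of `pixels` with `mark` written into the low bits."""
    out = list(pixels)
    for i in range(bits):
        bit = (mark >> i) & 1
        out[i] = (out[i] & ~1) | bit  # overwrite the lowest bit only
    return out

def detect_mark(pixels, mark=MARK, bits=BITS):
    """Reassemble the low bits and check whether they spell `mark`."""
    found = 0
    for i in range(bits):
        found |= (pixels[i] & 1) << i
    return found == mark

clean = [200, 13, 57, 88, 144, 9, 31, 250]  # unmarked third-party image
marked = embed_mark(clean)                   # image from a co-operating tool

print(detect_mark(marked))  # True: the signature is present
print(detect_mark(clean))   # False: nothing to detect, Clegg's dilemma
```

The last line is the point of his example: if the generating tool never embedded anything, the ingesting platform has nothing to look for, which is why the industry effort focuses on shared, interoperable standards rather than per-company schemes.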

“So I have teams who are working flat out. Myself and my opposite numbers from Microsoft and Google and other companies have signed an agreement on this work, and other related work, to deal with the risk of deep fakes and elections and so on.”

The agreement was signed by 20 technology companies at the 60th annual Munich Security Conference on February 16. Attended by 45 heads of state, along with ministers and representatives from business, the media, academia, and civil society, the event debated pressing issues of international security policy.

While the conference focused on major conflicts, it also considered the risks of AI to democracy as a “shared challenge” in a “super election year”.

The companies agreed to jointly prevent deceptive AI content from interfering with this year’s elections globally. The accord was signed, among others, by Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI, TikTok, and X. All pledged to work together to detect and counter harmful AI content.

According to a statement issued by the conference, the accord “is a set of commitments to deploy technology countering harmful AI-generated content meant to deceive voters”.

“Signatories pledge to work collaboratively on tools to detect and address online distribution of such AI content, drive educational campaigns, and provide transparency, among other concrete steps. It also includes a broad set of principles, including the importance of tracking the origin of deceptive election-related content and the need to raise public awareness about the problem.”

Christoph Heusgen, chair of the Munich Security Conference, said: “Elections are the beating heart of democracies. The Tech Accord ... is a crucial step in advancing election integrity, increasing societal resilience, and creating trustworthy tech practices.”

Clegg said they were working hard on common or interoperable standards of detection, provenance and watermarking.

“Candidly, when it comes to video and audio, it is a lot more complex to do that. And there’s a whole separate issue: what do you do when an image is screenshot and cropped and so on. Our AI research lab is doing a lot of work to try to develop tools of detection and provenance which would be entirely immune, which wouldn’t even require any invisible watermark. So we’re doing a lot of partnership work.”

So far, elections that have taken place have seen little manipulation through hidden use of AI, he said.

“I really don’t want to say this with a hint of complacency. It can change from one minute to the next. But so far, in those elections which have taken place, there has been the use of these AI tools but not nearly on the kind of society-wide election-disrupting scale that we might have feared.

“The key thing is you can’t sweep this technology under the carpet. The internet is going to be populated with either synthetic or hybrid content on such a scale that soon you are clearly not going to be able to play whack-a-mole with every single piece of content. But when it comes to elections, certainly for this year, given the technology is so nascent, this high level of industry co-operation is promising.”
