Jamaica Gleaner

Tech companies sign accord to combat AI-generated election trickery


MAJOR TECHNOLOGY companies signed a pact Friday to voluntarily adopt “reasonable precautions” to prevent artificial intelligence tools from being used to disrupt democratic elections around the world.

Tech executives from Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI and TikTok gathered at the Munich Security Conference to announce a new voluntary framework for how they will respond to AI-generated deepfakes that deliberately trick voters. Twelve other companies – including Elon Musk’s X – are also signing on to the accord.

“Everybody recognises that no one tech company, no one government, no one civil society organisation is able to deal with the advent of this technology and its possible nefarious use on their own,” said Nick Clegg, president of global affairs for Meta, the parent company of Facebook and Instagram, in an interview ahead of the summit.

The accord is largely symbolic, but targets increasingly realistic AI-generated images, audio and video “that deceptively fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election, or that provide false information to voters about when, where, and how they can lawfully vote”.

The companies aren’t committing to ban or remove deepfakes. Instead, the accord outlines methods they will use to try to detect and label deceptive AI content when it is created or distributed on their platforms. It notes the companies will share best practices with each other and provide “swift and proportionate responses” when that content starts to spread.

The vagueness of the commitments and lack of any binding requirements likely helped win over a diverse swath of companies, but may disappoint pro-democracy activists and watchdogs looking for stronger assurances.

“The language isn’t quite as strong as one might have expected,” said Rachel Orey, senior associate director of the Elections Project at the Bipartisan Policy Center. “I think we should give credit where credit is due and acknowledge that the companies do have a vested interest in their tools not being used to undermine free and fair elections. That said, it is voluntary and we’ll be keeping an eye on whether they follow through.”

Clegg said each company “quite rightly has its own set of content policies”.

“This is not attempting to try to impose a straitjacket on everybody,” he said. “And, in any event, no one in the industry thinks that you can deal with a whole new technological paradigm by sweeping things under the rug and trying to play whack-a-mole and finding everything that you think may mislead someone.”

The agreement at the German city’s annual security meeting comes as more than 50 countries are due to hold national elections in 2024. Some have already done so, including Bangladesh, Taiwan, Pakistan, and most recently, Indonesia.

Attempts at AI-generated election interference have already begun, such as when AI robocalls that mimicked United States (US) President Joe Biden’s voice tried to discourage people from voting in New Hampshire’s primary election last month.

Just days before Slovakia’s elections in November, AI-generated audio recordings impersonated a liberal candidate discussing plans to raise beer prices and rig the election. Fact-checkers scrambled to identify them as false, but they were already widely shared as real across social media.

Politicians and campaign committees also have experimented with the technology, from using AI chatbots to communicate with voters to adding AI-generated images to ads.

Ahead of Indonesia’s election, the leader of a political party shared a video cloning the face and voice of the deceased dictator Suharto. The post on X disclosed the video was generated by AI, but some online critics called it a misuse of AI tools to intimidate and sway voters.

Friday’s accord said, in responding to AI-generated deepfakes, platforms “will pay attention to context and in particular to safeguarding educational, documentary, artistic, satirical, and political expression”.

It said the companies will focus on transparency to users about their policies on deceptive AI election content and work to educate the public about how they can avoid falling for AI fakes.

Many of the companies have previously said they’re putting safeguards on their own generative AI tools that can manipulate images and sound, while also working to identify and label AI-generated content so that social media users know if what they’re seeing is real. But most of those proposed solutions haven’t yet rolled out and the companies have faced pressure from regulators and others to do more.

That pressure is heightened in the US, where Congress has yet to pass laws regulating AI in politics, leaving AI companies to largely govern themselves. In the absence of federal legislation, many states are considering ways to put guard rails around the use of AI in elections and other applications.

The Federal Communications Commission recently confirmed AI-generated audio clips in robocalls are against the law, but that doesn’t cover audio deepfakes when they circulate on social media or in campaign advertisements.

Misinformation experts warn that while AI deepfakes are especially worrisome for their potential to fly under the radar and influence voters this year, cheaper and simpler forms of misinformation remain a major threat. The accord noted this too, acknowledging that “traditional manipulations (‘cheapfakes’) can be used for similar purposes”.

Many social media companies already have policies in place to deter deceptive posts about electoral processes — AI-generated or not. For example, Meta says it removes misinformation about “the dates, locations, times, and methods for voting, voter registration, or census participation” as well as other false posts meant to interfere with someone’s civic participation.

In addition to the major platforms that helped broker Friday’s agreement, other signatories include chatbot developers Anthropic and Inflection AI; voice-clone start-up ElevenLabs; chip designer Arm Holdings; security companies McAfee and Trend Micro; and Stability AI, known for making the image-generator Stable Diffusion.

Notably absent from the accord is another popular AI image-generator, Midjourney. The San Francisco-based start-up didn’t immediately return a request for comment on Friday.

The inclusion of X – not mentioned in an earlier announcement about the pending accord – was one of the biggest surprises of Friday’s agreement. Musk sharply curtailed content-moderation teams after taking over the former Twitter and has described himself as a “free speech absolutist”.

But, in a statement Friday, X CEO Linda Yaccarino said “every citizen and company has a responsibility to safeguard free and fair elections”.

“X is dedicated to playing its part, collaborating with peers to combat AI threats while also protecting free speech and maximising transparency,” she said.

Photo (AP): Meta’s President of Global Affairs Nick Clegg speaking at the World Economic Forum in Davos, Switzerland, on January 18, 2024. Adobe, Google, Meta, Microsoft, OpenAI, TikTok, and other companies announced a new voluntary framework for how they will respond to AI-generated deepfakes that deliberately trick voters, at the Munich Security Conference in Germany on Friday.
