The Guardian (USA)

Facebook and Instagram to label digitally altered content ‘made with AI’

- Reuters

Meta, owner of Facebook and Instagram, announced major changes to its policies on digitally created and altered media on Friday, before elections poised to test its ability to police deceptive content generated by artificial intelligence technologies.

The social media giant will start applying “Made with AI” labels in May to AI-generated videos, images and audio posted on Facebook and Instagram, expanding a policy that previously addressed only a narrow slice of doctored videos, the vice-president of content policy, Monika Bickert, said in a blogpost.

Bickert said Meta would also apply separate and more prominent labels to digitally altered media that poses a “particularly high risk of materially deceiving the public on a matter of importance”, regardless of whether the content was created using AI or other tools. Meta will begin applying the more prominent “high-risk” labels immediately, a spokesperson said.

The approach will shift the company’s treatment of manipulated content, moving from a focus on removing a limited set of posts toward keeping the content up while providing viewers with information about how it was made.

Meta previously announced a scheme to detect images made using other companies’ generative AI tools by using invisible markers built into the files, but did not give a start date at the time.

A company spokesperson said the labeling approach would apply to content posted on Facebook, Instagram and Threads. Its other services, including WhatsApp and Quest virtual-reality headsets, are covered by different rules.

The changes come months before a US presidential election in November that tech researchers warn may be transformed by generative AI technologies. Political campaigns have already begun deploying AI tools in places like Indonesia, pushing the boundaries of guidelines issued by providers like Meta and generative AI market leader OpenAI.

In February, Meta’s oversight board called the company’s existing rules on manipulated media “incoherent” after reviewing a video of Joe Biden posted on Facebook last year that altered real footage to wrongfully suggest the US president had behaved inappropriately.

The footage was permitted to stay up, as Meta’s existing “manipulated media” policy bars misleadingly altered videos only if they were produced by artificial intelligence or if they make people appear to say words they never actually said.

The board said the policy should also apply to non-AI content, which is “not necessarily any less misleading” than content generated by AI, as well as to audio-only content and videos depicting people doing things they never actually did.

‘The approach will shift the company’s treatment of manipulated content.’ Photograph: Sébastien Bozon/AFP/Getty Images
