San Francisco Chronicle

Meta unveils plan to label AI images on its social sites

- By Chase DiFeliciantonio. Reach Chase DiFeliciantonio: chase.difeliciantonio@sfchronicle.com; Twitter: @ChaseDiFelice

As generating images with artificial intelligence gets easier and better, determining which are real and which are digital composites is getting harder.

Menlo Park-based Meta, owner of Facebook, Instagram and Threads, said Tuesday it will begin labeling AI-generated images when possible in the coming months. The company said in a blog post from Nick Clegg, Meta’s head of global affairs, that it is working “with industry partners on common technical standards for identifying AI content,” including for video and audio.

Meta’s technology can then look for those digital markers, placed by Meta itself and by other AI companies, and inform users that AI was involved in making the images. Video, audio and, especially, text present more of a challenge.
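As a rough illustration, detection of that kind amounts to opening a file and inspecting its embedded metadata. The sketch below uses Python's Pillow library; the metadata key "ai_generated_by" and the reliance on the EXIF UserComment field are assumptions made for the example, not the standard Meta says it is developing with partners.

```python
# Illustrative only: checks an image file for a hypothetical AI-provenance
# marker stored in its metadata. The key name "ai_generated_by" and the use
# of the EXIF UserComment field are assumptions, not Meta's actual format.
from PIL import Image


def find_ai_marker(path):
    """Return a provenance note if one is embedded in the image's metadata."""
    with Image.open(path) as img:
        metadata = dict(img.info or {})   # PNG text chunks and similar fields
        exif = img.getexif()              # EXIF data for JPEG/TIFF images

    if "ai_generated_by" in metadata:
        return str(metadata["ai_generated_by"])

    comment = exif.get(0x9286)            # 0x9286 = EXIF UserComment tag
    if comment:
        text = comment.decode(errors="replace") if isinstance(comment, bytes) else str(comment)
        if "AI" in text:
            return text
    return None


if __name__ == "__main__":
    marker = find_ai_marker("example.png")
    print(f"AI marker found: {marker}" if marker else "No AI marker detected")
```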

Clegg said the company already labels images created with Meta’s AI image generator. He said while some other companies are starting to mark images as AI-generated, “they haven’t started including them in AI tools that generate audio and video at the same scale, so we can’t yet detect those signals and label this content from other companies.”

With elections in the U.S. and across the world slated for this year, concerns are mounting about how easily AI programs can generate false images and other content that would allow bad actors to promote disinformation about candidates and their positions.

“We’re building this capability now, and in the coming months we’ll start applying labels in all languages supported by each app,” Clegg wrote. “We’re taking this approach through the next year, during which a number of important elections are taking place around the world.”

The company said in January that all political advertisements would have to disclose when they use digitally altered images or video.

Google, which makes chatbots of its own, such as Bard, has taken a similar step when it comes to disclosing when AI is used in political advertising. The company also said in December it is working on tools that directly embed digital “watermarks” in AI-generated images and audio.

Clegg told Reuters that he is confident AI-generated images can be picked out using embedded digital markers, but admitted that detecting AI-based audio and video would be more difficult. He said identifyin­g AI-generated text likely would not be possible.

Kevin Guo, the CEO of Hive, a San Francisco company that makes technology able to identify when content is AI-generated, said he doubts Meta’s solution can be foolproof.

“I guess technically it could work if you got 100% buy-in from every developer that made such a model that was released,” Guo said. But he compared Meta’s approach to right-clicking on an image and adding text identifying a piece of content as AI-generated — which could then be removed.

“It requires everyone to leave it in. It requires everyone to include it and everyone who uses it to be a Good Samaritan,” Guo said.

His company — which he said works with large social media sites — uses a different approach, training its own AI models on content that is and is not AI-generated so that it can tell the difference when given a questionable piece of content.

“Think of it like an AI model fighting an AI model,” Guo said.
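A toy version of that idea, offered here as a sketch rather than anything Hive has published, would fine-tune an off-the-shelf image classifier on a folder of examples labeled “ai” and “real”. The dataset layout and training settings below are assumptions.

```python
# Toy sketch of the "AI model fighting an AI model" idea: fine-tune a small
# image classifier on examples labeled "ai" vs. "real". This is not Hive's
# system; the dataset layout (dataset/ai/*, dataset/real/*) is assumed.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Assumed folder layout: dataset/ai/... and dataset/real/...
train_data = datasets.ImageFolder("dataset", transform=transform)
loader = DataLoader(train_data, batch_size=32, shuffle=True)

# Pretrained backbone with a fresh two-class head (ai vs. real)
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch + 1}: last batch loss {loss.item():.3f}")
```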

Krishna Gade, a former Facebook engineer and now the CEO of AI trust and safety startup Fiddler AI, said the announcement was “definitely a good step forward.”

“The lines are getting blurred between what is generated by humans and what’s generated by the machine,” Gade said, adding that companies need to be open with users about how their labeling works and how confident they are in its accuracy.

OpenAI, which makes the popular ChatGPT chatbot and whose technology can generate images and more, also announced in a blog post that it is taking steps to identify AI-generated imagery, but admitted those markers could be stripped out by users.

Images generated with ChatGPT and the DALL·E 3 image generator will be tagged with metadata identifyin­g them as having been made with AI tools. But, the company said on X, “Since the metadata can be removed, its absence doesn’t mean an image is not from ChatGPT or our API.”
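That caveat is easy to demonstrate: rebuilding an image from its pixel values alone discards whatever metadata the original carried. The sketch below uses Python's Pillow library and placeholder file names.

```python
# Illustrates the caveat above: rebuilding an image from its pixels alone
# produces a file with no embedded metadata, so any provenance tag is lost.
# File names are placeholders.
from PIL import Image

with Image.open("tagged.png") as img:
    print("metadata before:", dict(img.info))  # may include an AI marker
    clean = Image.new(img.mode, img.size)
    clean.putdata(list(img.getdata()))          # copy pixel values only

clean.save("stripped.png")                      # written without metadata

with Image.open("stripped.png") as img:
    print("metadata after:", dict(img.info))    # the marker is gone
```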

Photo caption: Meta says it can place digital markers on AI images and find markers placed by other AI companies. (Thibault Camus/Associated Press)
