META IS LABELING MORE AI-BUILT VIDEO, AUDIO AND IMAGES
The company also says it may add a more prominent label if the content has "a particularly high risk of materially deceiving the public on a matter of importance."
Meta -- owner of Facebook, Instagram, WhatsApp and Threads -- said Friday that it plans to expand efforts to label content that's been manipulated or generated by artificial intelligence. The move builds on earlier efforts and puts Meta's platforms among a growing number of services, including YouTube and TikTok, that are responding to the issue.
Meta said it will label video, audio and images as "Made with AI" either when its systems detect AI involvement, or when creators disclose it during an upload. The company also said it may add a more prominent label if the content has "a particularly high risk of materially deceiving the public on a matter of importance."
The company said it came to its decision while balancing transparency with the need to avoid unnecessarily restricting freedom of expression online.
"This overall approach gives people more information about the content so they can better assess it and so they will have context if they see the same content elsewhere," Monika Bickert, Meta's VP of content policy, wrote in a blog post.
The move marks another way the tech industry is responding to growing concerns about the pervasiveness of AI-generated content and its risk to the public. Videos generated by AI technology like OpenAI's Sora look increasingly lifelike. And though that tool hasn't been made widely available to the public, other AI technologies have already begun to cause public confusion and chaos.
Earlier this year, a political consultant made mass-scale robocalls using President Joe Biden's voice, re-created by AI, encouraging people in New Hampshire not to vote in the primary election. Experts say more AI disinformation is likely on the way, particularly with the upcoming 2024 presidential election.
Meta isn't the only social media company working to identify AI-powered content. TikTok said last year that it will launch a tool to help creators label manipulated content, noting that it also prohibits "deepfakes" -- videos, images or audio created to mislead viewers about real events or people. Meanwhile, Google's YouTube subsidiary began requiring creators to disclose AI-manipulated videos last month, saying that examples included "realistic" likenesses of people or scenes, as well as altered footage of real events or places.