Arkansas Democrat-Gazette

YouTube to require AI disclosures

- DAVEY ALBA. Information for this article was contributed by Shirin Ghaffary of Bloomberg News.

YouTube, the video platform owned by Alphabet Inc.’s Google, will soon require video-makers to disclose when they’ve uploaded manipulated or synthetic content that looks realistic — including video that has been created using artificial intelligence tools.

The policy update, which will go into effect sometime in the new year, could apply to videos that use generative AI tools to realistically depict events that never happened, or show people saying or doing something they didn’t actually say or do.

“This is especially important in cases where the content discusses sensitive topics such as elections, ongoing conflicts and public health crises or public officials,” Jennifer Flannery O’Connor and Emily Moxley, YouTube vice presidents of product management, said in a company blog post Tuesday. Creators who repeatedly choose not to disclose when they’ve posted synthetic content may be subject to content removal, suspension from the program that allows them to earn ad revenue or other penalties, the company said.

When the content is digitally manipulated or generated, creators must select an option to display YouTube’s new warning label in the video’s description panel. For certain types of content about sensitive topics — such as elections, ongoing conflicts and public health crises — YouTube will display a label more prominently, on the video player itself. The company said it would work with creators before the policy rolls out to make sure they understand the new requirements, and it is developing its own tools to detect when the rules are violated.

Google — which makes tools that can create generative AI content and owns platforms that can distribute such content far and wide — is facing new pressure to roll out the technology responsibly. Earlier Tuesday, Kent Walker, the company’s president of legal affairs, published a company blog post laying out Google’s “AI Opportunity Agenda,” a white paper with policy recommendations aimed at helping governments around the world think through developments in artificial intelligence.

“Responsibility and opportunity are two sides of the same coin,” Walker said in an interview. “It’s important that even as we focus on the responsibility side of the narrative that we not lose the excitement or the optimism around what this technology will be able to do for people around the world.”

Like other user-generated media services, Google and YouTube have been under pressure to mitigate the spread of misinformation across their platforms, including lies about elections and global crises like the covid-19 pandemic. Google has already started to grapple with concerns that generative AI could create a new wave of misinformation, announcing in September that it would require “prominent” disclosures for AI-generated election ads. Advertisers were told they must include language like “This audio was computer-generated” or “This image does not depict real events” on altered election ads across Google’s platforms. The company also said YouTube’s community guidelines, which prohibit digitally manipulated content that may pose a serious risk of public harm, already apply to all uploaded content.