San Francisco Chronicle

YouTube says AI catching problem videos

- By Daisuke Wakabayashi

The vast majority of videos removed from YouTube toward the end of last year for violating the site’s content guidelines had first been detected by machines instead of humans, the Google-owned company said this week.

YouTube said it took down 8.28 million videos during the fourth quarter of 2017, and about 80 percent of those videos had initially been flagged by artificially intelligent computer systems.

The new data highlighted the significant role machines — not just users, government agencies and other organizations — are taking in policing the service as it faces increased scrutiny over the spread of conspiracy videos, fake news and violent content from extremist organizations.

Those videos are sometimes promoted by YouTube’s recommendation system and unknowingly financed by advertisers, whose ads are placed next to them through an automated system.

This was the first time that YouTube had publicly disclosed the number of videos it removed in a quarter; with no earlier figures to compare against, it is hard to judge how aggressive the platform has been in removing content, or the extent to which computers played a part in those decisions.

Figuring out how to remove unwanted videos — and balancing that with free speech — is a major challenge for the future of YouTube, said Eileen Donahoe, executive director at Stanford University’s Global Digital Policy Incubator.

“It’s basically free expression on one side and the quality of discourse that’s beneficial to society on the other side,” Donahoe said. “It’s a hard problem to solve.”

YouTube declined to disclose whether the number of videos it had removed had increased from the previous quarter or what percentage of its total uploads those 8.28 million videos represent. But the company said they represented “a fraction of a percent” of YouTube’s total views during the quarter.

Betting on improvements in artificial intelligence is a common Silicon Valley approach to dealing with problematic content; Facebook has also said it is counting on AI tools to detect fake accounts and fake news on its platform. But critics have warned against depending too heavily on computers to replace human judgment.

It is not easy for a machine to tell the difference between, for example, a video of a real shooting and a scene from a movie. And some videos slip through the cracks. Last year, parents complained that violent or provocative videos were finding their way to YouTube Kids, an app that is supposed to contain only child-friendly content that has automatically been filtered from the main YouTube site.

YouTube has contended that the volume of videos uploaded to the site is too large to rely on human monitors alone.

In December, Google said it would hire 10,000 people this year to address policy violations across its platforms. In a blog post Monday, YouTube said it had filled most of the jobs allotted to it, hiring specialists with expertise in violent extremism, counterterrorism and human rights, and expanding regional teams. It was not clear what YouTube’s final share of the total would be.

YouTube said three-quarters of all videos flagged by computers had been removed before anyone had a chance to watch them.

The company’s machines can detect when a person tries to upload a video that has already been taken down and will prevent that video from reappearin­g on the site. In some cases with videos containing nudity or misleading content, YouTube said its computer systems are adept enough to delete the video without requiring a human to review the decision.

The company said its machines are also getting better at spotting violent extremist videos, which tend to be harder to identify and have fairly small audiences.

At the start of 2017, before YouTube introduced its machine-learning technology to help computers identify videos associated with violent extremists, 8 percent of videos flagged and removed for that kind of content had fewer than 10 views. In the first quarter of this year, the company said, more than half of the videos flagged and removed for violent extremism had fewer than 10 views.

Even so, users still play a meaningful role in identifying problematic content. The top three reasons users flagged videos during the quarter involved content they considered sexual, misleading or spam, and hateful or abusive.

YouTube said users raised 30 million flags on roughly 9.3 million videos during the quarter. In total, 1.5 million videos were removed after first being flagged by users.

Daisuke Wakabayashi is a New York Times writer.

Photo: Google, which owns YouTube, had a big presence at the Consumer Electronics Show in Las Vegas last year. (Roger Kisby / New York Times 2017)
