Waikato Times

Battle against vile videos gets results

- DAISUKE WAKABAYASHI

Most videos removed from YouTube towards the end of last year for violating the site’s content guidelines had first been detected by machines instead of humans, the Google-owned company said.

YouTube said it took down 8.28 million videos during the fourth quarter of 2017, and about 80 per cent of those videos had initially been flagged by artificially intelligent computer systems.

The new data highlighted the significant role machines, not just users, government agencies and other organisations, were taking in policing the service as it faced increased scrutiny over the spread of conspiracy videos, fake news and violent content from extremist organisations.

Those videos are sometimes promoted by YouTube’s recommendation system and unknowingly financed by advertisers, whose ads are placed next to them through an automated system.

This was the first time that YouTube had publicly disclosed the number of videos it removed in a quarter, making it hard to judge how aggressive the platform had previously been in removing content, or the extent to which computers played a part in those decisions.

Figuring out how to remove unwanted videos and balancing that with free speech was a major challenge for the future of YouTube, said Eileen Donahoe, executive director at Stanford University’s Global Digital Policy Incubator.

‘‘It’s basically free expression on one side and the quality of discourse that’s beneficial to society on the other side,’’ Donahoe said. ‘‘It’s a hard problem to solve.’’

YouTube declined to disclose whether the number of videos it had removed had increased from the previous quarter or what percentage of its total uploads those 8.28 million videos represented. But the company said the takedowns represented ‘‘a fraction of a per cent’’ of YouTube’s total views during the quarter.

Betting on improvements in artificial intelligence is a common Silicon Valley approach to dealing with problematic content; Facebook has also said it is counting on AI tools to detect fake accounts and fake news on its platform. But critics have warned against depending too heavily on computers to replace human judgment.

It is not easy for a machine to tell the difference between, for example, a video of a real shooting and a scene from a movie. And some videos slip through the cracks, with embarrassing results.

Last year, parents complained that violent or provocative videos were finding their way to YouTube Kids, an app that is supposed to contain only child-friendly content that has automatically been filtered from the main YouTube site.

YouTube has contended that the volume of videos uploaded to the site is too great to rely only on human monitors.

Still, in December, Google said it was hiring 10,000 people in 2018 to address policy violations across its platforms.

YouTube said it had filled the majority of the jobs that had been allotted to it, including specialists with expertise in violent extremism, counterterrorism and human rights, as well as expanding regional teams. It was not clear what YouTube’s final share of the total would be.

Still, YouTube said three-quarters of all videos flagged by computers had been removed before anyone had a chance to watch them.

The company’s machines can detect when a person tries to upload a video that has already been taken down and will prevent that video from reappearin­g on the site. And in some cases with videos containing nudity or misleading content, YouTube said its computer systems are adept enough to delete the video without requiring a human to review the decision.

The company said its machines are also getting better at spotting violent extremist videos, which tend to be harder to identify and have fairly small audiences.

At the start of 2017, before YouTube introduced so-called machine-learning technology to help computers identify videos associated with violent extremists, 8 per cent of videos flagged and removed for that kind of content had fewer than 10 views. In the first quarter of 2018, the company said, more than half of the videos flagged and removed for violent extremism had fewer than 10 views.

Even so, users still play a meaningful role in identifying problematic content. The top three reasons users flagged videos during the quarter involved content they considered sexual, misleading or spam, and hateful or abusive.

YouTube said users had raised 30 million flags on roughly 9.3 million videos during the quarter. In total, 1.5 million videos were removed after first being flagged by users.

GETTY IMAGES: In December, Google said it was hiring 10,000 people this year to address policy violations across its platforms.
