
Hateful, violent videos not the bulk of what YouTube removes

By Craig Timberg and Tony Romm

YouTube removed 7.8 million videos and 1.6 million channels in the third quarter of this year, mostly for spreading spam or posting inappropriate adult content, the company said in a report last week.

The Community Guidelines Enforcement Report comes amid growing questions about how YouTube monitors and deletes problematic content from the platform, including videos depicting violent extremism and hateful, graphic content. Such videos remain a small percentage of the overall number that YouTube deletes, but their prevalence has been the subject of news reports and congressional scrutiny.

The enforcement report, the fourth of its kind for the Google subsidiary, covers July through September and is the first to break out the reasons for removing videos. It is also the first to report the number of channels removed in their entirety for violating YouTube’s “community guidelines.” Channels are removed when they get three strikes within 90 days, or for a single particularly egregious offense, such as predatory behavior.

The report does not say how many videos get flagged by users as inappropriate but are not removed after company moderators review them.

“Finding all violative content on YouTube is an immense challenge, but we see this as one of our core responsibilities and are focused on continuously working towards removing this content before it is widely viewed,” the company said in a blog post published alongside the report.

The report offers little insight into how YouTube is managing the large volume of hateful, conspiratorial videos posted to the platform, or into its role as a video library for users of Gab.ai and 4chan, social media sites that are popular with racists, anti-Semites and others pushing extremist ideologies. Users of Gab and 4chan’s “Politically Incorrect” board link to YouTube thousands of times a day, more than to any other outside site, researchers have found.

The report said that 81 percent of videos that end up being removed are first detected by automated systems, and that in three out of four of those cases, the detection happened before a single user had viewed the video. However, YouTube and its parent company, Google, rely heavily on humans to help in the effort. Google had previously set a goal of having 10,000 people working on content moderation by the end of the year.

More than 90 percent of videos uploaded in September and removed for violating guidelines against violent extremism or child safety had fewer than 10 views. (Child safety is a broad category that includes videos portraying dangerous behaviors; child pornography amounts to a small percentage of the overall content.)

YouTube also removed 224 million comments during the three-month period covered by the report, mostly for violating rules against spam.

Conservatives have repeatedly accused YouTube and other leading technology companies of seeking to suppress their views, but others have pushed for the platform to act more aggressively against content that spreads clearly false and hateful messages.

During Tuesday’s congressional hearing, Rep. Jamie B. Raskin, D-Md., questioned Google chief executive Sundar Pichai about a report in The Washington Post on the spread of videos falsely claiming that Democrat Hillary Clinton had attacked, killed and drunk the blood of a girl. Pichai promised that the company would take more action to address such issues.

YouTube said that 6,195 videos it removed in September were found to have violated guidelines against “hateful or abusive” content, about 0.2 percent of the total deleted that month. And 94,400 videos, or 3.4 percent of the total deleted in September, were found to have violated guidelines against “violent or graphic” content. (YouTube did not provide comparable figures for the full three-month period covered by the report.)

YouTube’s report comes the same week that another tech giant, Twitter, released new data about its efforts to combat hate speech and other abusive content online. More than 6.2 million unique accounts were flagged to the company in the first six months of 2018 for violating its rules.
