Google, YouTube pushing to thwart zealotry
YouTube has struggled for years with videos that promote offensive viewpoints but do not necessarily violate the company’s guidelines for removal. Now it is taking a new approach: Bury them.
The issue has gained new prominence amid media reports that one of the London Bridge attackers became radicalized by watching YouTube videos of an American Islamic preacher, whose sermons have been described as employing extremely charged religious and sectarian language.
On Sunday, Google, YouTube’s parent company, announced a set of policies to curb extremist videos. For videos that are clearly in violation of its community guidelines, such as those promoting terrorism, Google said it would quickly identify and remove them. The process for handling videos that do not necessarily violate specific rules of conduct is more complicated.
Under the policy change, Google said offensive videos that did not meet its standard for removal — for example, videos promoting the subjugation of religions or races without inciting violence — would come with a warning and could not be monetized with advertising, or be recommended, endorsed or commented on by users. Such videos were already not allowed to include advertising, but they were not restricted in any other way.
“That means these videos will have less engagement and be harder to find,” Kent Walker, Google’s general counsel and senior vice president, wrote in a company blog post. “We think this strikes the right balance between free expression and access to information without promoting extremely offensive viewpoints.”
Google, which has relied on automated video analysis for the removal of most of its terrorism-related content, said it would devote more engineers to help identify and remove potentially problematic videos. It also said it would enlist experts from nongovernmental organizations to help determine which videos were violent propaganda and which were religious or newsworthy speech.
The Mountain View company said it would rely on the specialized knowledge of groups with experts on issues like hate speech, self-harm and terrorism. The company also said it planned to work with counter-extremist groups to help identify content that tried to radicalize or recruit extremists.
By allowing anyone to upload videos to YouTube, Google has created a thriving service that appeals to people with a wide range of interests. But it has also become a magnet for extremists, who can reach a wide audience for their racist or intolerant views.
Google has long wrestled with how to curb that content without inhibiting the freedom that makes YouTube popular.
Part of the challenge is the sheer volume of videos uploaded. The company has said that more than 400 hours of video content is uploaded every minute, and YouTube has been unable to police that in real time. Users flag offensive videos for review, while the company’s algorithms comb the site for potential problems. Videos with nudity, graphic violent footage or copyrighted material are usually taken down quickly.
Companies throughout the tech industry are working on how to keep services for user-generated content open without allowing them to become dens of extremism. Like YouTube, social media companies have found that policing content is a never-ending challenge. Last week, Facebook said it would use artificial intelligence and human moderators to root out extremist content. Twitter said it suspended 377,000 accounts in the second half of 2016 for violations related to the “promotion of terrorism.”
In the aftermath of terror attacks in Manchester and London, British Prime Minister Theresa May criticized large Internet companies for providing the “safe space” that allows radical ideologies to spread. According to news media reports, friends and relatives of Khuram Shazad Butt, identified as one of the three knife-wielding attackers on London Bridge, were worried about the influence of YouTube videos of sermons by Ahmad Musa Jibril, an Islamic cleric from Michigan.
Jibril’s sermons demonstrate YouTube’s quandary because he “does not explicitly call to violent jihad, but supports individual foreign fighters and justifies the Syrian conflict in highly emotive terms,” according to a report by the International Center for the Study of Radicalization and Political Violence.
A spokesman for YouTube said the new policies were not the result of any single violent episode, but part of an effort to improve its service. Google did not respond to a question about whether Jibril’s videos would fall under Google’s guidelines for videos containing inflammatory language but not violating its policies. Jibril still has videos on YouTube, but without ads.
In its blog post, Google acknowledged that “more needs to be done” to remove terrorism-related content. YouTube said it would do more in “counter-radicalization” efforts, including targeting potential Islamic State recruits with videos that could change their minds about joining the organization. Google said that in previous counter-radicalization attempts, users clicked on ads at an “unusually high rate” to watch videos that debunk terrorism recruitment messages.
Google also announced a series of measures to identify extremist videos more quickly, an effort that the company started this year as YouTube tries to assure advertisers that it is safe for their marketing dollars.
YouTube came under fire this year when the Times of London and other news outlets found examples of brands that inadvertently funded extremist groups through automated advertising — a byproduct of YouTube’s revenue-sharing model that provides content creators a portion of ad dollars.
Brands such as AT&T and Enterprise Rent-A-Car pulled ads from YouTube. Google responded by changing the types of videos that can carry advertising, blocking ads on videos with hate speech or discriminatory content. Google also created a system to allow advertisers to exclude specific sites and channels on YouTube and in Google’s display network.