Los Angeles Times

Shooting put YouTube to test

Crisis teams fought an uphill battle to delete uploads of the deadly New Zealand attack.

By Elizabeth Dwoskin and Craig Timberg

As a grisly video recorded by the alleged perpetrator of Friday’s bloody massacres at two New Zealand mosques played out on YouTube and other social media, Neal Mohan, 3,700 miles away in San Bruno, Calif., had the sinking realization that his company was going to be overmatched — again.

Mohan, YouTube’s chief product officer, had assembled his war room — a group of senior executives known internally as “incident commanders” who jump into crises, such as when footage of a suicide or shooting spreads online.

The team worked through the night, trying to identify and remove tens of thousands of videos — many repackaged or recut versions of the original footage that showed the horrific murders. As soon as the group took down one, another would appear, as quickly as one per second in the hours after the shooting, Mohan said.

As its efforts faltered, the team finally took unprecedented steps — including temporarily disabling several search functions and cutting off human review features to speed the removal of videos flagged by automated systems. Many of the new clips had been altered in ways that outsmarted the company’s detection systems.

“This was a tragedy that was almost designed for the purpose of going viral,” Mohan said in an interview that offered YouTube’s first detailed account of how the crisis unfolded inside the world’s largest video site. “We’ve made progress, but that doesn’t mean we don’t have a lot of work ahead of us, and this incident has shown that — especially in the case of more viral videos like this one — there’s more work to be done.”

The uploads came more rapidly and in far greater volume than during previous mass shootings, Mohan said. Video, mainly from victims’ points of view, spread online from the shootings at a concert in Las Vegas in October 2017 and at a Pittsburgh synagogue last October. But neither incident included a livestream recorded by the perpetrator. In New Zealand, the shooter apparently wore a body-mounted camera as he fired into crowds of worshipers.

Each public tragedy that has played out on YouTube has exposed a profound flaw in its design that allows hate and conspiracies to flourish online. YouTube is one of the crown jewels of Google’s stable of massively profitable and popular online services, but for many hours, it could not stop the flood of users who uploaded and re-uploaded the footage showing the mass murder of Muslims. About 24 hours later — after round-the-clock toil — company officials felt the problem was increasingly controlled, but acknowledged that the broader challenges were far from resolved.

“Every time a tragedy like this happens we learn something new, and in this case it was the unprecedented volume” of videos, Mohan said. “Frankly, I would have liked to get a handle on this earlier.”

The company — which has come under increasing fire for allowing Russians to interfere in the 2016 U.S. presidential election through its site and for being slow to catch inappropriate content — has worked behind the scenes for more than a year to improve its systems for detecting and removing problematic videos. It has hired thousands of human content moderators and has built new software that can direct viewers to more authoritative news sources more quickly during times of crisis. But YouTube’s struggles during and after the New Zealand shooting have brought into sharp relief the limits of the computerized systems and operations that Silicon Valley companies have developed to manage the massive volumes of user-generated content on their sprawling services.

In this case, humans determined to beat the company’s detection tools won the day — to the horror of people watching around the world.

YouTube was not alone in struggling to control the fallout Friday and over the weekend. The rapid online dissemination of videos of the terrorist attack — as well as a manifesto, apparently written by the shooter, that railed against Muslims and immigrants — seemed shrewdly planned to reach as many people online as possible.

The attack at one of the two mosques was livestreamed by the alleged shooter on Facebook, and it was almost instantaneously uploaded to other video sites, most prominently YouTube. The shooter appealed to online communities, particularly supporters of YouTube star PewDiePie, to share the video. (PewDiePie, whose real name is Felix Arvid Ulf Kjellberg, swiftly disavowed him.)

Many of the uploaders made small modifications to the video, such as adding watermarks or logos to the footage or altering the size of the clips, to defeat YouTube’s ability to detect and remove it. Some even turned the people in the footage into animations, as if a video game were playing out. For many hours, video of the attack could be easily found using search terms as basic as “New Zealand.”

Facebook said it removed 1.5 million videos depicting images from the shooting in the first 24 hours after it happened — with 1.2 million of those blocked by software at the moment of upload. Reddit, Twitter and other platforms also scrambled to limit the spread of content related to the attack. YouTube declined to say how many videos it removed.

YouTube has been under fire over the last two years for spreading Russian disinformation, violent extremism, hateful conspiracy theories and inappropriate children’s content.

Just in the last month, there have been scandals over pedophiles using YouTube’s comment system to highlight sexualized images of children and, separately, a Florida pediatrician’s discovery that tips on how to commit suicide had been spliced into children’s videos on YouTube and its children-focused app, YouTube Kids.

Pedro Domingos, a professor of computer science at the University of Washington, said that artificial intelligence is much less sophisticated than many people believe and that as Silicon Valley companies compete for business, they often portray their systems as more powerful than they actually are. In fact, even the most advanced artificial intelligence systems still are fooled in ways that a human would easily detect.

“They’re kind of caught in a bind when something like this happens because they need to explain that their AI is really fallible,” Domingos said. “The AI is really not entirely up to the job.”

Other experts believe that the continuous spread of horrific content cannot be weeded out completely by social media companies when the core feature of their products enables people to post content publicly without prior review. Even if the companies hired tens of thousands more moderators, the decisions these humans make are prone to subjectivity and error — and AI will never be able to make the subtle judgment calls needed in many cases.

Former YouTube engineer Guillaume Chaslot, who left the company in 2013 and now runs the watchdog group AlgoTransparency, said YouTube has not made the systemic fixes necessary to make its platform safe — and probably won’t without more public pressure.

“Unless users stop using YouTube, they have no real incentive to make big changes,” he said. “It’s still whack-a-mole fixes, and the problems come back every time.”

Political pressure is growing. Sen. Mark R. Warner (D-Va.) singled out YouTube in a sharply worded statement Friday. And both Democrats and Republicans have called on social media companies to be more aggressive in policing their platforms to better control the spread of extremist, hateful ideologies and the violence they sometimes provoke.

YouTube executives say they began addressing content problems more aggressively in late 2017 and early 2018. Around that time, Mohan tapped one of his most trusted deputies, Jennifer O’Connor, to help reorganize the company’s approach to trust and safety and to build a playbook for emerging problems. The teams created an “intel desk” and identified incident commanders who could leap into action during crises. The intel desk examines emerging trends not only on YouTube but also on other popular sites, such as Reddit.

The company announced it was hiring as many as 10,000 content moderators across all of Google to review problematic videos and other content that have been flagged by users or by AI software.

Executives also shored up YouTube’s software tools, particularly in response to breaking news incidents. They quietly built software, called a “breaking news shelf” and a “top news shelf,” that is triggered when a major news incident occurs and people are going to YouTube to find information, either by searching for it or by coming across it on the homepage.

The breaking news shelf uses signals from Google News and other sources to show content from more authoritative sources, such as mainstream media organizations, sometimes bypassing the content that everyday users upload. Engineers also built a “developing news card,” which pops up on top of the main screen to give people information about a crisis even before they search.
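
The article does not describe the shelf’s internals. As a rough illustration of the general idea it describes (an allowlist of authoritative outlets boosted whenever a breaking-news event is detected), here is a minimal sketch; the allowlist, data shape and scoring are invented for the example.

```python
# Illustrative sketch only: the article says the breaking news shelf relies on
# signals from Google News to favor authoritative outlets during major events.
# The allowlist, data shape and scoring below are assumptions, not YouTube's code.
from typing import List, Tuple

AUTHORITATIVE = {"apnews.com", "reuters.com", "bbc.com"}  # hypothetical allowlist

def rank_results(results: List[Tuple[str, str, float]],
                 breaking_news: bool) -> List[Tuple[str, str, float]]:
    """results holds (title, source_domain, relevance_score) tuples.
    During a breaking-news event, items from authoritative outlets are ranked
    ahead of ordinary user uploads, regardless of raw relevance."""
    def sort_key(item: Tuple[str, str, float]):
        _title, source, relevance = item
        boost = 1 if (breaking_news and source in AUTHORITATIVE) else 0
        return (boost, relevance)
    return sorted(results, key=sort_key, reverse=True)
```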

More recently, the company said it made changes to its recommendation algorithms, the popular content-suggestion software that is the way most users discover new videos.

The breaking news software worked as designed during the school shooting in Parkland, Fla., in February 2018, O’Connor said in an interview. But over the following days, another unexpected development emerged: Survivors of the school shooting began to be harassed online. Some videos alleging that these students were “crisis actors” and not true victims became extremely popular on YouTube.

Though the site had banned harassment since mid-2017, YouTube moderators were still learning how to apply its policies, O’Connor said, acknowledging that mistakes were made.

Like the Parkland shooting, the New Zealand shooting presented another set of challenges that stressed the company’s systems, Mohan said.

When the original video was uploaded Thursday evening, Mohan said, the company’s breaking news shelf kicked in, as did the developing news cards, which ran as banners for all YouTube users to see. Basic searches directed viewers to authoritative sources, and the autocomplete feature was not suggesting inappropriate words as it had during other incidents.

Engineers also immediately “hashed” the video, meaning that artificial intelligence software would be able to recognize uploads of carbon copies, along with some permutations, and could delete them automatically. Hashing techniques are widely used to prevent abuses of movie copyrights and to stop the re-uploading of identical videos of child pornography or those featuring terrorist recruitment.

But in this case, the hashing system was no match for the tens of thousands of permutations of video being uploaded about the shooting in real time, Mohan said. Although hashing technology can recognize simple variations — such as if a video is sliced in half — it cannot anticipate animations or two- to three-second snippets of content, particularly if the video is altered in some way.
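
YouTube has not published how its matching works, but the behavior described above is characteristic of perceptual hashing. The sketch below is a hedged illustration of that general technique, not the company’s system: a simple “difference hash” gives near-identical frames near-identical bit strings, so a watermark or resize still matches within a small distance threshold, while an animated re-creation changes most of the bits and slips through. The function names and threshold are invented.

```python
# Illustrative sketch only, not YouTube's implementation: a tiny perceptual
# "difference hash" over a grayscale frame, plus fuzzy matching against a
# blocklist of hashes taken from the original footage.
from typing import List

def dhash_bits(gray_frame: List[List[int]]) -> int:
    """Each bit records whether a pixel is brighter than its right-hand
    neighbor, so the hash tracks the frame's structure, not exact pixels."""
    bits = 0
    for row in gray_frame:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of bits on which two hashes differ."""
    return bin(a ^ b).count("1")

def matches_known_footage(frame_hash: int, blocklist: List[int],
                          max_distance: int = 6) -> bool:
    """A watermark or resize flips only a few bits, so matching tolerates a
    small Hamming distance; an animated re-creation flips most of them and
    falls outside the threshold, which is the gap described above."""
    return any(hamming(frame_hash, known) <= max_distance for known in blocklist)
```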

“Like any piece of machine learning software, our matching technology continues to get better, but frankly, it’s a work in progress,” Mohan said.

Moreover, many news organizations chose not to use the name of the alleged shooter, so people who uploaded videos about the shooting used different keywords and captions to describe their posts, presenting a challenge to the company’s detection systems and its ability to surface safe and trustworthy content.

Mohan said he agreed with the editorial decision not to name shooters, but the name of a shooter is one of the most common search terms people use and a big clue for AI software.

The night of the shooting, Mohan worried that the company wasn’t moving quickly enough to address the problems. He made the unusual decision to suspend a core part of the company’s operations process: the use of human moderators.

Under normal circumstances, software flags problematic content and routes it to human moderators. The reviewers then watch the video and make a decision.

But that system wasn’t working well enough during the crisis, so Mohan and other senior executives decided to bypass it in favor of software that could detect the most violent portions of the video. That meant the AI was in the driver’s seat to make a final and immediate call, enabling the company to block content far more quickly.

But the decision came with a huge trade-off, Mohan said: Many videos that were not problematic got swept up in the automatic deletions.

“We made the call to basically err on the side of machine intelligence as opposed to waiting for human review,” he said. The publishers whose videos were erroneously deleted can file an appeal with the company, he said.
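
The article does not detail how the switch was implemented. The sketch below is only an assumed illustration of the trade-off Mohan describes: normally a flagged upload waits for a human decision, but in crisis mode the classifier’s confidence alone triggers removal, which is faster but also takes down harmless videos that must then be restored on appeal. The mode flag, threshold and function names are hypothetical.

```python
# Illustrative sketch only: a two-mode triage step, not YouTube's pipeline.
# CRISIS_MODE and the threshold are invented to show the trade-off described
# in the article, not real configuration values.

CRISIS_MODE = True        # hypothetically toggled by the "incident commanders"
REMOVAL_THRESHOLD = 0.8   # illustrative classifier-confidence cutoff

def triage(upload_id: str, violence_score: float) -> str:
    """Return the action taken on a flagged upload."""
    if violence_score < REMOVAL_THRESHOLD:
        return "leave up"               # classifier is not confident enough
    if CRISIS_MODE:
        # Machine makes the final call: faster removal, but harmless videos
        # can be swept up and must be restored on appeal.
        return "remove immediately"
    return "queue for human review"     # normal path: a moderator decides
```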

By mid-Friday, Mohan still wasn’t satisfied with the results. He made another decision: to disable the company’s tool that allows people to search for “recent uploads.”

As of Monday, both the recent upload search and the use of moderators were still blocked. YouTube said they would stay disabled until the crisis subsides.

The company acknowledges that this is not a final fix.

Photo: YOUTUBE Chief Product Officer Neal Mohan, shown in January, says the mosque attack was “a tragedy that was almost designed for the purpose of going viral.” (Isaac Brekken / Variety/REX/Shutterstock)

Photo: CRISIS TEAMS at YouTube took unprecedented steps to speed the removal of videos of the mosque massacre in New Zealand. Above, a worshiper prays at a memorial on Tuesday at the Al Noor Mosque in Christchurch. (Mick Tsikas / EPA/Shutterstock)
