Painful lessons for tech sites
Platforms better at handling violent videos from shootings, but still have a long way to go
These days, perpetrators of mass shootings like the one in last week’s supermarket attack in Buffalo, New York, don’t stop with planning out their brutal attacks. They also create marketing plans while arranging to livestream their massacres on social platforms in hopes of fomenting more violence.
Sites like Twitter, Facebook and now the game-streaming platform Twitch have learned painful lessons from dealing with the violent videos that often accompany such shootings. But experts are calling for a broader discussion around livestreams, including whether they should exist at all, since once such videos go online, they’re almost impossible to erase completely.
The self-described white supremacist gunman who police say killed 10 people, all of them Black, on Saturday had mounted a camera to his helmet to stream his assault live on Twitch, the same platform used in 2019 by another shooter who killed two people at a synagogue in Halle, Germany.
He had previously outlined his plan in a detailed but rambling set of online diary entries that were apparently posted publicly ahead of the attack, although it’s not clear how many people might have seen them. His goal: to inspire copycats and spread his racist beliefs.
He decided against streaming on Facebook, the platform yet another mass shooter used when he killed 51 people at two mosques in Christchurch, New Zealand, three years ago. Unlike Twitch, Facebook requires users to sign up for an account in order to watch livestreams.
By most accounts the platforms responded more quickly to halt the spread of the Buffalo video than they did after the 2019 Christchurch shooting, said Megan Squire, a senior fellow and technology expert at the Southern Poverty Law Center.
Another Twitch user watching the live video likely flagged it to Twitch’s content moderators, she said, which would have helped Twitch pull down the stream less than two minutes after the first gunshots, according to a company spokesperson. Twitch has not said how the video was flagged.
“In this case, they did pretty well,” Squire said. “The fact that the video is so hard to find right now is proof of that.”
In 2019, the Christchurch shooting was streamed live on Facebook for 17 minutes and spread to other platforms. This time, the platforms generally seemed to coordinate better, particularly by sharing digital “signatures” of the video used to detect and remove copies.
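The simplest form of that coordination is exact hash matching. The sketch below, in Python, shows how a shared-signature check could work in principle, assuming the “signatures” are plain file digests; the KNOWN_BAD_HASHES set, its placeholder entries and the function names are illustrative, not any platform’s actual system:

```python
import hashlib

# Hypothetical stand-in for an industry-shared database of signatures
# of known violent videos; the entries here are placeholders.
KNOWN_BAD_HASHES = {
    "placeholder_digest_1",
    "placeholder_digest_2",
}

def video_signature(path: str) -> str:
    """Compute a SHA-256 digest of a video file, streaming in 1 MB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def should_block(path: str) -> bool:
    """Flag an upload whose digest matches a shared signature."""
    return video_signature(path) in KNOWN_BAD_HASHES
```

An exact digest like this catches byte-identical re-uploads with nothing more than a set lookup, which is what makes sharing signatures across platforms cheap and fast.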
But platform algorithms can have a harder time identifying a copycat video if someone has edited it. That’s created problems, such as when some internet forum users remade the Buffalo video with twisted attempts at humor. Tech companies would have needed to use “more fancy algorithms” to detect those partial matches, Squire said.
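The limitation Squire describes follows from how digests work: change even one frame and a cryptographic hash changes completely. The “more fancy algorithms” are perceptual hashes, which shift only slightly under small edits. Below is an illustrative average-hash over a single 8x8 grayscale frame; the upstream frame decoding and downscaling, and the distance threshold, are assumptions made for the example:

```python
def average_hash(frame: list[list[int]]) -> int:
    """Hash an 8x8 grayscale frame: one bit per pixel, above/below the mean."""
    flat = [p for row in frame for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count the bits where two 64-bit hashes differ."""
    return (a ^ b).bit_count()

def frames_match(a: int, b: int, threshold: int = 8) -> bool:
    """Treat frames as copies if their hashes differ in only a few bits."""
    return hamming_distance(a, b) <= threshold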
Twitch has more than 2.5 million viewers at any given moment; roughly 8 million content creators stream video on the platform each month, according to the company. The site uses a combination of user reports, algorithms and moderators to detect and remove any violence that occurs on the platform.
Looking ahead, platforms may face new moderation complications from a Texas law — reinstated by an appellate court last week — that bans big social media companies from “censoring” users’ viewpoints.
The shooter “had a very specific viewpoint” and the law is unclear enough to create a risk for platforms that moderate people like him, said Jeff Kosseff, an associate professor of cybersecurity law at the U.S. Naval Academy. “It really puts the finger on the scale of keeping up harmful content,” he said.