Santa Fe New Mexican

Facebook: Users slow to report video of shooting

By Kelvin Chan and Anick Jesdanun, Associated Press

LONDON — Why did Facebook air live video of the New Zealand mosque shooting for 17 minutes? Didn’t anyone alert the company while it was happening?

Facebook says no. According to its deputy general counsel, Chris Sonderby, none of the 200 or so people who watched the live video flagged it to moderators. In a Tuesday blog post, Sonderby said the first user report didn’t come until 12 minutes after the broadcast ended.

All of which raises additional questions — among them, why so many people watched without saying anything, whether Facebook relies too much on outsiders and machines to report trouble, and whether users and law enforcement officials even know how to reach Facebook with concerns about what they’re seeing on the service.

“When we see things through our phones, we imagine that they are like a television show,” said Siva Vaidhyanathan, a professor of media studies at the University of Virginia. “They are at a distance, and we have no power.”

Facebook said it removed the video “within minutes” of being notified by New Zealand police. But since then, Facebook and other social media companies have had to contend with copies posted by others.

The shooting suspect carefully modeled his attack for an internet age as he livestreamed the killing of 50 people at two mosques in Christchurch, New Zealand.

Tim Cigelske, who teaches about social media at Marquette University in Milwaukee, said that while viewers have the same moral obligations to help as a bystander does in the physical world, people don’t necessarily know what to do.

“It’s like calling 911 in an emergency,” he said. “We had to train people and make it easy for them. You have to train people in a new way if you see an emergency happening not in person but online.”

To report live video, a user must know to click on a small set of three gray dots on the right side of the post. A user who clicks on “report live video” gets a choice of objectionable content types to select from, including violence, bullying and harassment. Users are also told to contact law enforcement if someone is in immediate danger.

Facebook uses artificial intelligence to detect objectionable material while relying on the public to flag content that violates its standards. Those reports are then sent to human reviewers, the company said in a November video.

The video also outlined how it uses “computer vision” to detect 97 percent of graphic violence before anyone reports it. However, it’s less clear how these systems apply to Facebook’s livestreaming.

Experts say live video poses unique challenges, and complaints about livestreaming suicides, murders and beatings regularly come up. Nonetheless, they say Facebook cannot deflect responsibility.

“If they cannot handle the responsibility, then it’s their fault for continuing to provide that service,” said Mary Anne Franks, a law professor at the University of Miami. She calls it “incredibly offensive and inappropriate” to pin responsibility on users subjected to traumatic video.
