Facebook Live Raises Big Questions
On Easter, 37-year-old Steve Stephens drove around Glenville, Cleveland, on what he called an “Easter Sunday slaughter,” and soon garnered an audience of millions for the shooting of 74-year-old Robert Godwin Sr., which he posted on Facebook. In a series of turbulent Facebook posts, Stephens also claimed he had murdered fourteen random strangers.
Although the incident occurred on Sunday, April 16, the public did not hear from Facebook CEO Mark Zuckerberg until Tuesday. When he did speak, it was only a cursory mention during a speech at Facebook’s annual developer conference.
“We have a lot more to do here. And we’re reminded of that this week by the tragedy in Cleveland. And our hearts go out to the family and friends of Robert Godwin Sr. And we have a lot of work and we will keep doing all we can to prevent tragedies like this from happening,” Zuckerberg said.
The main issue people have with Facebook’s handling of the incident is that it took two hours to take down the footage. The victim’s grandson, Ryan Godwin, begged people to stop sharing it, writing on Twitter, “That is my grandfather show some respect.”
Facebook is currently testing multiple ways to filter out violent material, including artificial intelligence. It is unclear whether AI played any role in flagging Stephens’ footage.
Incidents in which people commit violent crimes and post them on Facebook are occurring with unprecedented frequency. Mental health experts warn that such livestreams risk desensitizing the public and inspiring copycat violence.
Legions of human Facebook moderators hunt for pornography, gore, sexual solicitation, sexual images of minors, racism, hateful taunts, and brutal violence. Users report offensive material to these moderators, who then decide whether to remove the content or disable the user’s account.
Justin Osofsky, vice president of global operations and media partnerships at Facebook, said the company disabled Stephens’ account within 23 minutes of receiving the first report about the murder video, and roughly two hours after receiving its first report about Stephens of any kind.
Facebook opened its Live feature to the public last year and has since been pushing its 2 billion users to try it out, running advertising campaigns and featuring livestreams in users’ news feeds. It is hard to understand why Facebook was so unprepared for the consequences of that push. Last July, the death of Philando Castile, a Minnesota man fatally shot by police during a traffic stop in the suburbs of St. Paul, was broadcast live across Facebook by his girlfriend. The video, labeled graphic and violent, did not capture the shooting itself; it remains accessible on Facebook. In January, three men in Sweden were arrested on suspicion of raping a woman and streaming the assault live to a private Facebook group. Sadly, no one called the police. In February, two radio journalists in the Dominican Republic were fatally shot during a Facebook Live broadcast.
For now, the company relies on human moderators and artificial intelligence to remove offensive or harmful content. This approach raises quite a few concerns. Last year, for instance, a Facebook moderator took down an iconic, Pulitzer Prize-winning photo from the Vietnam War depicting a naked girl running from a napalm strike; Facebook restored the photo after facing criticism. The problem is that artificial intelligence will inevitably make mistakes, and the company would be accused of censorship whenever an algorithm wrongly deleted photos or videos. That tension helps explain why Facebook can take so long to decide whether a video should come down. Eventually, though, AI will likely work alongside human moderators to identify offensive content more effectively.