On social media, no answers for hate
SAN FRANCISCO — On Monday, a search on Instagram, the photo-sharing site owned by Facebook, produced a torrent of anti-Semitic images and videos uploaded in the wake of Saturday’s shooting at a Pittsburgh synagogue.
A search for the word “Jews” displayed 11,696 posts with the hashtag “#jewsdid911,” claiming that Jews had orchestrated the Sept. 11, 2001, terror attacks. Other hashtags on Instagram referenced Nazi ideology, including the number 88, an abbreviation used for the Nazi salute “Heil Hitler.”
The Instagram posts demonstrated a stark reality. Over the past 10 years, Silicon Valley’s social media companies have expanded their reach and influence to the farthest corners of the world. But it has become glaringly apparent that the companies never quite understood the negative consequences of that influence nor what to do about it — and that they cannot put the genie back in the bottle.
“Social media is emboldening people to cross the line and push the envelope on what they are willing to say to provoke and to incite,” said Jonathan Albright, research director at Columbia University’s Tow Center for Digital Journalism. “The problem is clearly expanding.”
The repercussions of the social media companies’ inability to handle disinformation and hate speech have manifested themselves abundantly in recent days. Cesar Sayoc Jr., who was charged last week with sending explosive devices to prominent Democrats, appeared to have been radicalized online by partisan posts on Twitter and Facebook. Robert Bowers, who police say killed 11 people at the Tree of Life synagogue in Pittsburgh on Saturday, posted about his hatred of Jews on Gab, a 2-year-old social network.
The effects of social media were also evident globally. Close watchers of Brazil’s election Sunday ascribed much of the appeal of the victor, far-right populist Jair Bolsonaro, to what unfolded on social media there. Interests tied to Bolsonaro’s campaign appeared to have flooded WhatsApp, the messaging application owned by Facebook, with political content that gave wrong information on voting locations and times, offered false instructions on how to vote for particular candidates and outright disparaged one of Bolsonaro’s main opponents, Fernando Haddad.
Elsewhere, high-ranking members of the Myanmar military have used doctored messages on Facebook to foment anxiety and fear against the Muslim Rohingya minority group. And in India, fake stories on WhatsApp about child kidnappings led mobs to murder more than a dozen people this year.
“Social media companies have created, allowed and enabled extremists to move their message from the margins to the mainstream,” said Jonathan Greenblatt, chief executive of the Anti-Defamation League, a nongovernmental organization that combats hate speech. “In the past, they couldn’t find audiences for their poison. Now, with a click or a post or a tweet, they can spread their ideas with a velocity we’ve never seen before.”
Facebook said it was investigating the anti-Semitic hashtags on Instagram after the New York Times flagged them. Sarah Pollack, a Facebook spokeswoman, said in a statement that Instagram was seeing new posts and other content related to this weekend’s events and that it was “actively reviewing hashtags and content related to these events and removing content that violates our policies.”
YouTube said it had strict policies prohibiting content that promotes hatred or incites violence, and added that it removes videos that violate those rules.
Social media companies have said that identifying and removing hate speech and disinformation — or even defining what constitutes such content — is difficult. Facebook said this year that only 38 percent of hate speech on its site was flagged by its internal systems. In contrast, its systems pinpointed and took down 96 percent of what it defined as adult nudity, and 99.5 percent of terrorist content.
YouTube said users reported nearly 10 million videos from April to June for potentially violating its community guidelines. Just under 1 million of those videos were found to have broken the rules and were removed, according to the company’s data. YouTube’s automated detection tools also took down an additional 6.8 million videos in that period.
A study by MIT researchers published in March found that falsehoods on Twitter were 70 percent more likely to be retweeted than accurate news.
Facebook, Twitter and YouTube have all announced plans to invest heavily in artificial intelligence and other technology aimed at finding and removing unwanted content from their sites. Facebook has also said it would hire 10,000 additional people to work on safety and security issues, and YouTube has said it planned to have 10,000 people dedicated to reviewing videos. Jack Dorsey, Twitter’s chief executive, recently said that although the company’s longtime principle was free expression, it was discussing how “safety should come first.”