Santa Fe New Mexican

On social media, no answers for hate

- By Sheera Frenkel, Mike Isaac and Kate Conger

SAN FRANCISCO — On Monday, a search on Instagram, the photo-sharing site owned by Facebook, produced a torrent of anti-Semitic images and videos uploaded in the wake of Saturday’s shooting at a Pittsburgh synagogue.

A search for the word “Jews” displayed 11,696 posts with the hashtag “#jewsdid911,” claiming that Jews had orchestrated the Sept. 11, 2001, terror attacks. Other hashtags on Instagram referenced Nazi ideology, including the number 88, an abbreviation used for the Nazi salute “Heil Hitler.”

The Instagram posts demonstrated a stark reality. Over the past 10 years, Silicon Valley’s social media companies have expanded their reach and influence to the farthest corners of the world. But it has become glaringly apparent that the companies never quite understood the negative consequences of that influence, nor what to do about it — and that they cannot put the genie back in the bottle.

“Social media is emboldening people to cross the line and push the envelope on what they are willing to say to provoke and to incite,” said Jonathan Albright, research director at Columbia University’s Tow Center for Digital Journalism. “The problem is clearly expanding.”

The repercussions of the social media companies’ inability to handle disinformation and hate speech have manifested themselves abundantly in recent days. Cesar Sayoc Jr., who was charged last week with sending explosive devices to prominent Democrats, appeared to have been radicalized online by partisan posts on Twitter and Facebook. Robert Bowers, who police say killed 11 people at the Tree of Life synagogue in Pittsburgh on Saturday, posted about his hatred of Jews on Gab, a 2-year-old social network.

The effects of social media were also evident globally. Close watchers of Brazil’s election Sunday ascribed much of the appeal of the victor, far-right populist Jair Bolsonaro, to what unfolded on social media there. Interests tied to Bolsonaro’s campaign appeared to have flooded WhatsApp, the messaging application owned by Facebook, with political content that gave wrong information on voting locations and times, offered false instructions on how to vote for particular candidates and outright disparaged one of Bolsonaro’s main opponents, Fernando Haddad.

Elsewhere, high-ranking members of the Myanmar military have used doctored messages on Facebook to foment anxiety and fear against the Muslim Rohingya minority group. And in India, fake stories on WhatsApp about child kidnappings led mobs to murder more than a dozen people this year.

“Social media companies have created, allowed and enabled extremists to move their message from the margins to the mainstream,” said Jonathan Greenblatt, chief executive of the Anti-Defamation League, a nongovernmental organization that combats hate speech. “In the past, they couldn’t find audiences for their poison. Now, with a click or a post or a tweet, they can spread their ideas with a velocity we’ve never seen before.”

Facebook said it was investigating the anti-Semitic hashtags on Instagram after the New York Times flagged them. Sarah Pollack, a Facebook spokeswoman, said in a statement that Instagram was seeing new posts and other content related to this weekend’s events and that it was “actively reviewing hashtags and content related to these events and removing content that violates our policies.”

YouTube said it had strict policies prohibiting content that promotes hatred or incites violence, and added that it took down videos that violated those rules.

Social media companies have said that identifying and removing hate speech and disinformation — or even defining what constitutes such content — is difficult. Facebook said this year that only 38 percent of hate speech on its site was flagged by its internal systems. In contrast, its systems pinpointed and took down 96 percent of what it defined as adult nudity, and 99.5 percent of terrorist content.

YouTube said users reported nearly 10 million videos from April to June for potentially violating its community guidelines. Just under 1 million of those videos were found to have broken the rules and were removed, according to the company’s data. YouTube’s automated detection tools also took down an additional 6.8 million videos in that period.

A study by researchers from MIT that was published in March found that falsehoods on Twitter were 70 percent more likely to be retweeted than accurate news.

Facebook, Twitter and YouTube have all announced plans to invest heavily in artificial intelligence and other technology aimed at finding and removing unwanted content from their sites. Facebook has also said it would hire 10,000 additional people to work on safety and security issues, and YouTube has said it planned to have 10,000 people dedicated to reviewing videos. Jack Dorsey, Twitter’s chief executive, recently said that although the company’s longtime principle was free expression, it was discussing how “safety should come first.”

