Houston Chronicle

Facebook expands efforts to root out extremists on site

By Davey Alba

Facebook on Tuesday announced a series of changes to limit hate speech and extremism on the social network, expanding its definition of terrorist organizations and planning to deploy artificial intelligence to better spot and block live videos of shooters.

The company is also expanding a program that redirects users searching for extremism to resources intended to help them leave hate groups behind.

The announcement came the day before a hearing on Capitol Hill on how Facebook, Google and Twitter handle violent content. Lawmakers are expected to ask executives how they are handling posts from extremists.

Facebook, the world’s largest social network, has been under intense pressure to limit the spread of hate messages, pictures and videos on its site. It has also faced harsh criticism for not detecting and removing the live video of an Australian man who killed 51 people in Christchurch, New Zealand.

In at least three mass shootings this year, including the one in Christchurch, the violent plans were announced in advance on 8chan, an online message board. Federal lawmakers questioned the owner of 8chan this month.

In its announcement post, Facebook said the Christchurch tragedy “strongly” influenced its updates. And the company said it had recently developed an industry plan with Microsoft, Twitter, Google and Amazon to address how technology is used to spread terrorist content.

Facebook has long touted an ability to catch terrorism-related content on its platform. In the last two years, the company said, it has been able to detect and delete 99 percent of extremist posts — about 26 million pieces of content — before users reported them.

But Facebook said that it had mostly focused on identifying organizations like separatists, Islamic militants and white supremacists. The company said that it would now consider all people and organizations that proclaim or are engaged in violence leading to real-world harm.

The team leading its efforts to counter extremism on its platform has grown to 350 people, Facebook said, and includes experts in law enforcement, national security and counterterrorism, as well as academics studying radicalization.

To detect more content relating to real-world harm, Facebook said it was updating its artificial intelligence to better catch first-person shooting videos. The company said it was working with American and British law enforcement officials to obtain camera footage from their firearms training programs to help its AI learn what real, first-person violent events look like.

Since March, Facebook had also been redirecting users who search for terms associated with white supremacy to resources like Life After Hate, an organization founded by former violent extremists that provides crisis intervention and outreach. In the wake of the Christchurch tragedy, Facebook is expanding that capability to Australia and Indonesia, where people will be redirected to the organizations EXIT Australia and ruangobrol.id.

“We know that bad actors will continue to attempt to skirt our detection with more sophisticated efforts,” the company said, “and we are committed to advancing our work and sharing more progress.”

Photo: Richard Drew / Associated Press. The mass shooting in New Zealand “strongly” influenced its updates, Facebook said.
