Los Angeles Times

Tech executives tout gains against extremist content

Facebook, Google and Twitter are getting better at removing it, lawmakers are told.

- Associated Press

Facebook, Google and Twitter executives told members of Congress on Wednesday that they’ve gotten better and faster at detecting and removing violent extremist content on their social media platforms in the face of hatred-fueled mass shootings.

Questioned at a hearing by the Senate Commerce Committee, the executives said they are spending money on technology to improve their ability to flag extremist content and taking the initiative to reach out to law enforcement authorities to try to head off potential violent incidents.

“We will continue to invest in the people and technology to meet the challenge,” said Derek Slater, Google’s director of information policy.

The lawmakers want to know what the companies are doing to remove hate speech from their platforms and how they are coordinating with law enforcement.

“We are experiencing a surge of hate. Social media is used to amplify that hate,” said Sen. Maria Cantwell of Washington state, the panel’s senior Democrat.

The company executives testified that their technology is improving, allowing them to identify and take down suspect content faster.

Of the 9 million videos removed from Google’s YouTube in the second quarter of the year, 87% were flagged by a machine using artificial intelligence, and many of them were taken down before they got a single view, Slater said.

After the February 2018 shooting that killed 17 people at a high school in Florida, Google began to proactively reach out to law enforcement authorities to see how they could better coordinate, Slater said. Before that shooting, the suspect posted on a YouTube page, “I’m going to be a professional school shooter,” authorities said.

Word came this week from Facebook that it will work with law enforcement organizations to train its AI systems to recognize videos of violent events as part of a broader effort to crack down on extremism. Facebook’s AI systems were unable to detect livestreamed video of the mosque shootings in New Zealand in March that killed 50 people. The self-professed white supremacist accused of the shootings had livestreamed the attack.

The effort will use body camera footage of firearms training provided by U.S. and British government and law enforcemen­t agencies.

Facebook also is expanding its definition of terrorism to include not just acts of violence intended to achieve a political or ideological aim, but also attempts at violence, especially when aimed at civilians with the intent to coerce and intimidate. The company has had mixed success in its efforts to limit the spread of extremist material.

Facebook appears to have made little progress, for example, on its automated systems for removing prohibited content glorifying groups such as Islamic State in the four months since the Associated Press detailed how Facebook pages auto-generated for businesses are aiding Middle East extremists and white supremacists in the United States. The new details come from an update of a complaint to the Securities and Exchange Commission that the National Whistleblower Center plans to file this week.

Facebook said in response that it removes any auto-generated pages “that violate our policies. While we cannot catch every one, we remain vigilant in this effort.”

Monika Bickert, Facebook’s head of global policy management, said at the Senate hearing that the company has increased its ability to detect terror, violence and hate speech sooner. “We know that people need to be safe,” she said.

FACEBOOK’S Monika Bickert, Twitter’s Nick Pickles and Google’s Derek Slater at the Senate hearing. (J. Scott Applewhite / Associated Press)
