The New Zealand Herald

Social media’s mixed messages

Shows of support from Facebook, Twitter, YouTube don’t address the way platforms are weaponised by racists and partisan provocateurs, writes Kevin Roose

— New York Times

Several weeks ago, as protests erupted across the United States in response to the police killing of George Floyd, Mark Zuckerberg wrote a long and heartfelt post on his Facebook page, decrying racial bias and proclaiming that “black lives matter”.

Zuckerberg, Facebook’s chief executive, also said the company would donate US$10 million ($15.8m) to racial justice organisations.

A similar show of support unfolded at Twitter, where the company changed its official Twitter bio to a Black Lives Matter tribute, and Jack Dorsey, the company’s chief executive, pledged US$3m to an anti-racism organisation started by Colin Kaepernick, the former NFL quarterback.

YouTube joined the protests, too. Susan Wojcicki, the company’s chief executive, wrote in a blog post that “we believe black lives matter and we all need to do more to dismantle systemic racism”.

YouTube also announced it would start a US$100m fund for black creators.

Pretty good for a bunch of supposedly heartless tech executives, right?

Well, sort of. The problem is that, while these shows of support were well-intentioned, they didn’t address the way that these companies’ own products — Facebook, Twitter and YouTube — have been successfully weaponised by racists and partisan provocateurs, and are being used to undermine Black Lives Matter and other social justice movements.

It’s as if the heads of McDonald’s, Burger King and Taco Bell all got together to fight obesity by donating to a vegan food co-op, rather than by lowering their calorie counts.

It’s hard to remember sometimes, but social media once functioned as a tool for the oppressed and marginalised.

In Tahrir Square in Cairo; Ferguson, Missouri; and Baltimore, activists used Twitter and Facebook to organise protests and get their messages out.

But in recent years, a right-wing reactionary movement has successfully turned the tide.

Now, some of the loudest and most established voices on these platforms belong to conservative commentators and paid provocateurs whose aim is mocking and subverting social justice movements, rather than supporting them.

The result is a distorted view of the world that is at odds with actual public sentiment. A majority of Americans support Black Lives Matter, but you wouldn’t necessarily know it by scrolling through your social media feeds.

On Facebook, for example, the most popular post on the day of Zuckerberg’s Black Lives Matter pronouncement was an 18-minute video posted by right-wing activist Candace Owens. In the video, Owens, who is black, railed against the protests, calling the idea of racially biased policing a “fake narrative” and deriding Floyd as a “horrible human being”. Her monologue, which was shared by right-wing media outlets — and which several people told me they’d seen because Facebook’s algorithm recommended it to them — racked up nearly 100 million views.

Owens is a serial offender, known for spreading misinformation and stirring up partisan rancour. (Her Twitter account was suspended this year after she encouraged her followers to violate stay-at-home orders, and Facebook has applied fact-checking labels to several of her posts.) But she can still insult the victims of police killings with impunity to her nearly 4 million followers on Facebook.

So can other high-profile conservative commentators like Terrence K. Williams, Ben Shapiro and The Hodgetwins, all of whom have had anti-Black Lives Matter posts go viral over the past several weeks.

In all, seven of the top 10 most-shared Facebook posts containing the phrase “Black Lives Matter” over the past month were critical of the movement, according to data from CrowdTangle, a Facebook-owned data platform. (The sentiment on Instagram, which Facebook owns, has been more favourable, perhaps because its users skew younger and more liberal.)

Facebook declined to comment. Last week it said it would spend US$200m to support black-owned businesses and organisations and add a “Lift Black Voices” section to its app to highlight stories from black people and share educational resources.

Twitter has been a supporter of Black Lives Matter for years — remember Dorsey’s trip to Ferguson — but it, too, has a problem with racists and bigots using its platform to stir up unrest. Last month, the company discovered that a Twitter account claiming to represent a national antifa group was actually run by a group of white nationalists posing as left-wing radicals. (The account was suspended, but not before its tweets calling for violence were widely shared.) Twitter’s trending topics sidebar, which is often gamed by trolls looking to hijack online conversations, has filled up with inflammatory hashtags like #whitelivesmatter and #whiteoutwednesday, often as a result of co-ordinated campaigns by far-right extremists.

A Twitter spokesman, Brandon Borrman, said, “We’ve taken down hundreds of groups under our violent extremist group policy and continue to enforce our policies against hateful conduct every day across the world.

“From #BlackLivesMatter to #MeToo and #BringBackOurGirls, our company is motivated by the power of social movements to usher in meaningful societal change.”

YouTube, too, has struggled to square its corporate values with the way its products actually operate. The company has made strides in recent years to remove conspiracy theories and misinformation from its search results and recommendations, but it has yet to grapple fully with the way its boundary-pushing culture and laissez-faire policies contributed to racial division for years.

As of this week, for example, the most-viewed YouTube video about Black Lives Matter wasn’t footage of a protest or a police killing, but a 4-year-old “social experiment” by viral prankster and former Republican congressional candidate Joey Saladino, which has 14 million views. In the video, Saladino — whose other YouTube stunts have included drinking his own urine and wearing a Nazi costume to a Trump rally — holds up an “All Lives Matter” sign in a mainly black neighbourhood to prove a point about reverse racism.

A YouTube spokeswoman, Andrea Faville, said Saladino’s video had received less than 5 per cent of its views this year and it was not being widely recommended by the company’s algorithms. Saladino recently reposted the video to Facebook, where it has had several million more views.

In some ways, social media has helped Black Lives Matter simply by making it possible for victims of police violence to be heard. Without Facebook, Twitter and YouTube, we might never have seen the video of Floyd’s killing or known the names of Breonna Taylor, Ahmaud Arbery or other victims of police brutality. Many of the protests being held around the US are being organised in Facebook groups and Twitter threads, and social media has been helpful in creating more accountability for police.

But these platforms aren’t just megaphones. They’re also global, real-time contests for attention, and many of the experienced players have become good at provoking controversy by adopting exaggerated views.

They understand that if the whole world is condemning Floyd’s killing, a post saying he deserved it will stand out. If the data suggests that black people are disproportionately targeted by police violence, they know that there’s likely a market for a video saying that white people are the real victims.

The point isn’t that platforms should ban people like Saladino and Owens for criticising Black Lives Matter. But in this moment of racial reckoning, these executives owe it to their employees, their users and society at large to examine the structural forces that are empowering racists on the internet and which features of their platforms are undermining the social justice movements they claim to support.

They don’t seem eager to do so. Recently, the Wall Street Journal reported that an internal Facebook study in 2016 found that 64 per cent of the people who joined extremist groups on the platform did so because its recommendations algorithms steered them there.

Facebook could have responded to those findings by shutting off groups recommendations entirely or pausing them until it could be certain the problem had been fixed. Instead, it buried the study and kept going.

As a result, Facebook groups continue to be useful for violent extremists.

Last week, two members of the far-right “boogaloo” movement, which wants to destabilise society and provoke a civil war, were charged in connection with the killing of a federal officer at a protest in Oakland, California. According to investigators, the suspects met and discussed their plans in a Facebook group. And although Facebook has said it would exclude boogaloo groups from recommendations, they’re still appearing in plenty of people’s feeds.

Rashad Robinson, president of Color Of Change, a civil rights group that advises tech companies on racial justice issues, told me in an interview last week that tech leaders needed to apply anti-racist principles to their own product designs, rather than simply expressing their support for Black Lives Matter.

“What I see, particularly from Facebook and Mark Zuckerberg, it’s kind of like ‘thoughts and prayers’ after something tragic happens with guns,” Robinson said. “It’s a lot of sympathy without having to do anything structural about it.”

There is plenty more Zuckerberg, Dorsey and Wojcicki could do.

They could build teams of civil rights experts and empower them to root out racism on their platforms, including more subtle forms of racism that don’t involve using racial slurs or organised hate groups.

They could dismantle the recommendations systems that give provocateurs and cranks free attention or make changes to the way their platforms rank information. (Ranking it by how engaging it is, the current model, tends to amplify misinformation and outrage bait.)

They could institute a “viral ceiling” on posts about sensitive topics, to make it harder for trolls to hijack the conversati­on.

I’m optimistic that some of these tech leaders will eventually be convinced — either by their employees of colour or their own consciences — that truly supporting racial justice means they need to build anti-racist products and services, and do the hard work of making sure their platforms are amplifying the right voices.

But I’m worried that they will stop short of making real, structural changes, out of fear of being accused of partisan bias.

So is Robinson, the civil rights organiser. A few weeks ago, he chatted with Zuckerberg by phone about Facebook’s policies on race, elections and other topics. Afterwards, he said that while Zuckerberg and other tech leaders generally meant well, he didn’t think they truly understood how harmful their products could be.

“I don’t think they can truly mean Black Lives Matter when they have systems that put black people at risk,” he said.

