How fear of losing advertising dollars forced a response
It took less than two hours for Facebook to react – but react it did, and with good reason. At 5pm on Friday, Unilever, one of the world’s largest advertisers – with a portfolio of products that ranges from Marmite to Vaseline – suddenly announced it was pulling all advertising from Facebook, Instagram and Twitter in the US.
Given the “polarised atmosphere in the US”, Unilever said, and with the significant work left to be done “in the areas of divisiveness and hate speech … continuing to advertise on these platforms at this time would not add value to people and society”.
At 6.47pm, Facebook scrambled. Mark Zuckerberg, the company said, would be “going live on his Facebook page” to discuss the company’s racial justice work – and 13 minutes after that, the world’s most powerful chief executive appeared on screens.
Appearing humbled, he announced a series of new policies, including a ban on hateful content that targets immigrants, and further restrictions on posts making false claims about voting.
Asad Moghal, senior digital and content manager at Byfield Consultancy, said Unilever’s action was always going to force Zuckerberg to respond.
“When such an international giant decides that inaction is no longer an option to tackle racist and discriminatory language, then the social media businesses need to listen up. By taking financial action, a company the size of Unilever can effect change and force the hand of Twitter and Facebook – the business has decided it needs to protect its brand reputation and can no longer be associated with platforms that deliver hate speech and divisive content. But what will really effect change is if this move creates a domino effect and other big-name corporations remove investment from the platforms.”
The swathe of announcements from Zuckerberg marked the first concessions from Facebook towards the aims of a US coalition, Stop Hate for Profit, that was formed in the wake of the killing of George Floyd in May by Minneapolis police.
But the group’s leaders say Facebook’s tweaks do not go far enough, and reiterated their calls for a month-long global advertiser boycott starting tomorrow.
The real danger for Facebook is if other brands decide they can do without the platform too.
This crisis has been a long time in the making – and shows no sign of going away.
Facebook has historically taken a less censorious approach to hate speech than it has to other controversial areas, such as nudity – in part out of a belief in the inherent ambiguity of offensive speech, and in part due to the difficulty of automating such work. Identifying hate speech relies on knowledge of context, custom and culture that can be hard to teach human moderators, let alone machines.
In recent years Facebook has made strides in the area. In the third quarter of 2017, according to its community standards report, the social network found just under a quarter of instances of hate speech by itself; the other three-quarters were only removed after site users manually flagged them to moderators, who then took action. By this spring, the proportions had reversed: 88% of removed hate speech was found by Facebook’s own tools, allowing it to remove or restrict almost four times as much material.
But working against Facebook’s technical expertise was another factor: the US president.
As far back as 2015, according to the Washington Post, Facebook was struggling with how to deal with a man who, first as a candidate and then as president, pushed the limits of what was allowed to be posted.
Rather than enforce those limits, Facebook has steadily tweaked its own rules to avoid angering Donald Trump: introducing in 2015 an exception for “political discourse” to allow a video calling for a ban on Muslims entering the US to stay up, for instance, or limiting efforts to tackle “false news” out of a fear that doing so would disproportionately hit right-leaning pages and posters.
In the protests after Floyd’s death, Trump again tested the boundaries, posting on Facebook and Twitter a message that “when the looting starts, the shooting starts”.
Twitter, noting the racist history of the phrase, and interpreting it as a potential call for violence, restricted the tweet, preventing it from being replied to or liked, and hid it behind a warning declaring that it broke its rules. But it left the tweet up, citing the inherent newsworthiness of a statement by an elected official with millions of followers.
On Facebook, however, the post was untouched. In a post on his personal page, Zuckerberg said he interpreted the statement not as an incitement to violence but as “a warning about state action”.
“Unlike Twitter,” he wrote, “we do not have a policy of putting a warning in front of posts that may incite violence because we believe that if a post incites violence, it should be removed regardless of whether it is newsworthy, even if it comes from a politician.”
The decision became a flashpoint for lingering unease about Facebook’s wider issues with tackling hate – as did Zuckerberg’s decision, a week earlier, to appear on Fox News to defend a different Trump post, on mail-in voting, saying he did not think his company should become the “arbiter of truth”.
Facebook staff began to speak out on social media, holding a virtual walkout to emphasise that “doing nothing is not acceptable”.
The firm’s precariously employed moderators joined in, risking their contracted-out jobs to decry the “white exceptionality and further legitimisation of state brutality”.
Even scientists funded by Zuckerberg’s personal charity, the Chan Zuckerberg Initiative, spoke out, calling Trump’s post “a clear statement of inciting violence”.
Then in May, with some fanfare, Zuckerberg appointed an oversight board – a roster of experts that will have the power to overrule Facebook’s moderation decisions.
It includes Helle Thorning-Schmidt, the former prime minister of Denmark, the Nobel peace laureate Tawakkol Karman, and Alan Rusbridger, the former editor-in-chief of the Guardian. But the difficulty of setting up a new organisation in the age of coronavirus means that the board was unable to take the heat off Zuckerberg.
“Zuckerberg’s strategy of dealing with Trump is an incoherent blend of two leadership approaches,” said Chris Moos, a leadership expert and teaching fellow at Oxford University’s Saïd business school.
Where some attempt to find “practical approaches for dealing with tensions” they encounter at work, and others appeal “to higher-order principles”, Zuckerberg tries both and succeeds at neither, he said. “On the one hand, he has engaged a wide set of stakeholders into the debate, throwing money at initiatives to build racial justice and voter engagement. On the other, the Facebook CEO has tried to rise above the controversy by making it clear that his company will be erring on the side of free expression, ‘even when it’s speech we strongly and viscerally disagree with’.”
Zuckerberg can never be removed from his position. While he owns only 14% of the company, the special class of shares he holds means he controls 57% of the voting rights at board meetings. But employee pressure can hurt him, professionally and personally: if Facebook no longer seems like a pleasant and rewarding workplace, the company will struggle to hire and retain the highly skilled staff it relies on to compete in Silicon Valley.
In June, the Stop Hate for Profit campaign found another weak point for the site: advertisers. While Facebook takes some revenue directly from users, the vast majority of the company’s $70.7bn (£57bn) annual revenue comes from advertising.
On 17 June, Color of Change – the organisation behind Stop Hate for Profit – launched a public request: for “all advertisers to stand in solidarity with Black Facebook users and send the message to Facebook that they must change their practices by pausing all advertising on Facebook-owned platforms for the month of July 2020.”
Many of those advertisers were already uncomfortable about their spend on Facebook before the latest campaign. The site, as with all programmatic advertising, can have “brand safety” issues when companies find their messages next to extreme or hateful content.
At a macro level, meanwhile, marketers are all too aware of the risks of helping consolidate the “duopoly” of Facebook and Google, who between them have secured the majority of the advertising industry’s growth.
But even if the Stop Hate for Profit campaign was pushing at an open door, the success has been surprising. By the end of the first week, Patagonia, North Face and freelancing platform Upwork had signed on. And Unilever’s decision on Friday to pause advertising until November – albeit only within the US, and without directly citing the campaign – opened the floodgates.
Over the weekend, it was joined by other megabrands, including Coca-Cola and alcohol conglomerate Beam Suntory.
“Let’s be honest,” said Moghal, “these tech platforms have generated income and interest from this divisive content. They won’t change their practices until they begin to see a significant cut to their revenue.”
With the boycott officially starting tomorrow, the campaigners are not easing up. In fact, success has only driven higher ambitions.
“The next frontier is global pressure,” Jim Steyer, the chief executive of Common Sense Media, a nonprofit organisation, told Reuters yesterday.
While some advertisers, including North Face and Patagonia, have expanded their boycotts globally, others are currently content to only withhold spending in the US.
If even that is enough to get Zuckerberg in front of a camera in less than two hours, the campaigners hope worldwide action could motivate lasting change.