Facebook’s demons reappear with Alex Jones
The platform has proved unable to stop or fully address the spread of paranoia, hatred and misinformation, writes Max Fisher
To Americans, Facebook’s problem with Alex Jones might seem novel, even unprecedented. When does speech become unsafe? When can it be limited? Should those decisions be up to a private company at all? And if a company shies away from acting, as Facebook did with Jones until Apple moved first, where does that leave the rest of us?
But to activists and officials in the developing world, both the problem and Facebook’s muddled solutions will be old news.
Before there was Alex Jones, the US conspiracy theorist, there was Amith Weerasinghe, the Sri Lankan extremist who used Facebook as his personal broadcast station.
Mr Weerasinghe leveraged Facebook’s newsfeed to spread paranoia and hatred of the country’s Muslim minority. He enjoyed near-total freedom on the platform, despite repeated pleas from activists and officials for the company to intervene, right up until his arrest on charges of inciting a riot that killed one Muslim and left many more homeless.
Before there was Mr Weerasinghe, there was Ashin Wirathu, the Myanmar extremist, whose Facebook hoaxes incited riots in 2014. Three years later, Mr Wirathu would contribute to a wave of Facebook-based rumours and hate speech that helped inspire widespread violence against Myanmar’s Rohingya minority.
And so on.
“Facebook doesn’t get that they’re the largest news agency in the world,” Harindra Dissanayake, a Sri Lankan official, said a few days after Mr Weerasinghe’s arrest.
The problem, he said, goes beyond a few under-regulated extremists. It also involves the algorithm-driven newsfeed that is core to the company’s business model. “They are blind to seeing the real repercussions,” Mr Dissanayake said of Facebook’s leaders.
Developing countries’ experiences with Facebook suggest that the company, however noble its intent, has set in motion a series of problems we are only beginning to understand and that the company has proved unable or unwilling to fully address:
Reality-distorting misinformation that can run rampant on the newsfeed, which promotes content that will reliably engage users.
Extremism and hate speech that tap into users’ darkest impulses and polarise politics.
Malicious actors granted near-limitless reach on one of the most sophisticated communications platforms in history, relatively unchecked by social norms or traditional gatekeepers.
And a private company uneager to wade into contentious debates, much less pick winners and losers.
Facebook — and many Westerners — have long treated those issues as safely “over there”, meaning in countries with weaker institutions, lower literacy rates and more recent histories of racial violence. Last month, a company official, announcing new policies to restrict speech that leads to violence, referred to “a type of misinformation that is shared in certain countries”.
But chillingly similar, Facebook-linked problems are becoming increasingly visible in wealthy, developed countries like the United States. So is the difficulty of solving those problems — and the consequences of Facebook’s preference for action that can be incremental, reactive and agonisingly slow.
‘SOMETHING BAD COULD HAPPEN’
Although Facebook officials often portray the violence associated with it as new or impossible to predict, the incidents date to at least 2012. So does the pressure to more actively regulate speech on the platform.
That year, fake reports of sectarian violence went viral in India, setting off riots that killed several people and displaced thousands. Indian officials put so much pressure on Facebook to remove the posts that US officials publicly intervened in the company’s defence.
Reports of Facebook-linked violence only grew in India, and as Facebook expanded to other developing countries, similar stories followed.
“I think in the back deep-deep recesses of our minds, we kind of knew something bad could happen,” Chamath Palihapitiya, a senior executive who left Facebook in 2011, said at a policy conference last year. “We have created tools that are ripping apart the social fabric of how society works.”
There were other warnings, typically from activists or civil society leaders in the developing countries where Facebook’s expansion was fastest and most obviously disruptive. But they were little heeded.
“Facebook is the platform that we could not meet with for years,” Damar Juniarto, who leads an Indonesian organisation that tracks online hate groups, told me in March.
As a Facebook-based group called the Muslim Cyber Army organised increasingly elaborate real-world attacks, Mr Juniarto said, Facebook proved unresponsive. “How are we supposed to do this?” members of his group wondered. “Is it a form? Do we email them? We want them to tell us.”
Facebook representatives eventually met with Mr Juniarto, and the company has shut most pages associated with the Muslim Cyber Army.
Still, the episode seems to fit a pattern of Facebook waiting to respond until after a major disruption: an organised lynching, a sectarian riot, state-sponsored election meddling or, as with the so-called Pizzagate rumour pushed by Jones, a violent close call set off by misinformation.
A CORPORATE REGULATOR OF PUBLIC LIFE
In the developing countries where such incidents seem most common, or at least most explicitly violent, Facebook simply faces little pressure to act.
In Sri Lanka, government officials spoke of the company as if it were a superpower to be feared and appeased.
Tellingly, Facebook grew more proactive in Myanmar only after the United Nations and Western organisations accused it of having played a role in spreading the hate and misinformation that contributed to acts of ethnic cleansing.
Even officials in India, a major power, struggled to get the company to listen. Indian pressure on Facebook, however, has dropped since the arrival of new government leaders who rose, in part, on a Hindu nationalist wave still prevalent on social media.
US officials have far greater leverage over Facebook, as members of Congress proved when lawmakers summoned Mark Zuckerberg, its chief executive officer, to testify in April. But the Americans seem unsure what they want Facebook to do or how to compel it to act. So they, too, are not very effective at changing the company's behaviour.
More broadly, Americans seem unsure precisely how far Facebook should go in regulating speech on the platform, or what it should do about the data suggesting that misinformation is more common on the political right.
All of which comes through in Facebook's hesitation about shutting down Jones' page, despite his long record of demonstrable falsehoods that have real-world consequences.
MOVE FAST AND BREAK THINGS
There are growing indications Facebook’s problems in rich countries may go beyond misinformation to do the kind of harm developing countries have experienced.
Karolin Schwarz, who runs a Berlin-based organisation that tracks social media misinformation, said she believed Facebook-based rumours about refugees could be fuelling the spate of hate crimes against them.
“I think it does something to their sense of community,” she said. “These things, if they reach thousands of people, you cannot get it back.”
The platform has grown so powerful, so quickly, that we are still struggling to understand its influence. Social scientists regularly discover new ways that Facebook alters the societies where it operates: a link to hate crimes, a rise in extremism, a distortion of social norms.
After all, Jones, for all his demagogic skills, was tapping into misinformation and paranoia already on the platform.