Daily Maverick

Starving Facebook’s toxicity machine

Whistle-blower Frances Haugen’s testimony highlights what the real problems with Facebook are and what governments around the world must do to protect their citizens, starting with making the platform accountable

- This article was produced in partnership by NewsClick and Globetrotter and was first published by New Frame. Prabir Purkayastha is the founding editor of NewsClick.in, a digital media platform.

Facebook has been in the limelight for two issues – both damaging from the company’s perspective, but in terms of public interest, each has its own level of usefulness.

The news item with less long-term significance but more sensational media appeal is that what was supposed to be a small configuration change took down Facebook, Instagram and WhatsApp for a few hours on 4 October. It affected billions of users, showing the world how important Facebook and other tech giants have become to many people’s daily lives and even to the operation of small businesses.

Of course, the much more significant news is Facebook whistle-blower Frances Haugen, 37, a former employee of the company, who made tens of thousands of pages of its internal documents public. These documents show that Facebook’s leadership repeatedly prioritised profits over social good.

Facebook’s algorithms polarised society and promoted hate and fake news because such content drove up “engagement” on its platforms. That the platform is tearing apart communities, and even endangering teens, especially girls made to feel they do not have “perfect” bodies, apparently mattered not a jot to Facebook.

The Wall Street Journal has published detailed exposés quoting Facebook’s internal documents and Haugen, who has also appeared on CBS’s 60 Minutes and in congressional hearings.

“The thing I saw at Facebook over and over again was there were conflicts of interest between what was good for the public and what was good for Facebook,” Haugen told Scott Pelley on 60 Minutes. “And Facebook, over and over again, chose to optimise for its own interests, like making more money.”

The data scientist has filed eight whistle-blower complaints against Facebook with the US Securities and Exchange Commission (SEC) with the help of a non-profit organisation called Whistleblower Aid.

These complaints are backed by hard evidence: tens of thousands of internal Facebook documents Haugen secretly copied before leaving Facebook.

Why is this big news when these issues relating to Facebook have been raised time and again, and were more prominently highlighted after revelations regarding the data firm Cambridge Analytica and Facebook became public in 2018? Did we not already know how Facebook, WhatsApp and other social media platforms have become powerful instruments that help promote hatred and divisive politics?

Have United Nations investigators not held Facebook responsible for the genocidal violence against Rohingyas in Myanmar? Were similar patterns not visible during the communal riots in Muzaffarnagar, in the Indian state of Uttar Pradesh, in 2013 and 2017?

The big news is that we now have evidence that Facebook was fully aware of what its platform was doing. We have it from the horse’s mouth: internal Facebook documents that Haugen has made public.

By prioritising posts that promote “engagement” – meaning people reading, liking or replying to posts on Facebook, WhatsApp and Instagram – Facebook ensured that people stayed on its platform for much longer. Facebook users could then be “sold” to advertisers more effectively by showing them more adverts.
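
Stripped to its essentials, the logic of such ranking is simple. The sketch below is a deliberately simplified, hypothetical illustration – the field names, weights and scoring formula are invented, not Facebook’s actual system – of how a feed can be ordered by predicted engagement: estimate how much reaction each candidate post will provoke, then show the highest scorers first.

```python
# A minimal, illustrative sketch of engagement-based feed ranking.
# The field names, weights and scoring formula are invented for this
# example; they are not Facebook's actual ranking system.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_likes: float     # model's estimate of likes this viewer will give
    predicted_comments: float  # estimate of comments or replies
    predicted_reshares: float  # estimate of reshares

def engagement_score(post: Post) -> float:
    # Reactions that keep people on the platform longer (comments,
    # reshares) are weighted more heavily than a passive like.
    return (1.0 * post.predicted_likes
            + 5.0 * post.predicted_comments
            + 10.0 * post.predicted_reshares)

def rank_feed(candidates: list[Post]) -> list[Post]:
    # The feed is simply the candidate posts sorted by predicted
    # engagement, highest first: whatever provokes the most reaction
    # rises to the top, regardless of what that content is.
    return sorted(candidates, key=engagement_score, reverse=True)
```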

Facebook’s business model is not promoting news or friendly chit-chat among users, or entertaining people. It is selling its users to those who can sell them merchandise. And like Google, it has a far better understanding of who its users are and what they may buy.

This is what provided Facebook with 98% of its revenue in 2020 and has made it one of six trillion-dollar companies (as of September 2021) in terms of market capitalisation.

Finding dangerous content

Testifying before the US Congress on 5 October, Haugen said that “Facebook uses artificial intelligence to find dangerous content”, Ars Technica reported. “The problem is that Facebook’s own research says they cannot adequately identify dangerous content. And, as a result, those dangerous algorithms that they admit are picking up the extreme sentiments, the division[s].”

That this was happening is widely known and has been discussed. Facebook’s response to this criticism was that it was setting up an independent supervisory board for oversight and employing a large number of fact checkers. These measures, it claimed, would help filter out hate posts and fake news.

What the company hid was that all these actions were simply cosmetic. The driver of traffic, or what a person sees in their feed – or, in Facebook’s terms, what they engage with – is determined by algorithms.

And these algorithms were geared to promote the most toxic and divisive posts, as this is what attracts engagement. Increasing engagement is the central driver of Facebook’s algorithms and defeats any measure to detoxify its content.

Fixing the core problem

Haugen’s congressional testimony also highlights what the real problems with Facebook are and what governments around the world must do to protect their citizens.

They must make Facebook accountable, not by censoring hate speech and fact-checking misinformation posted by individual users, but by targeting the algorithms that amplify dangerous, high-engagement content.

“This is not simply a matter of certain social media users being angry or unstable, or about one side being radicalised against the other,” she said. “These problems are solvable… Facebook can change, but is clearly not going to do so on its own.”

While addressing Congress about what can be done to regulate Facebook nationally, Haugen also acknowledged the problems that its algorithms have caused worldwide. The solution, therefore, must also be global.

In her testimony, she said that Facebook’s meagre proposed self-reforms would be insufficient to make the company accountable for its actions until its operations were made fully transparent. Facebook is hiding behind “safe harbour” laws that protect tech companies like it that do not generate content themselves, but provide their platform for what is called user-generated content.

In the US, it is section 230 of the Communications Decency Act that allows these tech companies to “moderate content on their services”; in India, it is section 79 of the Information Technology Act. Both countries are considering reforms.

In the US, “a section 230 overhaul … would hold the social media giant responsibl­e for its algorithms”, Ars Technica reports.

In Haugen’s words: “If we had appropriate oversight, or if we reformed [section] 230 to make Facebook responsible for the consequences of their intentional ranking decisions, I think they would get rid of engagement-based ranking… Because it is causing teenagers to be exposed to more anorexia content, it is pulling families apart, and in places like Ethiopia, it’s literally fanning ethnic violence.”

The key problem is not the hateful content users generate on Facebook; it is its own algorithms that drive this poisonous content to a person’s feed continuously to maximise the company’s advertising revenue.

Haugen added: “Facebook wants to trick you into thinking that privacy protections or changes to section 230 alone will be sufficient. While important, these will not get to the core of the issue, which is that no one truly understands the destructive choices made by Facebook, except Facebook.

“We can afford nothing less than full transparency. As long as Facebook is operating in the shadows, hiding its research from public scrutiny, it is unaccountable. Until the incentives change, Facebook will not change. Left alone, Facebook will continue to make choices that go against the common good, our common good.”

Of course, the widespread prevalence of toxic content on Facebook’s platforms is helped by its wilful failure to build language classifiers – the algorithms used to detect hate speech – for content created in languages other than English.

Even though Hindi is the third most spoken language in the world and Bengali is the sixth, according to Haugen, Facebook does not have enough “hate speech classifiers” in these two languages.
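
To see why this coverage gap matters, consider the hypothetical sketch below (the classifier and language codes are placeholders invented for illustration, not Facebook’s real moderation stack): a post written in a language for which no classifier exists is simply never screened.

```python
# Hypothetical sketch of per-language hate-speech screening and the
# coverage gap described above. The classifier and language codes are
# invented placeholders, not Facebook's actual moderation systems.

from typing import Callable, Optional

# Suppose classifiers have been built for only a handful of languages.
HATE_SPEECH_CLASSIFIERS: dict[str, Callable[[str], bool]] = {
    "en": lambda text: "hate" in text.lower(),  # stand-in for a real English model
}

def is_flagged(text: str, language: str) -> Optional[bool]:
    classifier = HATE_SPEECH_CLASSIFIERS.get(language)
    if classifier is None:
        # No classifier for this language, so the post is never screened.
        # This is the gap for widely spoken languages such as Hindi ("hi")
        # or Bengali ("bn") if no adequate model exists for them.
        return None
    return classifier(text)

print(is_flagged("some hateful post", "en"))       # True – screened
print(is_flagged("same content in Hindi", "hi"))   # None – falls through unscreened
```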

It is well known that divisive content and fake news are more viral than any other kind of content. And Haugen’s documents confirm what analysts, including myself, have been saying all along. The algorithms that Facebook and other digital tech companies use today do not directly code rules to drive up engagement. These companies instead use machine learning, or what is loosely called artificial intelligence, to create these rules.

It is the objective – increasing engagement – that creates the rules, and it is those rules that push toxic content into users’ feeds, tearing societies apart and damaging democracy.
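
A toy example makes the point concrete. In the sketch below – the features, data and model are entirely invented – nobody writes a rule saying “promote divisive posts”; a model is merely fitted to predict engagement, and because the divisive posts in the toy data received the most engagement, the fitted weights end up rewarding divisiveness on their own.

```python
# A toy illustration of how an engagement objective, not a hand-written
# rule, ends up favouring divisive posts. The features, data and model
# below are entirely invented for the example.

import numpy as np

# Each row describes one post: [is_divisive, is_friendly_chat, has_photo]
X = np.array([
    [1, 0, 0],
    [1, 0, 1],
    [0, 1, 0],
    [0, 1, 1],
    [0, 0, 1],
], dtype=float)

# Observed engagement (likes + comments + reshares) for those posts;
# in this toy data the divisive posts happened to get the most reaction.
y = np.array([900.0, 1100.0, 120.0, 150.0, 200.0])

# Ordinary least squares: the weights are chosen purely to predict
# engagement -- nobody codes "promote divisive content".
weights, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["is_divisive", "is_friendly_chat", "has_photo"], weights.round(1))))

# The learned weight on "is_divisive" comes out far larger than the others,
# so a feed ranked by this model favours divisive posts -- an emergent
# "rule" created entirely by the objective and the data.
```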

We now have hard evidence in the form of the leaked documents that this is indeed what has been happening. Even worse, the Facebook leadership and Mark Zuckerberg have been fully aware of the problem all along.

Selective rules

Not all the harm on Facebook’s platform, however, was caused by algorithms. From Haugen’s documents, we find that Facebook had “whitelisted” high-profile users whose content would be promoted even if it violated the company’s guidelines.

Millions of these special users could violate Facebook’s rules with impunity. The Wall Street Journal has provided evidence about how Facebook India protected leaders of the Bharatiya Janata Party – despite repeated red flags relating to their posts being raised inside Facebook itself.

This is not all that Haugen’s treasure trove of Facebook’s internal documents reveals. Reminiscent of cigarette companies’ research on how to hook children on smoking, Facebook had researched “tweens” – children in the age group of 10 to 12.

Its research was on how to hook these “preteens” on its platforms so that it could create new consumers. This is despite its internal research showing that Facebook’s platforms promoted anorexia and other eating disorders, depression and suicidal tendencies among teens.

All these facts should damage Facebook’s image. But it is a trillion-dollar company and one of the biggest in the world. Its fat cash balance, coupled with the power it wields in politics and its ability to “hack” elections, provides the protection that big capital receives under capitalism.

The cardinal sin that big capital may not tolerate is lying to other capitalists. The internal documents that Haugen has submitted to the SEC could finally result in pushback against social media giants and lead to their regulation – if not strong regulation, at least some weak constraints on the algorithms that promote hate on these social media platforms.

A decade-old quote is at least as relevant now in light of these recent Facebook developments as it was when the then 28-year-old Silicon Valley tech whiz Jeff Hammerbacher first said it: “The best minds of my generation are thinking about how to make people click ads.”

This has long been the beating drum driving the march of social media giants to their trillions.

By Prabir Purkayastha
