Khaleej Times

Why FB cannot fix its fake news problem

- Natasha Tusikov

In the past week, Silicon Valley has faced renewed calls for greater regulation of social media platforms amid the growing scandal of Russian interference in the 2016 US election.

Given rising awareness of the serious problems inherent in US internet giants’ business models, it’s an opportune time to discuss how best to regulate these companies.

Internet intermediaries typically frame their opposition to legislation as protecting freedom of expression, but no right of speech is absolute. This debate is a battle for control over information: What data should be collected and sold, what content should be permitted online, and who should decide?

Last week the US Senate Intelligence Committee grilled senior legal counsel for Facebook, Google and Twitter about their roles in Russian efforts to influence the US election.

In a startling development, after months of denying and downplaying their responsibility, executives from each of the three companies grudgingly admitted that they had not yet determined the full extent of Russian activities on their platforms.

The executives’ testimony starkly reveals that the companies have few controls on the advertisements and so-called “fake news” that they accept on their platforms.

These candid admissions from the social media giants focus much-needed attention on the serious problems inherent in big intermediaries’ data-intensive business models.

As the US election scandal shows, big social media platforms not only have few safeguards to prevent the deliberate manipulation of information, but they also have financial interests in maintaining the status quo. Unfettered flows of information and unconstrained advertising revenue are key to their business models. And this model is tremendously profitable.

Facebook’s third-quarter profit was an astounding $4.7 billion, the vast majority coming from advertisements served to its 1.3 billion average daily users. Viral stories, whether factual or false, attract clicks and advertising revenue.

Given the seemingly intractable challenges of regulating social media platforms, what can be done? In the US, three members of Congress have proposed a bipartisan response, the Honest Ads Act, that would require platforms to publish information about their advertisers and maintain a public archive of political advertisements.

This is a step in the right direction, and the Canadian government should consider similar measures in advance of the 2019 federal election. Facebook has already announced a programme, the Canadian Election Integrity Initiative, to counter the spread of misinformation that focuses on media literacy and training.

While these projects appear useful, they will likely do little to address the underlying problem: the bad-faith spread of online misinformation. That’s because the fundamental problem lies with Facebook’s business model. Efforts to constrain the flow of information, especially information that generates advertising revenue, run contrary to that model.

Facebook CEO Mark Zuckerberg, who initially dismissed claims of Russian interference as “crazy” just after the 2016 US election, said on November 1 that Facebook’s new security features to counter so-called fake news will have a “significant impact on our profitability.”

While the big platforms can afford to take a financial hit to restore their reputations and work to get rid of the worst offenders, they can’t fully solve the problem without fundamentally changing how — and with whom — they do business.

In response to what has become a fundamental challenge to the survival of liberal democracy, Facebook, Twitter and Google have all committed to voluntarily implementing measures to address the spread of misinformation and to target accounts that troll other users with often bigoted, racist content.

Google, for example, is creating a public database of election advertising content that appears on its services. These companies prefer self-regulation to legislation, and they’ve lobbied the US Federal Election Commission in the past to have online political advertising exempted from disclosure. It’s only the political pressure from the Senate inquiry that is forcing these platforms into action.

While these developments may seem like attempts by these major companies to be responsible, they amount to Google et al saying: “Trust us to fix the problem we created.”

However, while Google, Facebook and Twitter are all creating algorithms to, in the words of Zuckerberg, “detect bad content and bad actors,” these algorithms operate as so-called “black boxes.” This means that the criteria the algorithms use to make decisions are off-limits to public scrutiny.

Is “trust us” a good enough response, given the problem? With so much at stake, it may be time for a fundamental rethink of how these indispensable 21st century companies are regulated and what they’re allowed to do.

At the very minimum, governments and citizens should reconsider whether the lack of oversight of how these companies shape our speech rights is really in the public interest.

Social media platforms “are an enabler of democracy,” says Margrethe Vestager, the European Union’s Commissioner for Competition, but we’re seeing that “they can also be used against our very basic beliefs in democracy.”

It’s time to start taking that threat to democracy seriously. — The Conversation
Natasha Tusikov is Assistant Professor, Criminology, Department of Social Science, York University, Canada