Gulf News

User data and trust have been exploited

By Zeynep Tufekci

In 2014, Cambridge Analytica, a voter-profiling company that would later provide services for Donald Trump’s 2016 presidential campaign, reached out with a request on Amazon.com’s “Mechanical Turk” platform, an online marketplace where people around the world contract with others to perform various tasks. Cambridge Analytica was looking for people who were American Facebook users. It offered to pay them to download and use a personality quiz app on Facebook called “thisisyourdigitallife”.

About 270,000 people installed the app in return for $1 (Dh3.67) to $2 per download. The app “scraped” information from their Facebook profiles as well as detailed information from their friends’ profiles. Facebook then provided all this data to the makers of the app, who then turned it over to Cambridge Analytica.

A few hundred thousand people may not seem like a lot, but because Facebook users have a few hundred friends each on average, the number of people whose data was harvested reached about 50 million. Most of those people had no idea that their data had been siphoned off (after all, they hadn’t installed the app themselves), let alone that the data would be used to shape voter targeting and messaging for Trump’s presidential campaign.

This weekend, after this was all exposed by The New York Times and The Observer of London, Facebook hastily made a public announcement that it was suspending Cambridge Analytica (well over a year after the election) and vehemently denied that this was a “data breach”. Paul Grewal, a vice-president and deputy general counsel at Facebook, wrote that “the claim that this is a data breach is completely false”. He contended that Facebook users “knowingly provided their information, no systems were infiltrated, and no passwords or sensitive pieces of information were stolen or hacked”. He also said that “everyone involved gave their consent”.

Grewal is right: This wasn’t a breach in the technical sense. It is something even more troubling: an all-too-natural consequence of Facebook’s business model, which involves having people go to the site for social interaction, only to be quietly subjected to an enormous level of surveillance. The results of that surveillance are used to fuel a sophisticated and opaque system for narrowly targeting advertisements and other wares to Facebook’s users. Facebook even creates “shadow profiles” of non-users.

Despite Facebook’s claims to the contrary, everyone involved in the Cambridge Analytica data-siphoning incident did not give his or her “consent” — at least not in any meaningful sense of the word. It is true that if you found and read all the fine print on the site, you might have noticed that in 2014, your Facebook friends had the right to turn over all your data through such apps. (Facebook has since turned off this feature.) If you had managed to make your way through a bewildering array of options, you might have even discovered how to turn the feature off.

This wasn’t informed consent. This was the exploitation of user data and user trust.

Let’s assume, for the sake of argument, that you had explicitly consented to turn over your Facebook data to another company. Do you keep up with the latest academic research on computational inference? Did you know that algorithms now do a pretty good job of inferring a person’s personality traits, sexual orientation, political views, mental health status, substance abuse history and more just from his or her Facebook “likes” — and that there are new applications of this data being discovered every day?

Given this confusing and rapidly changing state of affairs about what the data may reveal and how it may be used, consent to ongoing and extensive data collection can be neither fully informed nor truly consensual — especially since it is practically irrevocable. What did Cambridge Analytica do with all the data? With whom else might it have shared it? In 2015, Facebook sent a stern letter to Cambridge Analytica asking that the data be deleted. Cambridge Analytica employees have said that the company merely checked a box indicating that the data was deleted, at which point Facebook decided not to inform the 50 million users who were affected by the breach, nor to make the issue public, nor to sanction Cambridge Analytica at the time. The New York Times and The Observer of London are reporting that the data was not deleted.

Should we all just leave Facebook? That may sound attractive but it is not a viable solution. In many countries, Facebook and its products simply are the internet. Some employers and landlords demand to see Facebook profiles, and there are increasingly vast swaths of public and civic life — from volunteer groups to political campaigns to marches and protests — that are accessible or organised only via Facebook.

The problem here goes beyond Cambridge Analytica and what it may have done. What other apps were allowed to siphon data from millions of Facebook users? What if one day Facebook decides to suspend from its site a presidential campaign or a politician whose platform calls for things like increased data privacy for individuals and limits on data retention and use? What if it decides to share data with one political campaign and not another? What if it gives better ad rates to candidates who align with its own interests?

A business model based on vast data surveillan­ce and charging clients to opaquely target users based on this kind of extensive profiling will inevitably be misused. The real problem is that billions of dollars are being made at the expense of the health of our public sphere and our politics, and crucial decisions are being made unilateral­ly, and without recourse or accountabi­lity.

Zeynep Tufekci is associate professor at the School of Information and Library Science at the University of North Carolina.
