Friend that uses you
Recent artificial intelligence developments can analyse subtle and nearly impossible-to-suppress micro-expressions that last only a fraction of a second. Because micro-expressions can reveal emotions that people may be trying to hide, recognising them can be advantageous for intelligence agencies, providing clues to predict dangerous situations. Or it could be used by Facebook and nefarious governments to manipulate millions of people.
Regardless of the application, the result would be a total loss of autonomy. People’s philosophical views on the degrees of individual freedom or agency might not align, but surely none of us wishes to be manipulated.
Facebook’s rise to the top was by no means accidental. Its unprecedented size, demographic reach and the ease with which information travels across it allow the platform to run social experiments on its users. Over the years, the social network has refined its choice architecture to exploit users’ psychological vulnerabilities with deceptive design and psychological nudges.
“Dark” design patterns are crafted to steer users away from data protection and towards spending more time online. The colour, size and wording of interfaces all contribute to giving users the illusion of control. Accepting data collection is a single click facilitated by bright colours, with big buttons and simple text. Managing your data, in contrast, is a multistep process designed to overwhelm users with granular choices and wording that suggests a loss of account functionality or deletion if users tamper with the default settings. The phenomenon even has a name: the “control paradox”.
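To make the asymmetry concrete, here is a minimal sketch in Python of what such a lopsided consent flow looks like when written down. Every screen, label, colour and toggle count below is invented for illustration; it is a model of the pattern described above, not Facebook’s actual interface.

```python
# Illustrative model of an asymmetric "dark pattern" consent flow.
# All screens, labels and colours are invented; only the asymmetry
# (one click to accept, many screens to opt out) reflects the text above.

ACCEPT_FLOW = [
    {"screen": "consent", "button": "Accept and continue",
     "colour": "bright blue", "size": "large"},
]  # accepting data collection: a single, prominent click

MANAGE_FLOW = [
    {"screen": "consent", "button": "Manage data settings",
     "colour": "grey", "size": "small"},
    {"screen": "warning",
     "text": "Some features may stop working if you change these settings."},
    {"screen": "granular choices", "toggles": 12, "default": "on"},
    {"screen": "confirm", "text": "Are you sure you want to change this?"},
]  # opting out: four screens, every toggle defaulting to data collection

print(f"{len(ACCEPT_FLOW)} step to accept vs {len(MANAGE_FLOW)} steps to opt out")
```

The step count and the defaults are the whole trick: consent is frictionless, refusal is laborious, and the “off” state is never the starting point.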
Here’s Facebook’s pitch for its intrusive face-recognition feature: it “lets us know when you’re in other photos or videos so that we can create a better experience”. By framing the use of face recognition in a solely positive manner, deliberately leaving out any possible negative consequences, Facebook nudged users toward enabling the option without fully informing them.
Dark patterns are described in the Deceived by Design report as “ethically problematic, because they mislead users into making choices that are not in their interest and deprive them of their agency”.
The control paradox is by no means the only psychological quirk the social network exploits. The fields of behavioural economics and psychology describe how users’ decision-making and behaviour can be influenced by appealing to their psychological biases. Studies have found that individuals overestimate their ability to make unadulterated decisions.
In reality, individuals are in constant flux between states of rationality and cognitive fallibility. But most of us believe we are more rational than the average individual, a fitting demonstration of the Dunning-Kruger effect, whereby most people overestimate their abilities.
For example, individuals form temporary preferences for small rewards that arrive sooner over more substantial long-term gains, and prefer choices and information that confirm their pre-existing beliefs. Facebook exploits these and other human tendencies and triggers, such as social approval, the need to belong, the fear of missing out, intermittent variable rewards, reciprocal expectations and other biological vulnerabilities, to keep users hooked on the platform.
Sandy Parakilas, a former Facebook operations manager, says the company is generating economic value by using data about you “to predict how you’re going to act and manipulate you”.
Jennifer King, the director of consumer privacy at the Centre for Internet and Society at the Stanford Law School, echoed a similar view. “As long as Facebook keeps collecting personal information, we should be wary that it could be used for purposes more insidious than targeted advertising, including swaying elections or manipulating users’ emotions,” she told The New York Times.
If the neo-Luddite tone of this article appears simplistic, you’re right.
Facebook’s algorithms are optimised to exploit what traditional media has understood for centuries. Its ad-supported business model competes for our finite attention by optimising for negative emotions such as outrage and hate in a zero-sum race to the bottom. As the saying goes: “If it bleeds, it leads.”
Even if users are interested in a broad range of news from different political perspectives, Facebook’s algorithms will favour articles that confirm their political prejudices. The fact of the matter is that negative emotions are more accessible and therefore more cost-effective.
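A toy example shows why “more cost-effective” follows. Here is a minimal Python sketch of a feed ranker ordered purely by predicted engagement; the posts, field names and numbers are invented for illustration and are not Facebook’s actual model.

```python
# Toy feed ranker: order posts purely by predicted engagement.
# Posts, field names and scores are invented for illustration only.

posts = [
    {"title": "Measured policy analysis",      "predicted_engagement": 0.02},
    {"title": "Outrage-bait headline",         "predicted_engagement": 0.11},
    {"title": "Post confirming your politics", "predicted_engagement": 0.07},
]

def rank_feed(posts):
    """Rank by predicted clicks, shares and comments, and nothing else."""
    return sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

for post in rank_feed(posts):
    print(f'{post["predicted_engagement"]:.2f}  {post["title"]}')
```

Nothing in the objective asks whether a post is true or civil; if outrage reliably earns more engagement, outrage wins the auction for your attention by construction.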
How then is Facebook any different from traditional media or other technology companies?
Besides the business model that underpins the company’s every decision, it is also the most powerful communications and media company in the world by every available measure. Robyn Caplan, a research analyst at Data & Society, points out that Facebook has no rival in size, popularity and functionality. When Facebook introduced its News Feed in 2006, it blindsided its users, shifting from connecting friends to controlling what friends see.
Fast-forward 10 years and the shift is more evident than ever. News Feed’s filtered stream of social content has captured the market and emerged as the most significant distributor of news in the world.
Arguably, none of this is unusual. Traditional mainstream media also has considerable influence; however, the differences are significant. In addition to being constrained by rigorous industry rules and norms, competition in the mainstream press allows for content to be comparatively assessed across different news outlets for possible bias. Potential prejudice is curbed by regulations that limit the power, reach and ownership of any single outlet, thus safeguarding the diversity of content.
The personalisation of Facebook’s News Feed makes these comparative studies nearly impossible. Even if you could establish an information pattern for Facebook’s users, what would you compare it to?
In his puzzling testimony before the United States House of Representatives and Senate, Zuckerberg attempted to dismiss the monopoly label. He said: “Consumers have lots of choices over how they spend their time.”
By this logic, reporter Paul Blumenthal notes that “Facebook can never be a monopoly in Zuckerberg’s eyes because its competition is every other form of human activity” and by this measure its “biggest competitors are work and sleep”.
Technology companies have managed to convince people that algorithms produce some kind of data-mined objective truth, unadulterated by human fallibility. This is not the case; humans are involved in every step of the process. From the initial training data to the design of the models and the analysis and tweaking of the results, every boundary condition is set by humans, whose conscious and unconscious biases may be expressed in the results.
Possible malfeasance or corruption aside, even for the “many good people working there” at Facebook, creating a platform that resembles neutrality is difficult. Algorithms are trained on our past behaviours to predict our future, but by definition the past is not the future. Societies and individuals are constantly changing, so training data needs to represent the current population and account for gradual social drift. We have to acknowledge that even well-designed algorithms can have profound implications for society: an algorithm designed to connect like-minded people will at the same time isolate them.
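A toy simulation illustrates that isolation effect. Under one invented assumption, that users click slightly more on what they already lean towards, a recommender retrained each day on yesterday’s clicks narrows the feed all by itself; the categories and numbers below are made up for the sketch.

```python
# Toy filter-bubble loop: retrain daily on yesterday's clicks.
# Categories, starting weights and the click model are all invented.
import random

random.seed(0)
interests = {"politics_left": 0.34, "politics_right": 0.33, "sport": 0.33}

for day in range(30):
    # Serve 20 items in proportion to the current interest estimates...
    shown = random.choices(list(interests), weights=interests.values(), k=20)
    # ...and assume users click a little more on what they already favour.
    clicks = {c: shown.count(c) * (1 + interests[c]) for c in interests}
    total = sum(clicks.values())
    interests = {c: clicks[c] / total for c in interests}  # tomorrow's feed

print({c: round(w, 2) for c, w in interests.items()})
# The feed drifts toward whichever category attracted the early clicks;
# a near-even start compounds into a lopsided stream.
```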
The reality is that most people are far less likely to engage with viewpoints that challenge their preconceived views, even in the absence of social media. If the polarisation of communities is a product of our biology, then perhaps social media companies’ neutral-platform defence, that they merely track user preferences and connect like-minded people, is credible.
But algorithms are not passively monitoring users’ preferences; they actively steer behaviour and thoughts. Measured online conversations are pushed to the fringes and drowned out by radicalised views fomented by unsubstantiated rumours, mistrust and paranoia. Echo chambers are not the result of free association on the false premise of platform neutrality; they are the result of optimising outrage for profit. Facebook knows that outraged users are engaged users. Digital misinformation has become so pervasive online that the World Economic Forum has classified it as one of the biggest threats to our society.
Unfortunately, because of the mismatch between the speed of technological development and the gradual grind of accountability, whether moral, ethical or legal, technology companies can exploit the technology landscape unchecked. Regulation cannot keep up with the speed of invention and, when it does catch up, companies find ways to circumvent new laws.
A case in point: it took years for the government and the public to begin to understand that Facebook was mining vast datasets about its users. Facebook’s motto, “move fast and break things”, attests to an attitude of arrogant carelessness. It is quite happy to ask for forgiveness rather than permission.
Facebook has indicated some willingness to change by adjusting the News Feed algorithm to address these issues and prioritise posts from friends and family over viral videos, news and other content. Zuckerberg announced a significant overhaul of Facebook’s News Feed algorithm that would prioritise “meaningful social interactions” over “relevant content” after pledging to spend 2018 “making sure that time spent on Facebook is time well spent”.
Perhaps these changes should carry more weight, or perhaps they deserve only the blip of attention Zuckerberg has given them. I’m not buying what he’s selling. Sam Lester, a consumer privacy fellow at the Electronic Privacy Information Centre, points out that we are “looking to the company that caused these problems to fix them”.
Facebook cannot close the Pandora’s box it opened a decade ago when it allowed external apps to collect user data indiscriminately. The public may never know the extent to which these companies have copied and shared their personal information with potentially nefarious and destructive forces. We should not conflate our understanding of the natural world with its digital counterpart. Deleting information online is not equivalent to burning a note.
Our public understanding of humanity’s potential for change has produced laws that honour redemption in the real world by clearing a person’s record after a fixed period of time. The right to be forgotten, however, does not pertain to decentralised digital information that could easily be shared and stored on millions of devices. Our choices today will affect the rest of our lives and those of the next generation.
We are like naive children hiding behind our hands, and the grown-ups are perfectly content to play along.
Social media itself isn’t going away. It has become an integral part of our lives, satisfying a basic human need to connect and share information. Yes, we have come to depend on social networks, but should we accept our virtual makeshift community? Are we destined to wander among the virtual identities of friends and pseudo-friends, projecting idealised versions of themselves and making us feel inadequate and mediocre? The already muddied water between fiction and reality will become even more ambiguous in the future, when artificial intelligence-enhanced video and audio forgeries become commonplace.
The human mind is incredibly susceptible to forming false memories. This tendency will only be exacerbated by artificial intelligence-enhanced forgeries on the internet, where false ideas spread like viruses among like-minded people.
A big part of the danger of this technology is that, unlike older photo and video editing techniques, it will be more widely accessible to people without great technical skill. “I’m more worried about what this does to authentic content,” said Hany Farid, a professor of computer science at Dartmouth College. “Think about Donald Trump. If that audio recording of him saying he grabbed a woman was released today, he would have plausible deniability.”
You should delete Facebook for the reasons already mentioned, but you probably won’t because of them. We have already run this experiment. No sooner had #DeleteFacebook gone viral last year than droves of users signed back up. Users have come to rely on the platform to socialise, organise, procrastinate and hide behind virtual identities.
Conceivably the most common reason individuals joined Facebook in the first place was to connect with friends. Without any social engineering on its part, initially at least, Facebook was able to convince users to share personal information by connecting friends who trust each other.
Sharing information online is not novel, but creating an environment to share personally identifiable information is. Perhaps users do not trust Facebook inherently, but they inherently trust their friends, and by proximity conflate the two. If the word “trust” raises more questions than answers, substitute it with “familiarity”. You joined Facebook because it feels familiar, and you stay or come back for the same reason.
Humans are intrinsically social animals and seek out intimacy, whether we want to or not. As Stephen Marche wrote in The Atlantic: “A considerable part of Facebook’s appeal stems from its miraculous fusion of distance with intimacy, or the illusion of distance with the illusion of intimacy.”
Antonio García Martínez, an author and tech engineer who formerly worked at Facebook, elaborates on the illusion, describing Facebook as a cheap digital knock-off: Facebook is to real community what porn is to real sex. “Unfortunately, in both instances use of the simulacrum fries your brain in ways that prevent you from ever experiencing the real version again.”
Individuals are always going to be at a disadvantage given the information asymmetry that exists between Facebook and its users. Tim Wu, the author of The Attention Merchants, outlines a potential path forwards. “What we most need now is a new generation of social media platforms that are fundamentally different in their incentives and dedication to protecting user data.”
The French cultural theorist Paul Virilio, best known for his writings about technology, appropriately stated that “the invention of the ship was also the invention of the shipwreck”, describing the inevitable cost that is associated with progress.
This eloquent account of progress’s casualties permeates almost every aspect of Facebook. Facebook, and social networks like it, will indeed provide a makeshift community for those whose worlds are being destroyed around them, while at the same time providing a megaphone for the destroyers.
The suggestion is that we are dealing with an immovable force, and that seems true considering Facebook’s social media monopoly and its power to influence billions of people daily. Believing that we are somehow immune to the platform’s psychological nudges is naive, and the sooner we accept its absolute power, the sooner we can choose to move on.
Facebook may be one of the first social media companies to emerge alongside the internet; it need not be the last. Facebook is the sum of its users: you are Facebook, and you can also choose not to be. The critical mass of users that ensured the rapid network effect can also be a powerful driving force in the opposite direction.
Baratunde Thurston, an adviser at Data & Society, says: “Since companies value us collectively, we must restore balance with a collective response that is based on the view that we’re in this together; that our rights and responsibilities are shared.”
Pieter Henning is an artist and designer who lives and works in Cape Town. Follow him on Twitter @P_d_henning
Huge concern: Facebook, not surprisingly, but also not exclusively, already works with third-party data brokers to merge users’ online activity and profiles with offline behaviour. Photo: Frank Hoermann/AFP
No likes: The group ‘Raging Grannies’ called for better consumer protection and online privacy in the wake of Cambridge Analytica’s access to users’ data. Photo: Justin Sullivan/Getty Images/AFP