The Behavioural Science of Online Manipulation
Organizations have two choices: exploit human biases or ‘nudge for good’. The latter will shape the evolution of the digital world — and build fairer, better markets.
NO ONE WOULD ARGUE that the Internet has transformed the way we live, work and play. Today you can make ‘friends’ around the world without ever saying hello; compare products from dozens of shops without leaving the house; or plan a date with a stranger without breaking a sweat. But while the benefits of life online are significant, so too are the economic and social costs.
By now, most readers are familiar with the term ‘nudge’; but there is a darker side to the evolution of online and digital markets: ‘Sludge’ and other techniques seek to harness our behavioural biases against us rather than for us.
Nobel Laureate in Economics Richard Thaler, a longstanding adviser to the Behavioural Insights Team, often signs his books with the maxim ‘Nudge for good’. In this article we will explore some of the ways in which organizations might seek to enact this sentiment in the online world to shape its evolution — and build fairer, better online markets.
Exploiting Consumer Biases
In the same way that our behaviour in the offline world is sensitive to subtle cues from the environment, our behaviour online is shaped by the design and characteristics of the websites, platforms and apps we interact with. Nobel Laureates Robert Shiller, George Akerlof and Thaler have written about how companies seek to manipulate and exploit our cognitive biases and psychological weaknesses. Two types of manipulation identified by them are particularly common: The deliberate exploitation of information deficits and behavioural biases (‘phishing’); and the deliberate addition of frictions and hassles to make it harder to make good choices (‘sludge’). Let’s take a closer look at each.
PHISHING. Companies have long used techniques to target and exploit our psychological weaknesses. Shiller and Akerlof call this phishing. Offering a cash-back deal is a classic example. The vast majority of us erroneously predict that we will go to the effort of actually claiming the money. Our purchase decisions are more sensitive to the promise of the cash than to the effort required in actually claiming it, while redeeming the cash-back offer is driven more by the effort involved than the cash available. Sellers can experiment to find the optimum level of cash-back that will tempt you, but not be so large that you will actually complete the claim.
Today, a shopper may subscribe to Amazon Prime, believing that they are saving money via free shipping — when in reality they are just being tempted to buy more. In fact, the longer someone is a Prime member, the more they spend on the platform. In the past, we had no way of knowing whether we were fast enough to be in the ‘first 50 responders’ to claim a free gift or discount. But today we are likely to trust real-time updates, for example, from hotel-booking sites urging that there is ‘only one room left at this price’ or ‘five other people are looking at this room right now’, especially if they have already shown you all the sold-out hotels in your initial search results. This is designed to harness our sense of scarcity and our desire to see ‘social proof’ that other people are making choices similar to ours. What they don’t tell you is that those five people aren’t necessarily looking at the same room as you.
The intersection of data, imbalances of information and intelligent marketing also opens up new opportunities to exploit our biases. Conspiracy theorists might claim that our devices are ‘listening’ to our conversations, when in fact the data we willingly share is more than enough to predict what we might buy, when, how much we are willing to pay and the flawed decisions we might make along the way. Further, the data we share online also allows companies to move away from the model of active, consumer-led searches and towards prompting us with targeted advertising and information at times when we are most vulnerable to being manipulated. For example, the rise of ‘femtech’ apps to track menstrual cycles and fertility has allowed businesses to collect large and robust data sets to produce tailored and timely marketing. More disturbingly, social media platforms’ ability to identify users’ emotional states based on their activity means that — in the alleged words of Facebook staff — advertisers now have the capacity to identify when teenagers feel insecure or need a ‘confidence boost’.
SLUDGE. The conscientious nudger seeks to design systems and processes to help people make better choices for themselves — for instance, by using defaults that make it easier to save, ranking deals based on best value and telling us what other people like us are doing. Thaler coined the term ‘sludge’ to describe the deliberate use of these same tactics to prevent us from acting in our best interests.
There is plenty of sludge offline — notably in administrative paperwork burdens — but online environments allow sludge to thrive. You can set up an online news subscription with one click, yet when you go to cancel it, you’ll find yourself in a complicated loop of online and offline hurdles: Calling a phone number during restricted business hours, filling out a detailed form and posting it to the company, or even being required to visit a physical store. These muddy frictions are deliberate. Companies know that they matter.
Online airline bookings are also fertile ground for sludge. An enticingly low flight cost and a large ‘book now’ button are often followed by a series of pages slowly adding on additional choices, information and costs. When you get to reserving your seat, you discover that you can pay to select one online; if you want to be randomly allocated a free seat, you’ll need to queue at the check-in desk on the day of the flight. All of this is designed to sell you more and discourage you from choosing free or cheaper options.
Of course, friction can also be dialled down or removed from processes to encourage behaviour that is self-defeating. You can be approved for a high-cost payday loan with just a few clicks, and shopping or gambling online at 3 a.m. is easy to start and difficult to stop. The practice on Netflix and other streaming platforms of automatically starting the next episode in a series is a clear example of the power of removing friction, or flipping the default: People fail to press stop, and thereby watch another episode they might not have watched if they had to press play.
Morals, Ethics and Social Networks
It is undeniable that digital markets are changing the nature of the economy. Yet as Adam Smith exposed, there is more to a market than money. Markets are entwined with ‘moral sentiments’, social networks and a myriad of habits and customs that shape our lives and societies.
‘Social capital’ refers to our connections to others and the level of trust, informal rules and common understandings that facilitate communication and exchange within those networks. It helps information flow, lowers transaction costs and drives fairer exchange. Online markets — and social media in particular — have the potential to significantly affect the character and form of our social capital — for example, by creating ‘echo chambers’ that fool us into thinking our social and political ‘bubbles’ are representative.
Nudge co-author Cass Sunstein has written about a phenomenon that exacerbates these echo chambers: ‘asymmetrical updating’. This is a strong tendency to favour evidence that confirms our beliefs and to ignore or misread new evidence that does not. This tendency has been observed in beliefs about topics such as climate change, the death penalty, affirmative action, and sexism in male-dominated subjects and industries.
Social media helps asymmetrical updating thrive, since many now use it as a primary news source without noticing how their choices about who to follow have fundamentally altered the balance of information they receive. Website algorithms curate what we see and prompt us where to go next based on our past
usage. The powerful logic of ‘people who liked this also liked that’ takes us ever deeper into the bubble.
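The ‘people who liked this also liked that’ logic can be illustrated with a minimal item-based co-occurrence count. This is a hypothetical sketch of the general idea, not any platform’s actual algorithm — real recommenders draw on far richer signals:

```python
from collections import Counter
from itertools import combinations

# Hypothetical viewing/liking histories (one set of items per user).
histories = [
    {"A", "B", "C"},
    {"A", "B"},
    {"B", "C"},
    {"A", "D"},
]

# Count how often each pair of items is liked together.
co_counts = Counter()
for items in histories:
    for pair in combinations(sorted(items), 2):
        co_counts[pair] += 1

def also_liked(item, n=2):
    """Items most often liked alongside `item`, most frequent first."""
    scores = Counter()
    for (a, b), count in co_counts.items():
        if a == item:
            scores[b] += count
        elif b == item:
            scores[a] += count
    return [other for other, _ in scores.most_common(n)]

print(also_liked("A"))  # → ['B', 'C']
```

Each recommendation feeds the next round of co-occurrence data, which is precisely how the bubble deepens: the more a cluster of users converges on the same items, the more strongly those items are recommended back to them.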
Algorithms can also create or destroy social capital more directly by drawing on personal data to exclude people from opportunities. Consider, for example, a babysitting app that uses Facebook news feeds to rate the suitability of babysitters on the basis of whether they’ve discussed drugs or seem to have a ‘bad attitude’ online.
Studies have already highlighted examples where online transactions and interactions are leading to new forms of discrimination and disadvantage. While platforms like eBay have persisted with pseudonyms, others have encouraged users to provide information about themselves to build trust and, over time, reputation. For example, Airbnb requires real names, and it has been shown that guests with distinctively African American names are 16 per cent less likely to be accepted than identical guests with distinctively white names, and that male guests in implied same-sex relationships were up to 30 per cent less likely than other couples to have their bookings accepted by Airbnb hosts. The discrimination was not in the form of outright rejections but more insidious, with hosts failing to respond at all.
But the impacts on social capital and cohesion are not all negative. Digital markets have also been able to supplement and enhance society’s stock of social capital. Platforms such as eBay have made it possible to trust relative strangers; LinkedIn can extend the ‘weak ties’ that play a key role in getting a job; Facebook has made it easy to stay in touch with old friends; and sites like TripAdvisor create a much wider network than word-of-mouth recommendations.
A related concern is how the evolution of online environments is affecting the character of our social relationships and communication with others. Most obviously, anonymized communication reduces the reputational costs usually associated with negative behaviour and exchanges. This can lead to more aggression (such as anonymized subjects administering more pain to others) and dishonesty, though it can also leave more room for self-expression and the ability (for some) to resist problematic social norms. There is also some evidence that the characteristics that people adopt in online avatars ‘leak’ into their offline behaviour, for good or bad.
A number of social media sites and online marketplaces are wrestling with this issue, and hate-speech laws pick up on some of it. But the majority of the time, online incivility occupies a grey area between the law and public acceptability: hurtful comments, inaccurate reviews, bullying and shaming. A survey commissioned by Amnesty International in Australia found that nearly half of women aged 18 to 24 had experienced online harassment, including sexist abuse, ‘trolling’ (posting deliberately offensive or provocative content), threats of sexual violence, the posting of intimate pictures online without consent and ‘doxxing’ (the sharing of identifiable details). It is clear that a self-regulatory dynamic is not enough.
One of the knock-on consequences of all of this is widely thought — though not yet proven — to be harm to mental health. New evidence suggests two potential factors at play: negative feelings due to hurtful interactions or negative content, and time substituted away from activities that enhance well-being, such as offline socializing.
In one recent large-scale field experiment, researchers found that Facebook users who deactivated their accounts for four weeks spent less time online, reduced political polarization (although at the expense of factual news knowledge) and, most crucially, increased subjective well-being. This increase was ‘small but significant’, in particular on self-reported happiness, life satisfaction, depression and anxiety, and the size of the effect on overall subjective well-being was equivalent to about 25 to 40 per cent of the effect of interventions such as therapy.
Of particular concern is the impact of social media on the mental health of younger people. A recent survey of people aged 14 to 25 conducted by the Royal Society for Public Health, comparing the five most popular social media platforms on a range of positive and negative mental health outcomes, found that YouTube was the most positive, for example by raising awareness and enabling self-expression, while Instagram was the most negative, via its impact on body image and fear of missing out.
These findings are not trivial. Young adults are currently estimated to spend an average of four hours online every day, and this shift has coincided with a marked rise in mental health conditions in teenage girls, including depression, anxiety disorders and self-harm. While each individual case of mental illness is complex, there is increasing evidence that social media use is at the very least associated with these issues. One study analyzed survey data from 10,904 14-year-olds in the UK to explore the relationship between social media use and depressive symptoms. It found, particularly for girls, that greater social media use was associated with online harassment, poor sleep, low self-esteem and poor body image, and that these were in turn linked to depressive symptoms.
What Can, And Should, We Do?
An effective response to these challenges must make good use of the entire regulatory toolkit and also build on it to introduce new ways of shaping markets. And not just government and regulators: industry has a key role to play. Following are a few principles to consider.
SMARTER DISCLOSURES. There are many opportunities to improve the information being provided to consumers online — from how their data is being collected, to whether algorithms are being used to make decisions, to a website’s terms and conditions (T&Cs) — and to vary when and how this information is presented. Where there is no clear regulatory power to compel companies to provide information in a particular format, these smarter disclosures can be encouraged through best-practice guides, codes of conduct and reporting requirements.
Our colleagues recently concluded a series of online experiments testing ways of applying behavioural science to improve consumer comprehension of (and engagement with) online T&Cs and privacy policies. They found that telling customers how long a privacy policy normally takes to read increased open rates by 105 per cent; and using a question-and-answer format to present key terms and illustrating them with explanatory icons increased understanding of T&Cs by more than 30 per cent.
EXHORTATION. A lot of policy, and certainly political activity, urges citizens or firms to do something differently. The UK’s Chief Medical Officer, for example, recently urged parents to ‘leave phones outside the bedroom at bedtime’ and enjoy ‘screen-free meal times’ with their families.
Exhortation is not a bad place to start when there are genuine concerns but also doubts about direct interventions and a desire to let people make up their own minds. Yet in some areas we should be ready to move beyond general exhortation, and be more targeted in two ways: First, to focus on urging companies rather than consumers to change their behaviour, and second, to look to change the behaviour of specific companies as an example to industry.
Consumers have often been exhorted to be more ‘engaged’ in a range of markets and, in particular, to switch more often. In energy, insurance and banking markets, economists and some regulators have bemoaned the passivity of consumers: ‘If only more consumers would be rational and switch, they would get better deals and the market would work.’ The implication is that ‘there is nothing wrong with our model’ and that market failures are the ‘fault’ of consumers. We now know enough to be wary of this interpretation. It’s not just that consumers may have better things to do with their time than endlessly shop around. It’s also that markets are evolving to add subtle frictions, distractions and confusions that make it hard for consumers to switch — and very easy to stick.
More of our focus needs to be placed on nudging and prodding companies to change their behaviour to make it easier for consumers to identify better deals and switch. More fundamentally, government and regulators need to be guiding the evolution of markets — and sometimes directly intervening — to make sure that good companies and practices are the ones that are winning market share, and poor companies and practices are squeezed out.
CHOICE ARCHITECTURE. How overt should a choice be? When should it be made? To what extent should choices be different, or presented differently, for different groups? These are the types of questions involved in building powerful choice architecture.
One thing regulators and innovative companies can seek to do is to put as much control as possible back into the hands of the individual. This is not the same as just encouraging people to switch a product or service. Rather, it is about allowing consumers to express their preferences and to modify their online experiences in line with them.
Adding in specific and obvious user controls, particularly on quasi-monopolistic platforms, could be a tool for regulators — and a market advantage for challengers. Just as a user can now easily adjust the font size on a screen to suit their eyesight, they might also be able to — or be prompted to — adjust settings that:
• control the level, type or source of advertising (or other) material they are seeing (noting the potential commercial implications of this);
• control how their data is collected and shared; or
• choose or weight the criteria by which things are ranked and presented — for example by independent retailer, female commentator, lowest total price, most reliable news source or healthiest option.
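The slider idea in the last bullet can be sketched as a weighted score over normalized criteria, with the weights set by the user rather than the platform. The listings, criterion names and weights below are hypothetical, purely to illustrate the mechanism:

```python
# Hypothetical listings with normalized 0-1 scores for each criterion.
listings = [
    {"name": "Hotel A", "low_price": 0.9, "reliability": 0.4, "independent": 1.0},
    {"name": "Hotel B", "low_price": 0.5, "reliability": 0.9, "independent": 0.0},
    {"name": "Hotel C", "low_price": 0.7, "reliability": 0.7, "independent": 1.0},
]

def rank(listings, weights):
    """Order listings by the user's own slider weights, highest score first."""
    def score(item):
        return sum(weight * item.get(criterion, 0)
                   for criterion, weight in weights.items())
    return sorted(listings, key=score, reverse=True)

# A user who cares mostly about price, a little about reliability:
price_first = rank(listings, {"low_price": 1.0, "reliability": 0.2})
print([l["name"] for l in price_first])  # → ['Hotel A', 'Hotel C', 'Hotel B']
```

The point of the sketch is that the same inventory yields a different ordering for each set of weights — move the ‘reliability’ slider up and a different listing rises to the top — so control over the ranking criteria really is control over what the user sees first.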
Currently, privacy settings and other controls are hard to change because they are buried deep in settings menus, and a key challenge is how to prompt users to engage with and adjust them. Google Chrome content settings, for example, are hidden in the ‘advanced’ settings, where many users are unlikely to find them. Sometimes this is done to add deliberate friction. But sometimes it is done for good reason: to avoid filling screens with rarely used choices, or because a choice that seems important to one user may not be to another.
There may be a role for regulators here, in encouraging or requiring companies to make user settings more prominent, either during ongoing use or in the set-up phase. However, this activity is more likely to be driven by fostering the development of new intermediaries. For example, the MIT Media Lab’s Center for Civic Media is building a tool called Gobo to aggregate content from large platforms and then enable users to customize that content. It gives users a series of ‘sliders’ to curate what news they see and what is hidden. For example, you can express a preference to see wider news sources or more female commentators. Gobo and other intermediaries could offer more control and customization to users without regulators requiring large platforms to change their own user controls.
Another way of putting control back into the hands of users is through the use of prompts and reminders. These have been used to great effect in the offline world, to encourage people to switch their energy provider; attend, cancel or rebook their medical appointments; attend career appointments; pay their court fines; and save money or repay their credit cards. Online, they can be a powerful mechanism to elicit consumer preferences. This is especially the case if they contain a meaningful and ‘active’ choice — requirements to respond ‘yes’ or ‘no’ to a question before proceeding — offered at a salient moment.
An illustration of the power of an active choice: researchers found that presenting consumers with an active choice on whether to pick up their prescription from the pharmacy or have it home-delivered moved the take-up rate for home delivery from 6 per cent to 42 per cent. It was not that they couldn’t have chosen home delivery before; it just was not the default option.
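The gap between a passive default and an active choice comes down to whether silence is allowed to stand in for a decision. A hypothetical few lines of code (not the design used in the study itself) make the distinction concrete:

```python
def delivery_method(response=None):
    """Return the delivery method for a prescription order.

    With a passive default, no answer (None) silently means pickup.
    With an active choice, the flow refuses to proceed until the
    user explicitly answers 'yes' or 'no' to home delivery.
    """
    if response is None:
        return "pickup"  # passive default: most users never opt in
    if response not in ("yes", "no"):
        raise ValueError("active choice requires an explicit yes or no")
    return "home_delivery" if response == "yes" else "pickup"

print(delivery_method())       # → pickup (the user was never really asked)
print(delivery_method("yes"))  # → home_delivery
```

Requiring the explicit ‘yes’ or ‘no’ at a salient moment is what turns a buried option into an elicited preference.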
In Closing
The time has come for industry and government to address the real and appropriate public concerns that exist around issues ranging from algorithmic bias, to disinformation, to the mental health of children and young people online. A sophisticated understanding of human behaviour — including active and constructive dialogue with the public — should be at the heart of designing successful policy solutions.
Our governments and regulators stand at the vibrant intersection between civil society and market functioning. How we respond to, and shape, the evolving character of the digital landscape is critical not just because it is pivotal to our economies, but because it is society and the human character itself that we are shaping.