Rotman Management Magazine

The Behavioural Science of Online Manipulation

Organizations have two choices: exploit human biases or ‘nudge for good’. The latter will shape the evolution of the digital world — and build fairer, better markets.

- By Elisabeth Costa and David Halpern

NO ONE WOULD ARGUE that the Internet has transformed the way we live, work and play. Today you can make ‘friends’ around the world without ever saying hello; compare products from dozens of shops without leaving the house; or plan a date with a stranger without breaking a sweat. But while the benefits of life online are significant, so too are the economic and social costs.

By now, most readers are familiar with the term ‘nudge’. But there is a darker side to the evolution of online and digital markets: ‘sludge’ and other techniques seek to harness our behavioural biases against us rather than for us.

Nobel Laureate in Economics Richard Thaler, a longstanding adviser to the Behavioural Insights Team, often signs his books with the maxim ‘Nudge for good’. In this article we will explore some of the ways in which organizations might seek to enact this sentiment in the online world to shape its evolution — and build fairer, better online markets.

Exploiting Consumer Biases

In the same way that our behaviour in the offline world is sensitive to subtle cues from the environment, our behaviour online is shaped by the design and characteristics of the websites, platforms and apps we interact with. Nobel Laureates Robert Shiller, George Akerlof and Thaler have written about how companies seek to manipulate and exploit our cognitive biases and psychological weaknesses. Two types of manipulation identified by them are particularly common: the deliberate exploitation of information deficits and behavioural biases (‘phishing’), and the deliberate addition of frictions and hassles that make it harder to make good choices (‘sludge’). Let’s take a closer look at each.

PHISHING. Companies have long used techniques to target and exploit our psychological weaknesses; Shiller and Akerlof call this phishing. Offering a cash-back deal is a classic example. The vast majority of us erroneously predict that we will go to the effort of actually claiming the money: our purchase decisions are more sensitive to the promise of the cash than to the effort required to claim it, while actually redeeming the offer is driven more by the effort involved than by the cash available. Sellers can experiment to find the optimum level of cash-back: large enough to tempt you into buying, but not so large that you will actually complete the claim.
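The underlying calculation is simple enough to sketch. Below is a toy model in Python of how a seller might tune such an offer; the functions, numbers and response curves are all invented for illustration, not drawn from any real retailer. The point is only that when redemption falls with effort, there is a profitable ‘sweet spot’ where the promised cash drives purchases but is rarely claimed.

```python
# A toy model of cash-back 'phishing'. All numbers and functional
# forms below are hypothetical, invented purely for illustration.

def purchase_uplift(cashback):
    """Extra units sold: buyers respond to the headline cash on offer."""
    return 0.02 * cashback

def redemption_rate(cashback, effort):
    """Share of buyers who actually claim: rises with the amount,
    falls with the effort (forms, stamps, deadlines) required."""
    return max(0.0, min(1.0, 0.01 * cashback - 0.05 * effort))

def expected_profit(cashback, margin=50.0, effort=5.0):
    extra_sales = purchase_uplift(cashback)
    expected_payout = redemption_rate(cashback, effort) * cashback
    return extra_sales * margin - expected_payout

# The seller simply scans for the sweet spot: tempting enough to drive
# sales, claimed rarely enough to stay cheap.
best = max(range(0, 101, 5), key=expected_profit)
print(f"optimal cash-back: ${best}, expected profit: {expected_profit(best):.2f}")
```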

Today, a shopper may subscribe to Amazon Prime, believing that they are saving money via free shipping — when in reality they are just being tempted to buy more. In fact, the longer someone is a Prime member, the more they spend on the platform. In the past, we had no way of knowing whether we were fast enough to be in the ‘first 50 responders’ to claim a free gift or discount. But today we are likely to trust real-time updates, for example, from hotel-booking sites urging that there is ‘only one room left at this price’ or ‘five other people are looking at this room right now’, especially if they have already shown you all the sold-out hotels in your initial search results. This is designed to harness our sense of scarcity and our desire to see ‘social proof’ that other people are making choices similar to ours. What they don’t tell you is that those five people aren’t necessarily looking at the same room as you.

The intersection of data, imbalances of information and intelligent marketing also opens up new opportunities to exploit our biases. Conspiracy theorists might claim that our devices are ‘listening’ to our conversations, when in fact the data we willingly share is more than enough to predict what we might buy, when, how much we are willing to pay and the flawed decisions we might make along the way. Further, the data we share online also allows companies to move away from the model of active, consumer-led searches and towards prompting us with targeted advertising and information at times when we are most vulnerable to being manipulated. For example, the rise of ‘femtech’ apps to track menstrual cycles and fertility has allowed businesses to collect large and robust data sets to produce tailored and timely marketing. More disturbingly, social media platforms’ ability to identify users’ emotional states based on their activity means that — in the alleged words of Facebook staff — advertisers now have the capacity to identify when teenagers feel insecure or need a ‘confidence boost’.

SLUDGE. The conscientious nudger seeks to design systems and processes that help people make better choices for themselves — for instance, by using defaults that make it easier to save, ranking deals based on best value, and telling us what other people like us are doing. Thaler coined the term ‘sludge’ to describe the deliberate use of these same tactics to stop us from acting in our best interests.

There is plenty of sludge offline — notably in administrative paperwork burdens — but online environments allow sludge to thrive. You can set up an online news subscription with one click, yet when you go to cancel it, you’ll find yourself in a complicated loop of online and offline hurdles: calling a phone number during restricted business hours, filling out a detailed form and posting it to the company, or even being required to visit a physical store. These muddy frictions are deliberate. Companies know that they matter.

Online airline bookings are also fertile ground for sludge. An enticingly low flight cost and a large ‘book now’ button are often followed by a series of pages slowly adding on additional choices, information and costs. When you get to reserving your seat, you discover that you can pay online or, if you want to be randomly allocated a free seat, queue at the check-in desk on the day of the flight. All of this is designed to sell you more and to discourage you from choosing free or cheaper options.

Of course, friction can also be dialled down or removed from processes to encourage behaviour that is self-defeating. You can be approved for a high-cost payday loan with just a few clicks, and shopping or gambling online at 3 a.m. is easy to start and difficult to stop. The practice on Netflix and other streaming platforms of automatically starting the next episode in a series is a clear example of the power of removing friction, or flipping the default: people fail to press stop, and thereby watch another episode they might not have watched if they had to press play.

Morals, Ethics and Social Networks

It is undeniable that digital markets are changing the nature of the economy. Yet as Adam Smith recognized, there is more to a market than money. Markets are entwined with ‘moral sentiments’, social networks and a myriad of habits and customs that shape our lives and societies.

‘Social capital’ refers to our connections to others and the level of trust, informal rules and common understandings that facilitate communication and exchange within those networks. It helps information flow, lowers transaction costs and drives fairer exchange. Online markets — and social media in particular — have the potential to significantly affect the character and form of our social capital — for example, by creating ‘echo chambers’ that fool us into thinking our social and political ‘bubbles’ are representative.

Nudge co-author Cass Sunstein has written about a phenomenon that exacerbates these echo chambers: ‘asymmetrical updating’. This is a strong tendency to favour evidence that confirms our beliefs and to ignore or misread new evidence that does not. This tendency has been observed in beliefs about topics such as climate change, the death penalty, affirmative action, and sexism in male-dominated subjects and industries.

Social media helps asymmetrical updating thrive, since many now use it as a primary news source without noticing how their choices about who to follow have fundamentally altered the balance of information they receive. Website algorithms curate what we see and prompt us where to go next based on our past usage. The powerful logic of ‘people who liked this also liked that’ takes us ever deeper into the bubble.
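That logic can be captured in a few lines. Here is a minimal sketch in Python of an item-to-item, ‘also liked’ recommender; the viewing histories are invented for the example, and real platforms use far more elaborate models. Notice that every recommendation is drawn only from users who already resemble you, which is exactly the mechanism that deepens the bubble.

```python
from collections import Counter

# Illustrative sketch of 'people who liked this also liked that'.
# The user histories below are invented for the example.
histories = [
    {"A", "B", "C"},
    {"A", "B"},
    {"B", "C", "D"},
    {"A", "C"},
]

def also_liked(item, histories, top_n=2):
    """Rank other items by how often they co-occur with `item`."""
    counts = Counter()
    for h in histories:
        if item in h:
            counts.update(h - {item})   # count only similar users' choices
    return [i for i, _ in counts.most_common(top_n)]

# Everything recommended after 'A' comes from people who already chose 'A'.
print(also_liked("A", histories))   # e.g. ['B', 'C']
```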

Algorithms can also create or destroy social capital more directly, by drawing on personal data to exclude people from opportunities. One babysitting app, for example, uses Facebook news feeds to rate the suitability of babysitters on the basis of whether they’ve discussed drugs or seem to have a ‘bad attitude’ online.

Studies have already highlighted examples where online transactions and interactions are leading to new forms of discrimination and disadvantage. While platforms like eBay have persisted with pseudonyms, others have encouraged users to provide information about themselves to build trust and, over time, reputation. Airbnb, for example, requires real names, and it has been shown that guests with distinctively African American names are 16 per cent less likely to be accepted than identical guests with distinctively white names, and that male guests in implied same-sex relationships were up to 30 per cent less likely than other couples to have their bookings accepted by hosts. The discrimination was not in the form of outright rejections but more insidious, with hosts failing to respond at all.

But the impacts on social capital and cohesion are not all negative. Digital markets have also been able to supplement and enhance society’s stock of social capital. Platforms such as eBay have made it possible to trust relative strangers; LinkedIn can extend the ‘weak ties’ that play a key role in getting a job; Facebook has made it easy to stay in touch with old friends; and sites like TripAdvisor create a much wider network than word-of-mouth recommendations.

A related concern is how the evolution of online environments is affecting the character of our social relationships and communication with others. Most obviously, anonymized communication reduces the reputational costs usually associated with negative behaviour and exchanges. This can lead to more aggression (such as anonymized subjects administering more pain to others) and dishonesty, though it can also leave more room for self-expression and the ability (for some) to resist problematic social norms. There is also some evidence that the characteristics people adopt in online avatars ‘leak’ into their offline behaviour, for good or bad.

A number of social media sites and online marketplaces are wrestling with this issue, and hate-speech laws pick up on some of it. But most online incivility, from hurtful comments and inaccurate reviews to bullying and shaming, occupies a grey area between the law and public acceptability. A survey commissioned by Amnesty International in Australia found that nearly half of women aged 18 to 24 had experienced online harassment, including sexist abuse, ‘trolling’ (posting deliberately offensive or provocative content), threats of sexual violence, the posting of intimate pictures online without consent and ‘doxxing’ (the sharing of identifiable details). It is clear that a self-regulatory dynamic is not enough.

One widely suspected, though not yet proven, knock-on consequence of all of this concerns mental health. New evidence suggests two potential factors at play: negative feelings arising from hurtful interactions or negative content, and time substituted away from activities that enhance well-being, such as offline socializing.

In one recent large-scale field experiment, researchers found that Facebook users who deactivated their accounts for four weeks spent less time online, reduced their political polarization (although at the expense of factual news knowledge) and, most crucially, increased their subjective well-being. This increase was ‘small but significant’, in particular on self-reported happiness, life satisfaction, depression and anxiety, and the size of the effect on overall subjective well-being was equivalent to about 25 to 40 per cent of the effect of interventions such as therapy.

Of particular concern is the impact of social media on the mental health of younger people. A recent survey of people aged 14 to 25, conducted by the Royal Society for Public Health, compared the five most popular social media platforms on a range of positive and negative mental health outcomes. It found that YouTube was the most positive, for example by raising awareness and supporting self-expression, while Instagram was the most negative, via its impact on body image and fear of missing out.

These findings are not trivial. Young adults are currently estimated to spend an average of four hours online every day, and this shift has coincided with a marked rise in mental health conditions among teenage girls, including depression, anxiety disorders and self-harm. While each individual case of mental illness is complex, there is increasing evidence that social media is at the very least associated with these issues. One study analyzed survey data from 10,904 14-year-olds in the UK to explore the relationship between social media use and depressive symptoms. It found, in particular for girls, that greater social media use was associated with online harassment, poor sleep, low self-esteem and poor body image, and that these were in turn linked to depressive symptoms.

What Can, And Should, We Do?

An effective response to these challenges must make good use of the entire regulatory toolkit and also build on it to introduce new ways of shaping markets. And not just government and regulators: industry has a key role to play. Following are a few principles to consider.


SMARTER DISCLOSURES. There are many opportunities to improve the information being provided to consumers online — from how their data is being collected, to whether algorithms are being used to make decisions, to a website’s terms and conditions (T&Cs) — and to vary when and how this information is presented. Where there is no clear regulatory power to compel companies to provide information in a particular format, these smarter disclosures can be encouraged through best-practice guides, codes of conduct and reporting requirements.

Our colleagues recently concluded a series of online experiments testing ways of applying behavioural science to improve consumer comprehension of (and engagement with) online T&Cs and privacy policies. They found that telling customers how long a privacy policy normally takes to read increased open rates by 105 per cent, and that using a question-and-answer format to present key terms, illustrated with explanatory icons, increased understanding of T&Cs by more than 30 per cent.

EXHORTATION. A lot of policy, and certainly political activity, urges citizens or firms to do something differently. The UK’s Chief Medical Officer, for example, recently urged parents to ‘leave phones outside the bedroom at bedtime’ and enjoy ‘screen-free meal times’ with their families.

Exhortation is not a bad place to start when there are genuine concerns but also doubts about direct intervention and a desire to let people make up their own minds. Yet in some areas we should be ready to move beyond general exhortation and be more targeted, in two ways: first, by urging companies rather than consumers to change their behaviour; and second, by seeking to change the behaviour of specific companies as an example to their industry.

Consumers have often been exhorted to be more ‘engaged’ in a range of markets and, in particular, to switch more often. In energy, insurance and banking markets, economists and some regulators have bemoaned the passivity of consumers: ‘If only more would be rational and switch, then they would get better deals and the market would work.’ The implication is that ‘there is nothing wrong with our model’ and that market failures are the ‘fault’ of consumers. We now know enough to be wary of this interpretation. It’s not just that consumers may have better things to do with their time than endlessly shop around. It’s also that markets are evolving to add subtle frictions, distractions and confusions that make it hard for consumers to switch — and very easy to stick.

More of our focus needs to be placed on nudging and prodding companies to change their behaviour, to make it easier for consumers to identify better deals and switch. More fundamentally, government and regulators need to be guiding the evolution of markets — and sometimes directly intervening — to make sure that good companies and practices are the ones winning market share, and that poor companies and practices are squeezed out.

CHOICE ARCHITECTURE. How overt should a choice be? When should it be made? To what extent should choices be different, or presented differently, for different groups? These are the types of questions involved in building powerful choice architecture.

One thing regulators and innovative companies can seek to do is put as much control as possible back into the hands of the individual. This is not the same as simply encouraging people to switch a product or service. Rather, it is about allowing consumers to express their preferences and to modify their online experiences in line with those preferences.

Adding in specific and obvious user controls, particularly on quasi-monopolistic platforms, could be a tool for regulators — and a market advantage for challengers. Just as a user can now easily adjust the font size on a screen to suit their eyesight, they might also be able to — or be prompted to — adjust settings that:

• control the level, type or source of advertising (or other) material they are seeing (noting the potential commercial implications of this);

• control how their data is collected and shared; or

• choose or weight the criteria by which things are ranked and presented — for example, by independent retailer, female commentator, lowest total price, most reliable news source or healthiest option.

Currently, privacy settings and other controls are often buried deep in settings menus, and a key challenge is how to prompt users to engage with and adjust them. Google Chrome’s content settings, for example, are hidden in the ‘advanced’ settings, where many users are unlikely to find them. Sometimes this is done to add deliberate friction. But sometimes it is done for good reason: to avoid filling screens with rarely used choices, or because a choice that seems important to one user may not be to another.

There may be a role for regulators here, in encouraging or requiring companies to make user settings more prominent, either during ongoing use or in the set-up phase. However, this activity is more likely to be driven by fostering the development of new intermediaries. For example, the MIT Media Lab’s Center for Civic Media is building a tool called Gobo to aggregate content from large platforms and then enable users to customize that content. It gives users a series of ‘sliders’ to curate what news they see and what is hidden: you can express a preference to see wider news sources, for example, or more female commentators. Gobo and other intermediaries could offer more control and customization to users without regulators requiring large platforms to change their own user controls.
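Mechanically, such sliders can amount to a user-weighted re-ranking of the feed. The sketch below, in Python, illustrates the idea; the attribute names, scores and weights are hypothetical and are not Gobo’s actual implementation.

```python
# Illustrative sketch of user-controlled feed ranking, loosely inspired by
# Gobo-style 'sliders'. All field names, scores and weights are hypothetical.

feed = [
    {"title": "Local council report", "source_breadth": 0.9, "female_author": 1.0},
    {"title": "Viral outrage piece",  "source_breadth": 0.2, "female_author": 0.0},
    {"title": "Market analysis",      "source_breadth": 0.6, "female_author": 1.0},
]

# The key design choice: the weights come from the user's sliders,
# not from the platform's engagement-maximizing model.
sliders = {"source_breadth": 0.7, "female_author": 0.3}

def score(item, sliders):
    """Weighted sum of the item's attributes under the user's settings."""
    return sum(weight * item.get(attr, 0.0) for attr, weight in sliders.items())

for item in sorted(feed, key=lambda i: score(i, sliders), reverse=True):
    print(f"{score(item, sliders):.2f}  {item['title']}")
```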

Another way of putting control back into the hands of users is through prompts and reminders. These have been used to great effect in the offline world: to encourage people to switch their energy provider; to attend, cancel or rebook medical appointments; to attend career appointments; to pay court fines; and to save money or repay their credit cards. Online, they can be a powerful mechanism for eliciting consumer preferences. This is especially the case if they contain a meaningful and ‘active’ choice — a requirement to respond ‘yes’ or ‘no’ to a question before proceeding — offered at a salient moment.

An illustration of the power of an active choice: researchers found that presenting consumers with an active choice on whether to pick up their prescription from the pharmacy or have it home-delivered moved the take-up rate for home delivery from six per cent to 42 per cent. It was not that they couldn’t have chosen home delivery before; it just was not the default option.
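The distinction is easy to see in code. Below is a minimal sketch in Python contrasting a passive default with an active choice; the sign-up flow and prompt wording are hypothetical, not taken from the study above.

```python
# Illustrative sketch: passive default vs. active choice.
# The flow and prompt text are hypothetical, not from the study cited above.

def passive_default() -> str:
    """Most users never confront the decision; the default quietly wins."""
    return "pharmacy_pickup"

def active_choice() -> str:
    """The flow refuses to proceed until the user states a preference."""
    answer = ""
    while answer not in {"yes", "no"}:
        answer = input("Home delivery for your prescription? (yes/no): ").strip().lower()
    return "home_delivery" if answer == "yes" else "pharmacy_pickup"
```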

In Closing

The time has come for industry and government to address the real and appropriate public concerns that exist around issues ranging from algorithmic bias, to disinformation, to the mental health of children and young people online. A sophisticated understanding of human behaviour — including active and constructive dialogue with the public — should be at the heart of designing successful policy solutions.

Our governments and regulators stand at the vibrant intersection between civil society and market functioning. How we respond to, and shape, the evolving character of the digital landscape is critical not just because it is pivotal to our economies, but because it is society and the human character itself that we are shaping.

Elisabeth Costa is a Senior Director at the UK-based Behavioural Insights Team, where she leads a portfolio of teams working on economic policy. David Halpern is Chief Executive of the Behavioural Insights Team, which he has led since its inception in 2010. He is the author of Inside the Nudge Unit: How Small Changes Can Make a Big Difference (Ebury Press, 2016).
