'Our minds can be hijacked'
The tech insiders who fear a smartphone dystopia
choices will steer what a billion people are thinking today,” he says. “I don’t know a more urgent problem than this,” Harris adds.
It all began in 2013, when he was working as a product manager at Google, and circulated a thought-provoking memo to close colleagues. It struck a chord and spread.
He explored how LinkedIn exploits a need for social reciprocity to widen its network; how YouTube and Netflix autoplay videos and next episodes, depriving users of a choice about whether or not they want to keep watching; how Snapchat created its addictive Snapstreaks feature, encouraging constant communication between users.
The techniques these companies use are not always generic: they can be algorithmically tailored to each person. An internal Facebook report leaked this year, for example, revealed that the company can identify when teens feel “insecure”, “worthless” and “need a confidence boost”. Such granular information, Harris adds, is “a perfect model of what buttons you can push in a person”. Tech companies can exploit such vulnerabilities to keep people hooked: manipulating, for example, when people receive “likes” for their posts, ensuring they arrive when an individual is likely to feel vulnerable, in need of approval, or maybe just bored. And the very same techniques can be sold to the highest bidder.
Harris believes that tech companies never deliberately set out to make their products addictive. They were responding to the incentives of an advertising economy, experimenting with techniques that might capture people’s attention, even stumbling across highly effective design by accident.
The most seductive design, Harris explains, exploits the same psychological susceptibility that makes gambling so compulsive: variable rewards. Because users cannot know in advance whether they will be rewarded, it is the possibility of disappointment itself that keeps them coming back.
It’s this that explains how the pull-to-refresh mechanism, whereby users swipe down, pause and wait to see what content appears, rapidly became one of the most addictive design features in modern technology. “Each time you’re swiping down, it’s like a slot machine,” Harris says.
James Williams does not believe talk of dystopia is far-fetched. The ex-Google strategist, who built the metrics system for the company’s global search advertising business, has had a front-row view of an industry he describes as the “largest, most standardised and most centralised form of attentional control in human history”. He embarked on years of independent research. He saw the Google memo circulated by Harris and the pair became allies.
The same forces that led tech firms to hook users with design tricks also encourage those companies to depict the world in a way that makes for compulsive, irresistible viewing. “The attention economy incentivises the design of technologies that grab our attention,” he says. “It privileges our impulses over our intentions.”
That means privileging what is sensational over what is nuanced, appealing to emotion, anger and outrage. The news media is increasingly working in service to tech companies, and must play by the rules of the attention economy to “sensationalise, bait and entertain in order to survive”. In the wake of Donald Trump’s stunning electoral victory, many were quick to question the role of so-called “fake news” on Facebook, Russian-created Twitter bots or the data-centric targeting efforts that companies such as Cambridge Analytica used to sway voters. But Williams sees those factors as symptoms of a deeper problem. The attention economy itself is set up to promote a phenomenon like Trump, who is masterful at grabbing and retaining attention, often by exploiting or creating outrage.
All of which, Williams says, is not only distorting the way we view politics but, over time, may be changing the way we think, making us less rational and more impulsive. “We’ve habituated ourselves into a perpetual cognitive style of outrage,” he says. (Courtesy The Guardian, UK)