‘We thought the internet would make a better world’
Aza Raskin invented the infinite scroll, now he estimates it wastes 200,000 lifetimes a day, writes Laurence Dodds in San Francisco
“Honestly, I feel like I had to go through depression to come to terms with what technology was doing,” says Aza Raskin. He is sitting, wide-eyed, intense, in his office in a co-working space in downtown San Francisco. “Unless you’ve felt it, unless you’ve cried over the fact that we really thought we were making the world a better place with the internet …” He pauses. “We 100pc believed that.” Humanity, he says, is living through “two super old stories”. One: “Be careful what you wish for, because you’ll get it.” And two: “Creators losing control of their creations.”
Mr Raskin should know, because he is one of those Dr Frankensteins. As the son of Silicon Valley royalty (or at least nobility), he spent years merrily building products that he believed were changing the world. They did, but not in the way he imagined. Now he is at the centre of the tech backlash as one of the public faces of the Centre for Humane Technology, a non-profit group he founded with former Google product manager Tristan Harris, which is steadily gathering influence in Silicon Valley.
Their cause has already notched up one partial victory. In 2013, Mr Harris circulated a presentation inside Google warning that its design practices were making people more anxious and more distracted. By 2016 he had left Google to campaign full time under the slogan “time well spent”. Just two years later, Mark Zuckerberg adopted that mantra as the new goal of Facebook’s news feed algorithms, and Google and Apple added new time management features to their smartphones. It was a start, but not quite the revolution that Mr Harris had hoped for.
So now he and Mr Raskin are preaching a new gospel, which they call “human downgrading”. At a jazz hall in San Francisco last month, they gathered an audience of celebrities (Joseph Gordon-Levitt), activists (Wael Ghonim, a computer engineer who helped kick-start the Arab Spring in Egypt) and tech luminaries (including Rob Goldman, Facebook’s vice president of advertising) and told them that the tech industry is culpable for the cultural equivalent of climate change. Addiction, distraction, disinformation, polarisation and radicalisation; all these “hurricanes”, they argued, have one common cause.
The cause is that we now spend large portions of our lives inside social networks, which are run by private companies for profit. That might be fine if their profits were aligned with our interests, but instead they are part of an “extractive attention economy” which makes money by capturing our time. That has created a “race to the bottom of the brainstem”, in which increasingly sophisticated AI tools are devoted to exploiting what Mr Raskin calls the “soft underbelly of our animal
minds”. They even propose that these AIs may be learning to make us more anxious and more confused, because these qualities make us better customers. And so, as Mr Harris put it, we are at a “civilisational moment”, just years away from “the end of human agency”.
That was not what Mr Raskin imagined when he first started fooling around with computers as a child. “I’ve always been a recreational weirdo,” he says. His father was Jef Raskin, a pioneer of computer interface design and father of the Apple Macintosh who skirmished with Steve Jobs over whether the new machine should display bitmap images (Raskin senior was in favour; he wanted users to be able to compose music). Aza followed in his footsteps, co-founding four companies which were acquired by the likes of Google and Mozilla, usually focusing on how to make computers grant users’ wishes.
“Always there was a vision to make the world a better place,” he says. “The assumption [was that] if you want to change the world and make it better, the best way to do that is to make an app or a start-up.” One company, Massive Health, used many of the same psychological tricks employed by Snapchat and Instagram, such as “aspirational” social groups and daily log-ins, to increase users’ exercise by 11pc. Slowly, however, he felt a “shadow” creeping up behind him. The realisation started to hit that these techniques are very powerful. “They’re agnostic about what kind of goal they’re pointed at,” he says. By 2017, he had completely lost faith, and was plunged into “Kierkegaardian despair – your past has been robbed of its meaning”.
One big regret is his invention of the infinite scroll (though others have also claimed credit). Once, long ago, internet users had to actually click “next” when they got to the bottom of a web page. Mr Raskin, inspired by the smooth scrolling of Google Maps, fixed all that, making the page load new content automatically like the magic porridge pot of lore. This feature was swiftly “weaponised” to keep us endlessly refreshing our apps like gamblers desperately tugging the lever of a slot machine, and today Mr Raskin calculates, conservatively, that his invention wastes some 200,000 human lifetimes every day.
In this account, the techopalypse is a story of blind faith and perverse incentives – of cold intelligences, whether human or artificial, spinning out of control. Mr Raskin says that tech workers were wary of an “Orwellian dystopia”, in which fear rules and information is restricted, but didn’t notice they were creating a “Huxley dystopia” in which information is too abundant to be useful and pleasure keeps us tranquillised. Founders optimised their companies to make profits, companies optimised their AI to capture users’ attention, and those AIs optimised for shock, jealousy and anxiety. All along, the humans told themselves they were giving people what they wanted, whereas really they were shaping their desires.
Just look, Mr Raskin says, at the crisis of self-esteem on social media (he and Mr Harris blame social networks for a huge rise in teenage depression, though a recent study of 12,000 British teenagers concluded that the effect is “tiny”). “If I want to keep you as a user coming back, it’s a lot of work to grab your attention every time,” he explains. “But if I could modify your identity so you do it for me, that’s way more efficient. If I could undermine your self-esteem so that you need the validation and you’re addicted to attention, that would be neat. How about if I showed you every day that people liked you better if only you looked a little different?” No wonder, he says, that 55pc of American plastic surgeons have encountered at least one patient seeking surgery so they can look better in selfies.
Or take YouTube’s recommendation engine. You might expect a nefarious human trying to keep people on YouTube as long as possible to promote content that says no other media source can be trusted. Funnily enough, that is exactly what YouTube often promotes.
Perhaps this story gives Silicon Valley too much credit. After all, Sean Parker, Facebook’s first president, tells it differently, saying that he and Mark Zuckerberg knew exactly what they were doing, and did it anyway. “I completely agree!” says Mr Raskin. “There is so much culpability, do not get me wrong.” But the industry contains many kinds of people, and what really matters is the incentives they work under. “Even if you had a different Facebook and a different YouTube, they’d still be focused on the same kinds of forces.” The human cultures that were willing to farm and slaughter animals outcompeted those that were more squeamish; so too the companies willing to “treat human beings as resources to exploit” will outcompete those that refuse.
How, then, can all of this actually be stopped? Mr Harris’s jazz club speech was mocked by some viewers for being vague on this point. He asked his audience to meditate, to be aware of their breathing, and mentioned that he would be launching a podcast. Attendees received bookmarks with confusing commandments such as “embrace our cognitive and social ergonomics”. Tom Coates, a veteran tech blogger and friend of Mr Raskin, tweeted: “I thank you for your lovely dream. But that’s all it is.”
Speaking to The Telegraph, though, Mr Raskin is more specific. He welcomes US regulators’ expected $3bn to $5bn (£2.3bn to £3.8bn) fine against Facebook for the Cambridge Analytica scandal, but says it is treating the symptom, not the cause. Instead, systemic problems require a systemic approach. One way would be to modify safe harbour rules, which protect social networks from liability for the content their users post. Instead of making them fully liable as a newspaper is, Mr Raskin suggests they should be liable for any content that they algorithmically “promote”. “That really starts to change the landscape.”
Another proposal is to give tech firms a fiduciary duty towards their users, similar to the duty of care for which The Telegraph has been campaigning. The power of AI systems, he argues, has created an inescapable asymmetry between companies and users. So, just as doctors and stockbrokers are bound to act in the best interests of their clients, on pain of being sued, so tech firms should be held to a higher standard of trust and good faith.
Most of all, Mr Raskin believes there must be cultural, even spiritual change in Silicon Valley. He has mixed feelings about the success of “time well spent”, seeing time management tools as an outsourcing of responsibility. “We’ve made this thing hyper-addictive and it’s your fault if you use it,” he jokes; it’s like cigarette companies putting a calendar on each packet that lets you check off all the days you didn’t smoke. Nevertheless, he believes it showed that tech firms can be pressured into changing their behaviour. He quotes Margaret Mead’s aphorism: “Never doubt that a small group of thoughtful, committed citizens can change the world.” In his view, tech workers are that small group, and by changing their minds he can “ship a product to a billion phones without writing a single line of code”.
This focus on Valley elites has attracted strident criticism. Mr Raskin is a technologist through and through; he speaks the industry’s language and shares its world-view. Mr Harris, too, often uses engineering jargon to describe the path forward. To industry critics who have warned about these dangers for years, that is a red flag: the last thing we need, they say, is a plan to dismantle Big Tech using the tools of Big Tech. One AI expert even called the jazz club event “the most offensive” she had ever attended.
But Mr Raskin is adamant that the weapons his generation built cannot simply be banished. “It’s not that we want to use [them] for good,” he insists. “It’s that we’ve already built these levers of power. They already exist. And now the question is: do we want to put our hands on the steering wheel intentionally, or turn our backs and let the extractive attention economy drive it?” In the long run, he argues, nations that let their people be divided and exploited will lose ground against nations such as China, which is building a massive system of behavioural modification based on very different values.
For now, his job is to provoke more tech workers to experience moral crises like the one he suffered. “Everyone mourns in a different way,” he says. “But I think unless people do go through some version of that …” He pauses, and sounds a little wistful. “Because we really did all believe that we were making the world a better place.” Recently, a woman from YouTube came up to him and said she didn’t know whether she could keep on working there. “People are just thankful that somebody is able to articulate what they’ve been feeling.”
‘If I want to keep you as a user coming back, it’s a lot of work but if I modify your identity so you do it for me, that’s way better’
‘Even if you had a different Facebook and a different YouTube, they’d still be focused on the same forces’
Aza Raskin wants more tech workers to experience moral crises like the one he suffered