‘We thought the internet would make a better world’

Aza Raskin invented the infinite scroll, now he estimates it wastes 200,000 lifetimes a day, writes Laurence Dodds in San Francisco

The Daily Telegraph – Business – Technology Intelligence

“Honestly, I feel like I had to go through depression to come to terms with what technology was doing,” says Aza Raskin. He is sitting, wide-eyed, intense, in his office in a co-working space in downtown San Francisco. “Unless you’ve felt it, unless you’ve cried over the fact that we really thought we were making the world a better place with the internet …” He pauses. “We 100pc believed that.” Humanity, he says, is living through “two super old stories”. One: “Be careful what you wish for, because you’ll get it.” And two: “Creators losing control of their creations.”

Mr Raskin should know, because he is one of those Dr Frankensteins. As the son of Silicon Valley royalty (or at least nobility), he spent years merrily building products that he believed were changing the world. They did, but not in the way he imagined. Now he is at the centre of the tech backlash as one of the public faces of the Center for Humane Technology, a non-profit group founded with former Google product manager Tristan Harris, which is steadily gathering influence in Silicon Valley.

Their cause has already notched up one partial victory. In 2013, Mr Harris circulated a presentation inside Google warning that its design practices were making people more anxious and more distracted. By 2016 he had left Google to campaign full time under the slogan “time well spent”. Just two years later, Mark Zuckerberg adopted that mantra as the new goal of Facebook’s news feed algorithms, and Google and Apple added new time management features to their smartphones. It was a start, but not quite the revolution that Mr Harris had hoped for.

So now he and Mr Raskin are preaching a new gospel, which they call “human downgrading”. At a jazz hall in San Francisco last month, they gathered an audience of celebrities (Joseph Gordon-Levitt), activists (Wael Ghonim, a computer engineer who helped kick-start the Arab Spring in Egypt) and tech luminaries (including Rob Goldman, Facebook’s vice president of advertising) and told them that the tech industry is culpable for the cultural equivalent of climate change. Addiction, distraction, disinformation, polarisation and radicalisation; all these “hurricanes”, they argued, have one common cause.

The cause is that we now spend large portions of our lives inside social networks, which are run by private companies for profit. That might be fine if their profits were aligned with our interests, but instead they are part of an “extractive attention economy” which makes money by capturing our time. That has created a “race to the bottom of the brainstem”, in which increasingly sophisticated AI tools are devoted to exploiting what Mr Raskin calls the “soft underbelly of our animal minds”. They even propose that these AIs may be learning to make us more anxious and more confused, because these qualities make us better customers. And so, as Mr Harris put it, we are at a “civilisational moment”, just years away from “the end of human agency”.

That was not what Mr Raskin imagined when he first started fooling around with computers as a child. “I’ve always been a recreational weirdo,” he says. His father was Jef Raskin, a pioneer of computer interface design and father of the Apple Macintosh, who skirmished with Steve Jobs over whether the new machine should display bitmap images (Raskin senior was in favour; he wanted users to be able to compose music). Aza followed in his footsteps, co-founding four companies which were acquired by the likes of Google and Mozilla, usually focusing on how to make computers grant users’ wishes.

“Always there was a vision to make the world a better place,” he says. “The assumption [was that] if you want to change the world and make it better, the best way to do that is to make an app or a start-up.” One company, Massive Health, used many of the same psychological tricks employed by Snapchat and Instagram, such as “aspirational” social groups and daily log-ins, to increase users’ exercise by 11pc. Slowly, however, he felt a “shadow” creeping up behind him. The realisation started to hit that these techniques are very powerful. “They’re agnostic about what kind of goal they’re pointed at,” he says. By 2017, he had completely lost faith, and was plunged into “Kierkegaardian despair – your past has been robbed of its meaning”.

One big regret is his invention of the infinite scroll (though others have also claimed credit). Once, long ago, internet users had to actually click “next” when they got to the bottom of a web page. Mr Raskin, inspired by the smooth scrolling of Google Maps, fixed all that, making the page load new content automatically, like the magic porridge pot of lore. This feature was swiftly “weaponised” to keep us endlessly refreshing our apps like gamblers desperately tugging the lever of a slot machine, and today Mr Raskin calculates, conservatively, that his invention wastes 200,000 human lifetimes every day.

In this account, the techopalypse is a story of blind faith and perverse incentives – of cold intelligences, whether human or artificial, spinning out of control. Mr Raskin says that tech workers were wary of an “Orwellian dystopia”, in which fear rules and information is restricted, but didn’t notice they were creating a “Huxley dystopia” in which information is too abundant to be useful and pleasure keeps us tranquillised. Founders optimised their companies to make profits, companies optimised their AI to capture users’ attention, and those AIs optimised for shock, jealousy and anxiety. All along, the humans told themselves they were giving people what they wanted, whereas really they were shaping their desires.

Just look, Mr Raskin says, at the crisis of self-esteem on social media (he and Harris blame social networks for a huge rise in teenage depression, though a recent study of 12,000 British teenagers concluded that the effect is “tiny”). “If I want to keep you as a user coming back, it’s a lot of work to grab your attention every time,” he explains. “But if I could modify your identity so you do it for me, that’s way more efficient. If I could undermine your self-esteem so that you need the validation and you’re addicted to attention, that would be neat. How about if I showed you every day that people liked you better if only you looked a little different?” No wonder, he says, that 55pc of American plastic surgeons have encountered at least one patient seeking surgery so they can look better in selfies.

Or take YouTube’s recommendation engine. You might expect a nefarious human trying to keep people on YouTube as long as possible to promote content that says no other media source can be trusted. Funnily enough, that is exactly what YouTube often promotes.

Perhaps this story gives Silicon Valley too much credit. After all, Sean Parker, Facebook’s first president, tells it differently, saying that he and Mark Zuckerberg knew exactly what they were doing, and did it anyway. “I completely agree!” says Mr Raskin. “There is so much culpability, do not get me wrong.” But the industry contains many kinds of people, and what really matters is the incentives they work under. “Even if you had a different Facebook and a different YouTube, they’d still be focused on the same kinds of forces.” The human cultures that were willing to farm and slaughter animals outcompeted those that were more squeamish; so too the companies willing to “treat human beings as resources to exploit” will outcompete those that refuse.

How, then, can all of this actually be stopped? Mr Harris’s jazz club speech was mocked by some viewers for being vague on this point. He asked his audience to meditate, to be aware of their breathing, and mentioned that he would be launching a podcast. Attendees received bookmarks with confusing commandments such as “embrace our cognitive and social ergonomics”. Tom Coates, a veteran tech blogger and friend of Mr Raskin, tweeted: “I thank you for your lovely dream. But that’s all it is.”

Speaking to The Telegraph, though, Mr Raskin is more specific. He welcomes US regulators’ expected $3bn to $5bn (£2.3bn to £3.8bn) fine against Facebook for the Cambridge Analytica scandal, but says it is treating the symptom, not the cause. Instead, systemic problems require a systemic approach. One way would be to modify safe harbour rules, which protect social networks from liability for the content their users post. Instead of making them fully liable, as a newspaper is, Mr Raskin suggests they should be liable for any content that they algorithmically “promote”. “That really starts to change the landscape.”

Another proposal is to give tech firms a fiduciary duty towards their users, similar to the duty of care for which The Telegraph has been campaigning. The power of AI systems, he argues, has created an inescapable asymmetry between companies and users. So, just as doctors and stockbrokers are bound to act in the best interests of their clients, on pain of being sued, so tech firms should be held to a higher standard of trust and good faith.

Most of all, Mr Raskin believes there must be cultural, even spiritual change in Silicon Valley. He has mixed feelings about the success of “time well spent”, seeing time management tools as an outsourcing of responsibility. “We’ve made this thing hyper-addictive and it’s your fault if you use it,” he jokes; it’s like cigarette companies putting a calendar on each packet that lets you check off all the days you didn’t smoke. Nevertheless, he believes it showed that tech firms can be pressured into changing their behaviour. He quotes Margaret Mead’s aphorism: “Never doubt that a small group of thoughtful, committed citizens can change the world.” In his view, tech workers are that small group, and by changing their minds he can “ship a product to a billion phones without writing a single line of code”.

This focus on Valley elites has attracted strident criticism. Mr Raskin is a technologist through and through; he speaks their language and shares their world-view. Mr Harris, too, often uses engineering jargon to describe the path forward. To industry critics who have warned about these dangers for years, that is a red flag: the last thing we need, they say, is a plan to dismantle Big Tech using the tools of Big Tech. One AI expert even called the jazz club event “the most offensive” she had ever attended.

But Mr Raskin is adamant that the weapons his generation built cannot simply be banished. “It’s not that we want to use [them] for good,” he insists. “It’s that we’ve already built these levers of power. They already exist. And now the question is: do we want to put our hands on the steering wheel intentionally, or turn our backs and let the extractive attention economy drive it?” In the long run, he argues, nations that let their people be divided and exploited will lose ground against nations such as China, which is building a massive system of behavioural modification based on very different values.

For now, his job is to provoke more tech workers to experience moral crises like the one he suffered. “Everyone mourns in a different way,” he says. “But I think unless people do go through some version of that …” He pauses, and sounds a little wistful. “Because we really did all believe that we were making the world a better place.” Recently, a woman from YouTube came up to him and said she didn’t know whether she could keep on working there. “People are just thankful that somebody is able to articulate what they’ve been feeling.”

‘If I want to keep you as a user coming back, it’s a lot of work but if I modify your identity so you do it for me, that’s way better’

‘Even if you had a different Facebook and a different YouTube, they’d still be focused on the same forces’

Aza Raskin wants more tech workers to experience moral crises like the one he suffered
