“There are very few examples where people become products – slavery, the sex trade and now social media.”

CHRISTOPHER WYLIE, who blew the whistle on Cambridge Analytica, talks data and creativity.

Talking to Kate Magee

In the early hours of 17 March 2018, the 28-year-old Christopher Wylie tweeted: “Here we go….”

Later that day, The Observer published the story of Cambridge Analytica’s misuse of Facebook data, which sent shockwaves around the world, caused millions to #DeleteFacebook and led the UK Information Commissioner’s Office to fine the site the maximum penalty for failing to protect users’ information. Six weeks after the story broke, Cambridge Analytica closed.

Wylie was the key source in the year-long investigation. In the months following publication, he has been variously described as “the millennials’ first great whistleblower”, a “fantasist charlatan” and, as he calls himself, the “Canadian vegan” who was responsible for creating a “psychological warfare tool”.

Now, as attention has shifted to this month’s US midterm elections as a test of meaningful change at social-media companies, the bright-orange-haired Wylie is sitting under Campaign’s lens. He talks about his Facebook ban, the need for regulation and his love of the John Lewis ads: “The creative is just brilliant. Any time I see those ads I think John Lewis should run [the UK]!”

He is articulate, passionate, style-conscious and, perhaps surprisingly for a data scientist, a huge advocate for human creativity. “I don’t believe in data-driven anything, it’s the most stupid phrase. Data should always serve people, people should never serve data,” he says.

He believes that poor use of data is killing good ideas. And that, unless effective regulation is enacted, society’s worship of algorithms, unchecked data capture and use, and the likely spread of AI into all parts of our lives are causing us to sleepwalk into a bleak future.

Not only are such circumstances a threat to adland – why do you need an ad to tell you about a product if an algorithm is choosing it for you? – they are a threat to human free will. “Currently, the only morality of the algorithm is to optimise you as a consumer and, in many cases, you become the product. There are very few examples in human history of industries where people themselves become products and those are scary industries – slavery and the sex trade. And now, we have social media,” Wylie says.

“The problem with that, and what makes it inherently different to selling, say, toothpaste, is that you’re selling parts of people or access to people. People have an innate moral worth. If we don’t respect that, we can create industries that do terrible things to people. We are [heading] blindly and quickly into an environment where this mentality is going to be amplified through AI everywhere. We’re humans, we should be thinking about people first.”

His words carry weight, because he’s been on the dark side. He has seen what can happen when data is used to spread misinformation, create insurgencies and prey on the worst of people’s characters.

The political battlefield

A quick refresher on the scandal, in Wylie’s words: Cambridge Analytica was a company spun out of SCL Group, a British military contractor that worked in information operations for armed forces around the world. It was conducting research on how to scale and digitise information warfare – the use of information to confuse or degrade the efficacy of an enemy.

Wylie was a 24-year-old fashion-trend-forecasting student who also worked with the Liberal Democrats on its targeting. A contact introduced him to SCL.

As director of research, Wylie’s original role was to map out how the company would take traditional information operations tactics into the online space – in particular, by profiling people who would be susceptible to certain messaging.

This morphed into the political arena. After Wylie left, the company worked on Donald Trump’s US presidential campaign and – possibly – the UK’s European Union referendum. In February 2016, Cambridge Analytica’s former chief executive, Alexander Nix, wrote in Campaign that his company had “already helped supercharge Leave.EU’s social-media campaign”. Nix has strenuously denied this since, including to MPs.

It was this shift from the battlefield to politics that made Wylie uncomfortable. “When you are working in information operations projects, where your target is a combatant, the autonomy or agency of your targets is not your primary consideration. It is fair game to deny and manipulate information, coerce and exploit any mental vulnerabilities a person has, and to bring out the very worst characteristics in that person, because they are an enemy,” he says.

“But if you port that over to a democratic system, if you run campaigns designed to undermine people’s ability to make free choices and to understand what is real and not real, you are undermining democracy and treating voters in the same way as you are treating terrorists.”

One of the reasons these techniques are so insidious is that being a target of a disinformation campaign is “usually a pleasurable experience”, because you are being fed content with which you are likely to agree. “You are being guided through something that you want to be true,” Wylie says.

To build an insurgency, he explains, you first target people who are more prone to having erratic traits, paranoia or conspiratorial thinking, and get them to “like” a group on social media. They start engaging with the content, which may or may not be true; either way “it feels good to see that information”.

When the group reaches 1,000 or 2,000 members, an event is set up in the local area. Even if only 5 per cent show up, “that’s 50 to 100 people flooding a local coffee shop”, Wylie says. This, he adds, validates their opinion because other people there are also talking about “all these things that you’ve been seeing online in the depths of your den and getting angry about”.

People then start to believe the reason it’s not shown on mainstream news channels is because “they don’t want you to know what the truth is”. As Wylie sums it up: “What started out as a fantasy online gets ported into the temporal world and becomes real to you because you see all these people around you.”

Some conservatives have argued that the Trump campaign has been unfairly criticised for its use of data, while former President Barack Obama and his digital agency Blue State Digital were lauded for their use of social-media data in his successful 2008 election campaign.

But Wylie, who has worked with Obama’s former national director of targeting, claims the two campaigns took different approaches. For example, the Obama campaign used data to identify people who were eligible to vote but had not registered.

“When the Obama campaign put out information, it was clear it was a campaign ad, and the messaging, within the realm of politics, was honest and genuine. The Obama campaign did not use coercive, manipulative disinformation as the basis of its campaign, full stop. So, it’s a false equivalency and people who say that [it is equivalent] don’t really understand what they’re talking about.”

There’s a difference between persuasion, and manipulation and coercion, he adds – and between an opinion and provable disinformation. “Data is morally neutral, in the same way that I can take a knife and hand it to a Michelin-starred chef to make the most amazing meal of your life, or I can murder someone with it. The tool is morally neutral, it’s the application that matters,” he says.

Psychographic potential

One such application was Cambridge Analytica’s use of psychographic profiling, a form of segmentation that will be familiar to marketers, although not in common use.

The company used the OCEAN model, which judges people on scales of the Big Five personality traits: openness to experiences, conscientiousness, extraversion, agreeableness and neuroticism.

Wylie believes the method could be useful in the commercial space. For example, a fashion brand that creates bold, colourful, patterned clothes might want to segment wealthy women by extraversion, because they will be more likely to buy bold items, he says.
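
In code, that kind of trait-based targeting is little more than a filter over scored profiles. The sketch below is a minimal, hypothetical illustration in Python – the customer records, scores and threshold are invented for the example, not drawn from Cambridge Analytica’s systems – of how an OCEAN-style extraversion score might drive a segment.

```python
# A minimal sketch of trait-based segmentation, assuming we already hold
# hypothetical OCEAN scores (0.0-1.0) for each customer. The names and
# the 0.7 threshold are illustrative only.
from dataclasses import dataclass

@dataclass
class Customer:
    name: str
    openness: float
    conscientiousness: float
    extraversion: float
    agreeableness: float
    neuroticism: float

def segment_by_extraversion(customers, threshold=0.7):
    """Return customers scoring at or above the extraversion threshold."""
    return [c for c in customers if c.extraversion >= threshold]

customers = [
    Customer("A", 0.6, 0.5, 0.9, 0.4, 0.3),
    Customer("B", 0.4, 0.7, 0.2, 0.6, 0.5),
]

# Highly extraverted customers might be shown the bold, colourful lines.
for c in segment_by_extraversion(customers):
    print(c.name)
```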

Sceptics say Cambridge Analytica’s approach may not be the dark magic that Wylie claims. Indeed, when speaking to Campaign in June 2017, Nix uncharacteristically played down the method, claiming the company used “pretty bland data in a pretty enterprising way”.

But Wylie argues that people underestimate what algorithms allow you to do in profiling. “I can take pieces of information about you that seem innocuous, but what I’m able to do with an algorithm is find patterns that correlate to underlying psychological profiles,” he explains.

“I can ask whether you listen to Justin Bieber, and you won’t feel like I’m invading your privacy. You aren’t necessarily aware that when you tell me what music you listen to or what TV shows you watch, you are telling me some of your deepest and most personal attributes.”
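
The mechanism Wylie is describing is, at its simplest, supervised learning: given a training set of people whose trait scores are known (from a personality quiz, say), a model learns which innocuous signals correlate with which traits, then applies that mapping to people who disclosed nothing. Below is a minimal sketch under that assumption, using scikit-learn’s logistic regression; the features and labels are invented for illustration.

```python
# Minimal sketch: predicting a binary "high extraversion" label from
# innocuous media preferences. The data is invented; in practice a
# labelled training set (e.g. personality-quiz scores) would be needed.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: likes_justin_bieber, watches_reality_tv, listens_to_jazz
X = np.array([
    [1, 1, 0],
    [1, 0, 0],
    [0, 1, 1],
    [0, 0, 1],
])
y = np.array([1, 1, 0, 0])  # 1 = self-reported high extraversion

model = LogisticRegression().fit(X, y)

# A new user's "likes" yield a trait estimate they never disclosed.
new_user = np.array([[1, 1, 1]])
print(model.predict_proba(new_user)[0, 1])  # P(high extraversion)
```

The point is not the model’s sophistication but the inference step: the new user volunteered only media preferences, yet comes away with an estimated personality attribute.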

This is where matters stray into the question of ethics. Wylie believes that as long as the communication you are sending out is clear, not coercive or manipulative, it’s fine, but it all depends on context. “If you are a beauty company and you use facets of neuroticism – which Cambridge Analytica did – and you find a segment of young women or men who are more prone to body dysmorphia, and one of the proactive actions they take is to buy more skin cream, you are exploiting something that is unhealthy for that person and doing damage,” he says. “The ethics of using psychometric data really depend on whether it is proportional to the benefit and utility that the customer is getting.”

Creativity trumps data

This also means using caution over how much data is being amassed. Adland must take responsibility for its insatiable desire for data, and the pressure it applies to social-media companies to provide it. Its usual defence is that the more data it has, the better, because consumers don’t object to personalised ads.

Wylie disagrees. If, he says, a crossword app uses data to personalise your experience, but as well as basic information also harvests your religion, sexual orientation, text messages and photos, this is disproportionate to the value it provides.

“You can create something that’s relevant without that amount of information, and you can have a lot of information and create something that’s not relevant.”

Wylie argues that obsession with data can strangle creativity. “Data informs you, it doesn’t tell you what to do. You [as a human] should always understand what you should do,” he says.

“If you work in a creative team, you shouldn’t have to do something because an algorithm said so. What makes us different from animals? It’s the fact that we’ve got culture. We paint things, we listen to music, we watch TV, we wear cool clothes. These are all of the things that literally make us human and make life worth living. So, the idea of eroding that because some database or neural net said so is just rubbish. The computer can’t imagine a situation that is different from what it has observed.”

Wylie contends that data should be used to help creatives by finding niche audiences, for example.

He also believes brand-building should sit alongside targeting for best effect. “There is a role for creating universal narratives for everyone to understand even if they’re not in your market.”

The son of a doctor and a psychiatrist, Wylie grew up in British Columbia. At school he was bullied and diagnosed with ADHD and dyslexia. He left at 16 without qualifications. But by the age of 20 he had worked for the leader of the opposition in Canada, taught himself to code and moved to London to study law at LSE. He was working on a PhD in fashion-trend forecasting when he encountered psychographic profiling research. But his curiosity has turned sour.

A bleak future?

Wylie is concerned that tech developments – such as the rise of AI – could fundamentally damage society. Google Home has recently launched an option to give only good news to its users, for instance.

“What they are actually starting to do is warp that person’s perspective from the very beginning of their day,” Wylie points out. Once you have AI in every part of your life, it will be everywhere making decisions about you and for you.

Wylie believes we could get to the point where AI replaces creatives in 20 years. “If your definition of creativity is the generation of novel outputs, then you can have ‘creative algorithms’,” he says. “This is why as a community we need to come up with principles of how to engage with technology. Just because we can do something doesn’t mean we should.”

He adds: “If we replace everyone with robots, what’s the point of humanity, then? Shall we all just sit in those floating chairs they have in the film WALL-E and be fed through a tube and entertained through AI-generated TV shows that are hyper-personalised to my profile? What a terrible future that would be, right? We shouldn’t be endeavouring to replace human creativity with artificial creativity.”

A glimmer of hope

Aside from reprioritising creativity over data, Wylie is adamant that regulation is the answer to ending immoral practices on the internet.

“As a society we regulate things we come into contact with that could cause us harm, such as air travel, doctors and electricity. Currently, software technology, social media and online advertising is the Wild West,” he says.

“You eat food four or five times a day, you check your phone on average 150 times a day. People sleep with their phones more than they sleep with people,” he adds. “The fact that people are engaging so much more now with advertising and online content warrants a discussion on whether there should be statutory rules that are enforceable as to the conduct and behaviour both of social-media and tech platforms and the advertisers that use them.”

Wylie argues that regulation is not a bad thing for commercial viability. After all, seat belts and air bags haven’t stopped people buying cars. It will, he says, also help create consumer trust and confidence in the long run and prevent a backlash.

It will also create a level playing field, where those that behave ethically are not at a disadvantage if competitors do not adhere to the same principles.

He adds: “A lot of tech companies have their backs up. They’re like a dog in the corner. They’re going through these existential conversations like ‘OMG what’s happening?’” He goes on to argue that the sector relies on everyone behaving well to maintain itself. “If I were them, I’d be talking about how we can help each other do better.”

The barriers to regulation include the international nature of tech companies, the concern that governments are too far behind the tech companies and that consumers don’t really care about privacy.

Wylie rebuts each of these. There are common rules for other international industries – such as regulating airport codes, aeroplanes taking off and landing in different countries and sending post around the world. He describes the suggestion that MPs don’t understand the industry well enough to act meaningfully as a false argument.

He continues: “Tell me what congressman or MP understands how aeroplanes fly or cancer medicines work, and what is safe and not safe? Or what is the appropriate level of pesticides to use on farms? They don’t. These are all highly technical, highly complicated, ever-moving industries, and before they were regulated they were using the same arguments.”

Clashes with Facebook

Wylie is opposed to self-regulation, because industries won’t become consumer champions – they are, he says, too conflicted. “Facebook has known about what Cambridge Analytica was up to from the very beginning of those projects,” Wylie claims. “They were notified, they authorised the applications, they were given the terms and conditions of the app that said explicitly what it was doing. They hired people who worked on building the app. I had legal correspondence with their lawyers where they acknowledged it happened as far back as 2016.”

He wants to create a set of enduring principles that are handed over to a technically competent regulator to enforce. “Currently, the industry is not responding to some pretty fundamental things that have happened on their watch. So I think it is the right place for government to step in,” he adds. Facebook in particular, he argues, is “the most obstinate and belligerent in recognising the harm that has been done and actually doing something about it”.

In words that might resonate with marketers burned by Facebook’s measurement issues, he says the company needs to be more proactive about fixing issues rather than “requiring a hell of a lot of public pressure before it does anything”.

He adds: “Facebook needs to acknowledge it has an institutional cultural problem it needs to address. I really hope it can get to a place where it will actively fix itself.”

Since our interview with Wylie, Facebook has hired the former UK deputy prime minister Nick Clegg as its communications and global affairs head. Wylie has accused Clegg of “selling out”.

But what responsibility do consumers have in all of this? Absolutely none, according to Wylie.

“What responsibility does somebody have walking into a dangerous building, or when prescribed medicine by their doctor? Do they inspect the engineering when they step on to a plane? They don’t, because they shouldn’t. It’s not the role of the consumer to make sure that they’re safe, it’s the role of industry who’s profiting from them.

“I don’t want to just attack Facebook. There’s a real problem within Silicon Valley,” he says. He explains that tech companies reward friendly “white hat” hackers with money for bringing system vulnerabilities to their attention. “But when a journalist, whistleblower or civil society does it, and they do it in public, there are threats, legal threats.”

At this point in the interview Wylie becomes angry: “[Facebook] sent me threatening letters. Then they demanded all of my personal devices because they think they are the police for themselves. I said ‘no, I can’t because I’ve handed over the evidence to the police, who are the lawful and rightful authority to investigate Facebook, not you.’

“Because I refused to give in to their legal threats and hand over my devices and information that would interfere with a police investigation, that’s why they banned me.”

He says an additional ban by Instagram “shows the disproportionate market power that [Facebook] can exert. The fact that a whole different company can ban someone with no due process for something that doesn’t involve it at all.”

Although Facebook declined to comment for this piece, it referred us to previous statements it has made on the issue. In these, it says it banned Wylie because, like Cambridge Analytica, he received a copy of the quiz’s Facebook data, which was a breach of Facebook’s terms and conditions.

Despite this, Wylie insists he is not against social media. “I don’t believe that people should have to delete Facebook. I’m not a supporter of #DeleteFacebook because it’s like saying if you don’t want to get electrocuted, get rid of electricity. It’s stupid. No, demand better standards for your electricity so you don’t get electrocuted,” he says.

“Social media is now an essential part of most people’s lives. You can’t apply for most jobs now without LinkedIn. You can’t communicate practically with friends if you don’t have a form of social media. What job can you get if you say to an employer: ‘I’m really great but because I want to enforce my privacy standards and maintain my mental health, I refuse to use anything that touches Google’s services’?

“So the solution is not to delete these platforms, or attack them and make them the enemy, it’s to make sure they are doing their job to make a safe environment for people.”
