Campaign Middle East

“There are very few examples where people become products – slavery, the sex trade and now social media.”

CHRISTOPHER WYLIE, who blew the whistle on Cambridge Analytica, talks data and creativity.

Talking to Kate Magee

In the early hours of 17 March 2018, the 28-year-old Christopher Wylie tweeted: “Here we go….”

Later that day, The Observer published the story of Cambridge Analytica’s misuse of Facebook data, which sent shockwaves around the world, caused millions to #DeleteFacebook and led the UK Information Commissioner’s Office to fine the site the maximum penalty for failing to protect users’ information. Six weeks after the story broke, Cambridge Analytica closed.

Wylie was the key source in the year-long investigation. In the months following publication, he has been variously described as “the millennials’ first great whistleblower”, a “fantasist charlatan” and, as he calls himself, the “Canadian vegan” who was responsible for creating a “psychological warfare tool”.

Now, as attention has shifted to this month’s US midterm elections as a test of meaningful change at social-media companies, the bright-orange-haired Wylie is sitting under Campaign’s lens. He talks about his Facebook ban, the need for regulation and his love of the John Lewis ads: “The creative is just brilliant. Any time I see those ads I think John Lewis should run [the UK]!”

He is articulate, passionate, style-conscious and, perhaps surprisingly for a data scientist, a huge advocate for human creativity. “I don’t believe in data-driven anything, it’s the most stupid phrase. Data should always serve people, people should never serve data,” he says.

He believes that poor use of data is killing good ideas. And that, unless effective regulation is enacted, society’s worship of algorithms, unchecked data capture and use, and the likely spread of AI to all parts of our lives is causing us to sleepwalk into a bleak future.

Not only are such circumstances a threat to adland – why do you need an ad to tell you about a product if an algorithm is choosing it for you? – they are a threat to human free will. “Currently, the only morality of the algorithm is to optimise you as a consumer and, in many cases, you become the product. There are very few examples in human history of industries where people themselves become products and those are scary industries – slavery and the sex trade. And now, we have social media,” Wylie says.

“The problem with that, and what makes it inherently different to selling, say, toothpaste, is that you’re selling parts of people or access to people. People have an innate moral worth. If we don’t respect that, we can create industries that do terrible things to people. We are [heading] blindly and quickly into an environment where this mentality is going to be amplified through AI everywhere. We’re humans, we should be thinking about people first.”

His words carry weight, because he’s been on the dark side. He has seen what can happen when data is used to spread misinformation, create insurgencies and prey on the worst of people’s characters.

The political battlefield

A quick refresher on the scandal, in Wylie’s words: Cambridge Analytica was a company spun out of SCL Group, a British military contractor that worked in information operations for armed forces around the world. It was conducting research on how to scale and digitise information warfare – the use of information to confuse or degrade the efficacy of an enemy.

Wylie was a 24-year-old fashion-trend-forecasting student who also worked with the Liberal Democrats on its targeting. A contact introduced him to SCL.

As director of research, Wylie’s original role was to map out how the company would take traditional information operations tactics into the online space – in particular, by profiling people who would be susceptible to certain messaging.

This morphed into the political arena. After Wylie left, the company worked on Donald Trump’s US presidential campaign and – possibly – the UK’s European Union referendum. In February 2016, Cambridge Analytica’s former chief executive, Alexander Nix, wrote in Campaign that his company had “already helped supercharge Leave.EU’s social-media campaign”. Nix has strenuously denied this since, including to MPs.

It was this shift from the battlefield to politics that made Wylie uncomfortable. “When you are working in information operations projects, where your target is a combatant, the autonomy or agency of your targets is not your primary consideration. It is fair game to deny and manipulate information, coerce and exploit any mental vulnerabilities a person has, and to bring out the very worst characteristics in that person, because they are an enemy,” he says.

“But if you port that over to a democratic system, if you run campaigns designed to undermine people’s ability to make free choices and to understand what is real and not real, you are undermining democracy and treating voters in the same way as you are treating terrorists.”

One of the reasons these techniques are so insidious is that being a target of a disinformation campaign is “usually a pleasurable experience”, because you are being fed content with which you are likely to agree. “You are being guided through something that you want to be true,” Wylie says.

To build an insurgency, he explains, you first target people who are more prone to having erratic traits, paranoia or conspiratorial thinking, and get them to “like” a group on social media. They start engaging with the content, which may or may not be true; either way “it feels good to see that information”.

When the group reaches 1,000 or 2,000 members, an event is set up in the local area. Even if only 5 per cent show up, “that’s 50 to 100 people flooding a local coffee shop”, Wylie says. This, he adds, validates their opinion because other people there are also talking about “all these things that you’ve been seeing online in the depths of your den and getting angry about”.

People then start to believe the reason it’s not shown on mainstream news channels is because “they don’t want you to know what the truth is”. As Wylie sums it up: “What started out as a fantasy online gets ported into the temporal world and becomes real to you because you see all these people around you.”

Some conservatives have argued that the Trump campaign has been unfairly criticised for its use of data, while former President Barack Obama and his digital agency Blue State Digital were lauded for their use of social-media data in his successful 2008 election campaign.

But Wylie, who has worked with Obama’s former national director of targeting, claims the two campaigns took different approaches. For example, the Obama campaign used data to identify people who were eligible to vote but had not registered.

“When the Obama campaign put out information, it was clear it was a campaign ad, and the messaging, within the realm of politics, was honest and genuine. The Obama campaign did not use coercive, manipulative disinformation as the basis of its campaign, full stop. So, it’s a false equivalency and people who say that [it is equivalent] don’t really understand what they’re talking about.”

There’s a difference between persuasion on the one hand, and manipulation and coercion on the other, he adds – and between an opinion and provable disinformation. “Data is morally neutral, in the same way that I can take a knife and hand it to a Michelin-starred chef to make the most amazing meal of your life, or I can murder someone with it. The tool is morally neutral, it’s the application that matters,” he says.

Psychographic potential

One such application was Cambridge Analytica’s use of psychographic profiling, a form of segmentation that will be familiar to marketers, although not in common use.

The company used the OCEAN model, which judges people on scales of the Big Five personality traits: openness to experiences, conscientiousness, extraversion, agreeableness and neuroticism.

Wylie believes the method could be useful in the commercial space. For example, a fashion brand that creates bold, colourful, patterned clothes might want to segment wealthy women by extraversion, because they will be more likely to buy bold items, he says.
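To make the idea concrete, here is a minimal sketch of that kind of trait-based segmentation in Python. It is purely illustrative, not Cambridge Analytica’s method: the customer records, the 0-1 trait scale and the thresholds are all invented for the example.

```python
# Illustrative sketch of OCEAN-style segmentation.
# All data, field names and thresholds below are hypothetical.
from dataclasses import dataclass


@dataclass
class Customer:
    name: str
    income: float          # annual income, a crude proxy for "wealthy"
    extraversion: float    # Big Five trait score, assumed scaled 0.0-1.0


customers = [
    Customer("A", 120_000, 0.82),
    Customer("B", 45_000, 0.91),
    Customer("C", 150_000, 0.35),
    Customer("D", 95_000, 0.77),
]

# The segment Wylie describes: wealthy and highly extraverted shoppers,
# assumed more likely to buy bold, colourful items.
WEALTH_FLOOR = 90_000
EXTRAVERSION_FLOOR = 0.7

segment = [
    c for c in customers
    if c.income >= WEALTH_FLOOR and c.extraversion >= EXTRAVERSION_FLOOR
]

for c in segment:
    print(f"Target {c.name}: extraversion {c.extraversion:.2f}")
```

The filter itself is trivial; the contentious part is where the trait score comes from, which is what the exchange below turns to.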

Sceptics say Cambridge Analytica’s approach may not be the dark magic that Wylie claims. Indeed, when speaking to Campaign in June 2017, Nix uncharacteristically played down the method, claiming the company used “pretty bland data in a pretty enterprising way”.

But Wylie argues that people underestimate what algorithms allow you to do in profiling. “I can take pieces of information about you that seem innocuous, but what I’m able to do with an algorithm is find patterns that correlate to underlying psychological profiles,” he explains.

“I can ask whether you listen to Justin Bieber, and you won’t feel like I’m invading your privacy. You aren’t necessarily aware that when you tell me what music you listen to or what TV shows you watch, you are telling me some of your deepest and most personal attributes.”
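As a sketch of the mechanism Wylie describes – and emphatically not his or Cambridge Analytica’s actual model – here is how a toy classifier can turn such innocuous signals into a trait estimate. Everything below is an assumption for illustration: the dataset is synthetic, the feature names are invented, and scikit-learn stands in for whatever tooling was really used; the real research of this kind paired millions of Facebook “likes” with personality-survey results.

```python
# Illustrative sketch: inferring a personality trait from innocuous
# media preferences. All data here is synthetic and invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [likes_justin_bieber, watches_reality_tv, listens_to_jazz]
X = np.array([
    [1, 1, 0],
    [1, 0, 0],
    [0, 1, 1],
    [0, 0, 1],
    [1, 1, 1],
    [0, 0, 0],
])
# Label: 1 = scored high on extraversion in a survey, 0 = scored low.
y = np.array([1, 1, 0, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# A new person who has only told us what they listen to and watch:
new_person = np.array([[1, 0, 1]])
prob_extravert = model.predict_proba(new_person)[0, 1]
print(f"Estimated probability of high extraversion: {prob_extravert:.2f}")
```

The toy example captures the asymmetry Wylie is pointing at: the inputs feel trivial to disclose, while the output is a psychological attribute the person never chose to share.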

This is where matters stray into the question of ethics. Wylie believes that as long as the communication you are sending out is clear, not coercive or manipulative, it’s fine, but it all depends on context. “If you are a beauty company and you use facets of neuroticism – which Cambridge Analytica did – and you find a segment of young women or men who are more prone to body dysmorphia, and one of the proactive actions they take is to buy more skin cream, you are exploiting something that is unhealthy for that person and doing damage,” he says. “The ethics of using psychometric data really depend on whether it is proportional to the benefit and utility that the customer is getting.”

Creativity trumps data

This also means using caution over how much data is being amassed. Adland must take responsibility for its insatiable desire for data, and the pressure it applies to social-media companies to provide it. Its usual defence is that the more data it has, the better, because consumers don’t object to personalised ads.

Wylie disagrees. If, he says, a crossword app uses data to personalise your experience, but as well as basic information also harvests your religion, sexual orientation, text messages and photos, this is disproportionate to the value it provides.

“You can create something that’s relevant without that amount of information, and you can have a lot of information and create something that’s not relevant.”

Wylie argues that obsession with data can strangle creativity. “Data informs you, it doesn’t tell you what to do. You [as a human] should always understand what you should do,” he says.

“If you work in a creative team, you shouldn’t have to do something because an algorithm said so. What makes us different from animals? It’s the fact that we’ve got culture. We paint things, we listen to music, we watch TV, we wear cool clothes. These are all of the things that literally make us human and make life worth living. So, the idea of eroding that because some database or neural net said so is just rubbish. The computer can’t imagine a situation that is different from what it has observed.”

Wylie contends that data should be used to help creatives by finding niche audiences, for example.

He also believes brand-building should sit alongside targeting for best effect. “There is a role for creating universal narratives for everyone to understand even if they’re not in your market.”

The son of a doctor and a psychiatrist, Wylie grew up in British Columbia. At school he was bullied and diagnosed with ADHD and dyslexia. He left at 16 without qualifications. But by the age of 20 he had worked for the leader of the opposition in Canada, taught himself to code and moved to London to study law at LSE. He was working on a PhD in fashion-trend forecasting when he encountered psychographic profiling research. But his curiosity has turned sour.

A bleak future?

Wylie is concerned that tech developments – such as the rise of AI – could fundamentally damage society. Google Home has recently launched an option to give only good news to its users, for instance.

“What they are actually starting to do is warp that person’s perspective from the very beginning of their day,” Wylie points out. Once you have AI in every part of your life, it will be everywhere, making decisions about you and for you.

Wylie believes we could get to the point where AI replaces creatives in 20 years. “If your definition of creativity is the generation of novel outputs, then you can have ‘creative algorithms’,” he says. “This is why as a community we need to come up with principles of how to engage with technology. Just because we can do something doesn’t mean we should.”

He adds: “If we replace everyone with robots, what’s the point of humanity, then? Shall we all just sit in those floating chairs they have in the film WALL-E and be fed through a tube and entertained through AI-generated TV shows that are hyper-personalised to my profile? What a terrible future that would be, right? We shouldn’t be endeavouring to replace human creativity with artificial creativity.”

A glimmer of hope

Aside from reprioritising creativity over data, Wylie is adamant that regulation is the answer to end immoral practices on the internet.

“As a society we regulate things we come into contact with that could cause us harm, such as air travel, doctors and electricity. Currently, software technology, social media and online advertising is the Wild West,” he says.

“You eat food four or five times a day, you check your phone on average 150 times a day. People sleep with their phones more than they sleep with people,” he adds. “The fact that people are engaging so much more now with advertising and online content warrants a discussion on whether there should be statutory rules that are enforceable as to the conduct and behaviour both of social-media and tech platforms and the advertisers that use them.”

Wylie argues that regulation is not a bad thing for commercial viability. After all, seat belts and air bags haven’t stopped people buying cars. It will, he says, also help create consumer trust and confidence in the long run and prevent a backlash.

It will also create a level playing field, where those that behave ethically are not at a disadvantage if competitors do not adhere to the same principles.

He adds: “A lot of tech companies have their backs up. They’re like a dog in the corner. They’re going through these existential conversations like ‘OMG what’s happening?’” He goes on to argue that the sector relies on everyone behaving well to maintain itself. “If I were them, I’d be talking about how we can help each other do better.”

The barriers to regulation include the international nature of tech companies, the concern that governments are too far behind the tech companies and that consumers don’t really care about privacy.

Wylie rebuts each of these. There are common rules for other international industries – such as regulating airport codes, aeroplanes taking off and landing in different countries and sending post around the world. He describes the suggestion that MPs don’t understand the industry well enough to act meaningfully as a false argument.

He continues: “Tell me what congressman or MP understands how aeroplanes fly or cancer medicines work, and what is safe and not safe? Or what is the appropriate level of pesticides to use on farms? They don’t. These are all highly technical, highly complicated, ever-moving industries, and before they were regulated they were using the same arguments.”

Clashes with Facebook

Wylie is opposed to self-regulation, because industries won’t become consumer champions – they are, he says, too conflicted. “Facebook has known about what Cambridge Analytica was up to from the very beginning of those projects,” Wylie claims. “They were notified, they authorised the applications, they were given the terms and conditions of the app that said explicitly what it was doing. They hired people who worked on building the app. I had legal correspondence with their lawyers where they acknowledged it happened as far back as 2016.”

He wants to create a set of enduring principles that are handed over to a technically competent regulator to enforce. “Currently, the industry is not responding to some pretty fundamental things that have happened on their watch. So I think it is the right place for government to step in,” he adds. Facebook in particular, he argues, is “the most obstinate and belligerent in recognising the harm that has been done and actually doing something about it”.

In words that might resonate with marketers burned by Facebook’s measurement issues, he says the company needs to be more proactive about fixing issues rather than “requiring a hell of a lot of public pressure before it does anything”.

He adds: “Facebook needs to acknowledge it has an institutional cultural problem it needs to address. I really hope it can get to a place where it will actively fix itself.”

Since our interview with Wylie, Facebook has hired the former UK deputy prime minister Nick Clegg as its communications and global affairs head. Wylie has accused Clegg of “selling out”.

But what responsibility do consumers have in all of this? Absolutely none, according to Wylie.

“What responsibility does somebody have walking into a dangerous building, or when prescribed medicine by their doctor? Do they inspect the engineering when they step on to a plane? They don’t, because they shouldn’t. It’s not the role of the consumer to make sure that they’re safe, it’s the role of the industry that’s profiting from them.

“I don’t want to just attack Facebook. There’s a real problem within Silicon Valley,” he says. He explains that tech companies reward friendly “white hat” hackers with money for bringing system vulnerabilities to their attention. “But when a journalist, whistleblower or civil society does it, and they do it in public, there are threats, legal threats.”

At this point in the interview Wylie becomes angry: “[Facebook] sent me threatening letters. Then they demanded all of my personal devices because they think they are the police for themselves. I said ‘no, I can’t because I’ve handed over the evidence to the police, who are the lawful and rightful authority to investigate Facebook, not you.’

“Because I refused to give in to their legal threats and hand over my devices and information that would interfere with a police investigation, that’s why they banned me.”

He says an additional ban by Instagram “shows the disproportionate market power that [Facebook] can exert. The fact that a whole different company can ban someone with no due process for something that doesn’t involve it at all.”

Although Facebook declined to comment for this piece, it referred us to previous statements it has made on the issue. In these, it says it banned Wylie because, like Cambridge Analytica, he received a copy of the quiz’s Facebook data, which was a breach of Facebook’s terms and conditions.

Despite this, Wylie insists he is not against social media. “I don’t believe that people should have to delete Facebook. I’m not a supporter of #DeleteFacebook because it’s like saying if you don’t want to get electrocuted, get rid of electricity. It’s stupid. No, demand better standards for your electricity so you don’t get electrocuted,” he says.

“Social media is now an essential part of most people’s lives. You can’t apply for most jobs now without LinkedIn. You can’t communicate practically with friends if you don’t have a form of social media. What job can you get if you say to an employer: ‘I’m really great but because I want to enforce my privacy standards and maintain my mental health, I refuse to use anything that touches Google’s services’?

“So the solution is not to delete these platforms, or attack them and make them the enemy, it’s to make sure they are doing their job to make a safe environment for people.”
