The Sunday Telegraph

Facebook was warned of data risks 7 years ago

Data breaches aside, internet technology can be a rational and powerful force for good in the world

- By James Titcomb TECHNOLOGY EDITOR

FACEBOOK was warned that its users were at risk two years before the data of 50 million people was accessed by a controversial political firm, The Sunday Telegraph can disclose.

In 2011, the social media giant’s European regulator cautioned that it was failing to ensure that data was protected when passed to third-party software developers.

Facebook responded with minor changes to the way users were notified about how apps were gathering data, but did not fully block the practice for another four years. The discovery of the warning raises new questions about why Facebook did not act sooner to protect users’ private information.

In 2013, two years after the warning, Aleksandr Kogan, a Cambridge professor, used a personality quiz on Facebook to obtain data from 50 million users without their knowledge. Prof Kogan then allegedly passed the data to Cambridge Analytica the following year, in violation of Facebook’s rules and without the company knowing.

The Irish Data Protection Commissioner (DPC) initially warned Facebook of this oversight in December 2011, claiming it could not “assure users of the security of their data”.

Facebook has its European base in Ireland, making its Dublin subsidiary responsible for the handling of all users’ data in Europe. The DPC is its most powerful regulator outside the US.

The latest disclosure comes after a whistleblower raised concerns that millions more users may have had their data compromised. Sandy Parakilas, a former Facebook manager, last week said the company had no way of knowing if data was misused once it had been accessed by third-party apps and would take the apps “at their word”. He said “personal identifiable data was basically allowed to leave Facebook”.

In the audit of Facebook Ireland’s data protection practices, the DPC said: “We do not consider that reliance on developer adherence to best practice or stated policy in certain cases is sufficient to ensure security of user data.

“We do note … the proactive monitoring and action against apps which breach platform policies. However, this is not considered sufficient to assure the security of data once users have third-party apps enabled.”

Amid growing concerns that the data obtained by Cambridge Analytica may simply be the tip of the iceberg, Facebook announced an investigation into whether others might have used the same techniques to obtain data.

Mark Zuckerberg, the Facebook chief executive, said the company would audit any app that displayed “suspicious activity” to see if information had been stolen.

He has been called to give evidence to the Digital, Culture, Media and Sport select committee and has been given a deadline of tomorrow to respond.

A Facebook spokesman said: “Third-party apps built on Facebook was the subject of detailed examination … by the Irish Data Protection Commissioner in 2011-2012. In September 2012, they acknowledged the progress we had made and in 2014 we announced we were changing the entire platform.”

If you’re reading this article online, take a look at the advertisements. The chances are that they have been aimed specifically at you. You have painted a remarkably detailed self-portrait in cyberspace. The sites you visit, the terms you search for, the wares you browse – all tell the advertiser what sort of things you might be interested in buying.

Most of us are vaguely aware that this goes on, and we don’t much mind. Sometimes, the effect is comical: when I was writing about Iceland’s recent general election, for example, I kept being told about special offers from the supermarket of that name. But I like getting advertisements for, say, Shakespeare’s plays. Targeting generally works.

In theory, none of our personal information should be made public. Algorithms aim advertisements at categories of people, not at identified individuals. The same principle applies to political targeting, which is, after all, simply one more kind of advertising.

The algorithms might work out that, say, members of wine clubs are more likely than average to be Conservatives. But such metadata should not result in anyone’s online history becoming public.

Over the past week, a row about political targeting has blown up, conflating two different things.

First, there was a data breach, involving the misappropriation of information from Facebook. Secondly, there is anger among some journalists at the data-led techniques used by Donald Trump’s campaign. Columnists who resent Trump’s victory are mashing together the data breach and the wider use of algorithms as a campaigning tool, but they are two completely separate issues.

The abuse of Facebook data was at the very least a violation of trust, and may also be a violation of the law. The company accepts that it was in error and is putting new safeguards in place.

What is more interesting, though, and more upsetting to some, is the non-scandalous aspect of the whole affair, namely the way in which the internet is transforming the nature of political campaigns.

For a long time, political activists have known that certain traits cluster together, and that people’s party preferences can be inferred from apparently unrelated characteristics. For example, 30 years of canvassing have taught me that there is some correlation between owning a big dog and voting Tory. It’s not a total correlation, obviously, but when I hear a deep-throated bark on the other side of the door, I expect a positive reaction to my blue rosette.

By the same token, caravan ownership tends to be a Lib Dem signifier. Again, it’s not a hard-and-fast law. Rather, it’s what we might pretentiously call a heuristic: an observed rule-of-thumb that holds often enough to be useful.

What I have learnt over three decades of knocking on doors, an algorithm can do instantaneously and with vastly greater accuracy, drawing, as it can, on vastly more data. This is an unsettling thought, even if we assume that our privacy is secure.

After all, we like to imagine that we cast our votes rationally – that we look at the people and policies on offer, weigh our options and come down in favour of what we see as the national interest.

It is rather alarming to learn that a computer can look at what we buy, what vocabulary we use, where we go on holiday and then predict, with remarkable accuracy, where we will cross our ballots.

The Israeli historian Yuval Noah Harari goes further. He observes that, as well as telling us how we will vote, algorithms might tell us how we should vote.

Our conscious minds are subject to all manner of distortions. For example, we tend to give undue weight to recent events, and so might overlook four years of disappointment with a government as long as it pleases us in the run-up to polling day. We are irrationally change-averse, setting too much store by incumbency. When we have colds, we tend to become slightly more conservative. And so on.

A computer could factor all these things in and say, in effect, “I have known you since you first went online. I know your interests and your conscious and unconscious desires. I know when you are prone to making mistakes. For example, I know that you over-rate physical looks, and that this makes you give the leader of Party X the benefit of the doubt. But – though I also know that you don’t want to hear this right now – you should vote for Party Y. Remember that I have also known both leaders all their lives, and all their candidates, and that I am factoring this knowledge into my advice.”

Many people find this development far more disquieting than the improper harvesting of data. It strikes at our very notion of what it means to be human.

If machines can judge our political interests better than we can, where else can they outperform us? Might algorithms take over from our doctors, our lawyers, our teachers, our composers?

The answer, in each case, is that they are already doing so.

Remember, though, that all these developments tend to make our lives longer, healthier and happier. If they didn’t, they wouldn’t happen.

All change is unsettling, and the scurry of technological advance – what Robert Colvile calls the Great Acceleration – is especially so. And yet, by Heaven, what an exhilarating time to be alive.
