New Straits Times

Think you’re discreet online? Think again

Thanks to data inference technology, companies know more about you than you disclose, warns Zeynep Tufekci

PEOPLE concerned about privacy often try to be “careful” online. They stay off social media, or if they’re on it, they post cautiously. They don’t share information about their religious beliefs, personal life, health status or political views. By doing so, they think they are protecting their privacy.

But they are wrong. Because of technological advances and the sheer amount of data now available about billions of other people, discretion no longer suffices to protect your privacy. Computer algorithms and network analyses can now infer, with a sufficiently high degree of accuracy, a wide range of things about you that you may have never disclosed, including your moods, your political beliefs, your sexual orientation and your health.

There is no longer such a thing as individually “opting out” of our privacy-compromised world.

The basic idea of data inference is not new. Magazine subscriber lists have long been purchased by retailers, charities and politicians because they provide useful hints about people’s views. A subscriber to The Wall Street Journal is more likely to be a Republican voter than is a subscriber to The Nation, and so on.

TARGETING MENTAL HEALTH

But today’s technology works at a far higher level. Consider an example involving Facebook. In 2017, the newspaper The Australian published an article, based on a leaked document from Facebook, revealing that the company had told advertisers that it could predict when younger users, including teenagers, were feeling “insecure,” “worthless” or otherwise in need of a “confidence boost.” Facebook was apparently able to draw these inferences by monitoring photos, posts and other social media data.

Facebook denied letting advertisers target people based on those characteristics, but it’s almost certainly true that it has that capacity. Indeed, academic researchers demonstrated last year that they were able to predict depression in Facebook users by analysing their social media data — and they had access to far less data than Facebook does. Even if Facebook does not now market its ability to glean your present or future mental health from your social media activity, the fact that it (and any number of other, less visible actors) can do this should worry you.

It is worth stressing that today’s computational inference does not merely check to see if Facebook users posted phrases like “I’m depressed” or “I feel terrible.” The technology is more sophisticated than that: Machine-learning algorithms are fed huge amounts of data, and the computer programme itself categorises who is more likely to become depressed.
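
To make this concrete, here is a minimal, hypothetical sketch of such a pipeline in Python. It is not Facebook’s actual system: the handful of posts, the labels and the scale are all invented for illustration, and real systems train on millions of posts, photos and interaction signals.

```python
# A toy sketch of the kind of machine-learning pipeline described above.
# NOT Facebook's code; the tiny dataset and labels are invented purely
# for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: users' posts paired with a label obtained
# elsewhere (for instance, a screening questionnaire in a study).
posts = [
    "had a great weekend hiking with friends",
    "can't sleep again, everything feels pointless",
    "so excited about the new job!",
    "skipped class all week, no energy for anything",
]
labels = [0, 1, 0, 1]  # 1 = later screened positive for depression

# The model learns which word patterns correlate with the label;
# nobody hand-codes trigger phrases like "I'm depressed".
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# Score a new, superficially neutral post.
print(model.predict_proba(["haven't left the house in days"])[0][1])
```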

REVEALING SECRETS

Consider another example. In 2017, academic researchers, armed with data from more than 40,000 Instagram photos, used machine-learning tools to accurately identify signs of depression in a group of 166 Instagram users. Their computer models turned out to be better predictors of depression than humans who were asked to rate whether photos were happy or sad and so forth.

Used for honourable purposes, computational inference can be a wonderful thing. Predicting depression before the onset of clinical symptoms would be a boon for public health, which is why academics are researching these tools; they dream of early screening and prevention.

But these tools are worrisome, too. Few people posting photos on Instagram are aware that they may be revealing their mental health status to anyone with the right computational power.

Computational inference can also be a tool of social control. The Chinese government, having gathered biometric data on its citizens, is trying to use big data and artificial intelligence to single out “threats” to Communist rule, including the country’s Uighurs, a mostly Muslim ethnic group.

Such tools are already being marketed for use in hiring employees, detecting shoppers’ moods and predicting criminal behaviour. Unless they are properly regulated, in the near future we could be hired, fired, granted or denied insurance, accepted to or rejected from college, offered or refused rental housing, and extended or denied credit based on facts that are inferred about us.

This is worrisome enough when it involves correct inferences. But because computational inference is a statistical technique, it also often gets things wrong — and it is hard, and perhaps impossible, to pinpoint the source of the error, for these algorithms offer little to no insight into how they operate. What happens when someone is denied a job on the basis of an inference that we aren’t even sure is correct?
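
A back-of-the-envelope calculation makes the danger plain. The rates below are assumptions chosen for illustration, not measurements of any real system; the point is that a tool that sounds accurate can still be wrong about most of the people it flags when the trait it looks for is rare.

```python
# Bayes-rule arithmetic with ASSUMED numbers. Suppose 5% of users are
# actually depressed, the tool catches 90% of them (sensitivity) and it
# wrongly flags 10% of healthy users (false-positive rate).
prevalence = 0.05
sensitivity = 0.90
false_positive_rate = 0.10

true_positives = prevalence * sensitivity                 # 0.045
false_positives = (1 - prevalence) * false_positive_rate  # 0.095

# Probability that a flagged user is actually depressed:
ppv = true_positives / (true_positives + false_positives)
print(f"{ppv:.0%} of flagged users are truly depressed")  # about 32%
```

Under these assumed rates, roughly two out of every three people the tool flags are flagged in error.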

PHONES AREN’T EXEMPT

Another troubling example of inference involves your phone number. It is increasingly an identifier that works like a Social Security number — it is unique to you. Even if you have stayed off Facebook and other social media, your phone number is almost certainly in many other people’s contact lists on their phones. If they use Facebook (or Instagram or WhatsApp), they have been prompted to upload their contacts to help find their “friends,” which many people do.

Once your number surfaces in a few uploads, Facebook can place you in a social network, which helps it infer things about you since we tend to resemble the people in our social set. (Facebook even keeps “shadow” profiles of nonusers and deploys “tracking pixels” situated all over the web — not just on Facebook — that transmit information about your behaviour to the company.)
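
A toy sketch, with invented names and numbers, shows how little machinery this kind of shadow-profile inference needs: once a phone number appears in a few uploaded contact lists, an attribute can be guessed by a simple majority vote over the people who hold it.

```python
# Hypothetical illustration of inference by social-graph "homophily":
# guess an unknown person's attribute from the users whose uploaded
# contact lists contain their number. All data here is invented;
# real platforms work at billion-node scale with far richer signals.
from collections import Counter

# Uploaded contact lists: uploader -> phone numbers they hold.
uploads = {
    "alice": ["+60-111", "+60-222"],
    "bob":   ["+60-111", "+60-333"],
    "carol": ["+60-111"],
}

# Attributes the platform already knows (or has inferred) about uploaders.
political_lean = {"alice": "left", "bob": "left", "carol": "right"}

def infer_attribute(phone: str) -> str:
    """Majority vote over everyone who holds this number in their contacts."""
    votes = [political_lean[u] for u, nums in uploads.items() if phone in nums]
    return Counter(votes).most_common(1)[0][0]

# "+60-111" has never joined the platform, yet is guessed to lean left.
print(infer_attribute("+60-111"))
```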

Last year, an investigation led by Senator Ron Wyden, Democrat of Oregon, revealed that Verizon, T-Mobile, Sprint and AT&T were selling people’s real-time location data. An investigative report last year by The New York Times also showed that weather apps including the Weather Channel, AccuWeather and WeatherBug were selling their users’ location data. This kind of data isn’t useful just for tracking you but also for inferring things about you. What were you doing at a cancer clinic? Why were you leaving the house of a woman who is not your wife at 5am?

The journalist Kashmir Hill has reported on cases in which Facebook suggested to a psychiatrist’s patients that they were potential “Facebook friends,” suggested that people “friend” the person with whom their spouse was having an affair and outed prostitutes’ real identities to their clients. We don’t want corporations (or governments) to make such connections, let alone exploit this to “grow” their platform.

What is to be done? Designing phones and other devices to be more privacy-protected would be a start, and government regulation of the collection and flow of data would help slow things down. But this is not the complete solution. We also need to start passing laws that directly regulate the use of computational inference: What will we allow to be inferred, and under what conditions, and subject to what kinds of accountability, disclosure, controls and penalties for misuse?

Until we have good answers to these questions, you can expect others to continue to know more and more about you — no matter how discreet you may have been.
