ered. However, if we are to be effective against big data algorithms smart enough to accurately cull a single strand of information from vast volumes of random data, we need to go much further than that. To fool modern pattern-matching technologies, we will need to generate noise so similar to the data being sought that identifying it becomes ambiguous and difficult to exploit.
Social media services encourage us to share how we feel, allowing us to like or offer other “reactions” to posts in our network. While this might seem like an innocuous way of patting a friend on the back, the algorithms that drive these services use our reactions to improve their understanding of us so that they can serve us more information about things we like.
This improved understanding helps them target products and services at us more accurately, and has become so precise that law enforcement agencies increasingly use social media data to detect potential threats or head off anti-national behaviour.
This level of intrusion can be discomforting. In response, some people have already resorted to obfuscating their social graph by deploying programs that randomly assign reactions to posts in ways that have no bearing on their actual feelings. This confuses the algorithms about what we like and dislike, effectively neutering their predictive capabilities. Other programs randomly click on products served up on e-commerce websites, confusing the shopping algorithms into building inaccurate profiles of our preferences. Still others are aimed at fooling ad engines that serve us tailor-made advertisements, by randomly showing interest in services and products that have no bearing on what we actually like.
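The core idea behind these obfuscation programs can be sketched in a few lines of Python. This is a minimal, hypothetical illustration, not the code of any actual tool: it simply picks reactions uniformly at random, so that over many posts the aggregate signal carries no information about real preferences.

```python
import random

# The standard set of reactions offered by a typical social network.
REACTIONS = ["like", "love", "haha", "wow", "sad", "angry"]

def obfuscating_reaction():
    """Pick a reaction uniformly at random, with no regard to the
    post's content -- pure noise from a preference model's viewpoint."""
    return random.choice(REACTIONS)

# Over thousands of posts, every reaction appears in roughly equal
# proportion, so the distribution reveals nothing about real taste.
counts = {r: 0 for r in REACTIONS}
for _ in range(6000):
    counts[obfuscating_reaction()] += 1
```

Because each reaction is equally likely, the profile the algorithm builds from this stream is statistically flat: the noise drowns out the signal.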
All these technologies generate noise: random behaviour patterns that fool algorithms into thinking you are someone other than who you really are. By cloaking your true identity in this noise, they offer you privacy and seclusion even though you remain out there in the open.
These principles are slowly being extended into the real world. Facial recognition algorithms are designed to identify faces by searching for specific shapes and reference features in digital images.
Given the ubiquity of CCTV cameras and webcams, the threat that they can be used to identify us wherever we go is real. To combat this, people have begun to apply makeup, cut their hair in particular ways or wear masks designed to confuse these algorithms by obscuring the very features they are trained to identify.
A couple of years ago, I tried an obfuscation experiment of my own.
I’ve long been irritated by the fact that apps such as Truecaller are able to identify me even though I haven’t signed up for their service. These applications scan their users’ address books and use the information in there to build up a database of all the phone numbers in the world.
As a result, even though I had never registered, if even one of my friends had signed up and shared his phone book with them, all my contact information was on their servers. As more and more of my friends signed up, the fact that my mobile phone number corresponded to my name was corroborated with greater certainty.
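The aggregation the column describes can be modelled simply. The sketch below is a hypothetical reconstruction, with made-up numbers and names, assuming the service counts how many uploaded address books report each name for a number and returns the most corroborated one.

```python
from collections import Counter, defaultdict

# Hypothetical uploads: each registered user shares an address book
# mapping phone numbers to the names *they* happened to store.
uploaded_books = [
    {"+91-1111111111": "Rahul M", "+91-2222222222": "Priya"},
    {"+91-1111111111": "Rahul Matthan"},
    {"+91-1111111111": "Rahul M"},
]

# Tally every (number, name) report across all uploaded books.
directory = defaultdict(Counter)
for book in uploaded_books:
    for number, name in book.items():
        directory[number][name] += 1

def lookup(number):
    """Return the most frequently reported name for a number --
    more corroborating uploads means higher confidence -- or None."""
    names = directory.get(number)
    return names.most_common(1)[0][0] if names else None
```

Note that the person being looked up never consented to any of this: the data comes entirely from other people's phone books.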
So I decided to fight fire with fire: I registered myself as a user. To sign up, I was asked to verify my identity with a one-time password (OTP).
Once the validity of my mobile phone number had been confirmed, I entered a completely fictitious name into my user profile. Since my mobile number had been authenticated using OTP verification, the server believed the information that I had just keyed in over the evidence of all the hundreds of other address books in their server that carried my details.
So now, if you look up my mobile number you will get a result, just not the one you were expecting.
But it is me. Hiding out there in plain sight.
Rahul Matthan is a partner at Trilegal. Ex Machina is a column on technology, law and everything in between.
His Twitter handle is @matthan