Hindustan Times (Delhi)

Understand the flip side to data personalisation

The personal information we give up in order to improve our user experience can be weaponised against us

This is the second in a series of articles on data privacy. Rahul Matthan is partner, Trilegal. The views expressed are personal.

Every time an incident like Cambridge Analytica occurs, we feel compelled to regulate the hell out of the industry responsible. We believe that none of this would have happened if the right laws had been in place, and that now that everything has gone pear-shaped, we should double down and plug the loopholes that allowed the incident to occur.

But what exactly is it we need to regulate? Modern tech companies are focused on taking the friction out of our daily lives. They do this by knowing us really well, understanding who we are and what we like, not by employing armies of psychologists but by developing really smart algorithms capable of sorting us into categories that optimally define us. This knowledge of who we are helps improve our experience of their services, as it allows them to provide features customised to our likes and preferences. This is what keeps us coming back to them: a user experience that magically gives us what we want without our asking for it.

Personalisation is at the core of almost every business today. Everyone wants to understand us better so they can give us what we want in a personally differentiated way. We like the fact that their algorithms do the heavy lifting for us, serving up recommendations of what we like without our having to work to find it.

There are other benefits to personalisation. As we understand our bodies better, we have realised that customised treatment that caters to the ailments of our individual phenotype and personal microbiome is far more effective, and has potentially fewer side effects, than the broad-spectrum treatment methodology we have followed for centuries. Precision medicine is fast becoming a reality, and as we couple our fast-improving knowledge of our genome with technologies that enhance our ability to monitor personal parameters using wearable smart devices, it will not be long before our medications are titrated daily to our individual requirements.

But there is a flip side to personalisation. As we allow the services we use to know more about us, services we do not use will come to know us as well. The information we give up in order to improve our experience can, just as easily, be weaponised against us, placing us in filter bubbles where the information we receive is limited and our access to necessary counterpoints to our staunchly held beliefs is censored. It can render us unusually vulnerable to social engineering, leaving us open to identity theft. This dilemma is central to the regulation of data-driven personalisation. If we allow personalisation, we run the risk that nefarious elements will use it to harm us financially, reputationally and intellectually. If we ban it altogether, or prevent businesses from learning about us, we deprive ourselves of the benefits personalisation brings, and that is rarely in our interest.

This is why we tend to regulate retrospectively, responding to the effects of an algorithm only after they are plainly visible to us. We design our regulations to prevent what just happened from happening again, because regulators are ill-equipped to predict how new technologies will harm us until that harm can be observed.

There is one entity that is capable of assessing the harm an algorithm can potentially cause before it actually occurs: the company that deploys it. If we shift our regulatory focus to incentivise companies to take a broader view of the algorithms they design, forcing them to look beyond the commercial benefits and to ensure that their algorithms do not accidentally expose users to harm, we will be able to regulate the data economy more effectively without depriving ourselves of the benefits it has to offer.

This approach will leave technology companies free to innovate, while setting out broadly articulated boundaries they should not cross. If these boundaries are designed well, with strong and effective punitive consequences, tech companies will be compelled, in their own interest, to look beyond narrow commercial gains and design their algorithms and business processes so that they cause no harm.

Where technology evolves rapidly and has ramifications that can span continents, no regulator will ever be able to prevent harm from happening. We should not expect them to. Instead, we need to design our laws to ensure that those who actually can prevent these consequences will.

IF WE CAN INCENTIVISE COMPANIES TO ENSURE THAT THEIR ALGORITHMS DO NOT ACCIDENTALLY EXPOSE USERS TO HARM, WE WILL BE ABLE TO MORE EFFECTIVELY REGULATE THE DATA ECONOMY


