Business a.m.

Emotional AI and the privacy implications

Michael Irene, PhD

Emotional AI refers "to technologies that use affective computing and artificial intelligence techniques to sense, learn about and interact with human emotional life." In this article, I assess the privacy implications of these technologies and of how their applications make inferences about emotions and moods.

These technologies are gradually emerging and are, without a doubt, producing impressive results. For example, a room can read the body temperature of its occupants and adjust the heating or cooling to suit conditions in the room. Such methodologies are becoming increasingly present in so-called modern buildings.

Cars, games, mobile phones and wearable tech like watches and bands now study our patterns with the sole intention of helping us live better lives. This technology is also used to optimise what Andrew McStay calls "the emotionality of spaces" in workplaces, hospitals, prisons, classrooms, travel infrastructure, restaurants, and retail and chain stores. To put it succinctly, these technologies can now understand the mood in a room and find an avenue to change it.

But what are the privacy implications of emotional AI? Will individuals be willing to let technology collect sensitive data about them daily and allow companies to commercialise those data sets? Empirical evidence shows that less than fifty percent of individuals will allow such intrusiveness even where it leads to no known harm or limitation of their freedoms. Over fifty percent of individuals say they would allow companies to use their emotional AI data if they knew it would help them live better lives, especially from a medical perspective.

The gaps in this new form of data processing become clear when cast against data protection regulations. The information used for emotional AI borders on special category data such as biometrics and health data. Yet these data sets are increasingly used at scale without consideration of the privacy implications.

Some privacy professionals argue that since the data will be anonymised, all privacy implications have been solved. However, new evidence suggests that some anonymisation techniques might become non-anonymous if mixed with other data sets, including publicly available information.
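To illustrate why anonymisation alone may not be enough, here is a minimal, hypothetical sketch of a linkage attack: an "anonymised" emotion log with names removed is joined to an invented public record on shared quasi-identifiers (postcode and birth year), re-attaching identities to the mood readings. All data and column names below are assumptions for illustration, not drawn from any real system.

```python
# Hypothetical linkage-attack sketch: joining an "anonymised" emotion log
# to a public register on quasi-identifiers. All data is invented.
import pandas as pd

# "Anonymised" emotional-AI readings: direct identifiers stripped,
# but quasi-identifiers (postcode, birth year) retained.
emotion_log = pd.DataFrame({
    "postcode":   ["SW1A 1AA", "EC2R 8AH", "SW1A 1AA"],
    "birth_year": [1984, 1991, 1975],
    "mood_score": [0.21, 0.87, 0.55],  # inferred mood, a sensitive signal
})

# Publicly available records (e.g. an electoral-roll-style extract) that
# share the same quasi-identifiers and also carry names.
public_register = pd.DataFrame({
    "name":       ["A. Adeyemi", "B. Okafor", "C. Bello"],
    "postcode":   ["SW1A 1AA", "EC2R 8AH", "SW1A 1AA"],
    "birth_year": [1984, 1991, 1975],
})

# Joining on the quasi-identifiers re-attaches names to the "anonymous" moods.
reidentified = emotion_log.merge(public_register, on=["postcode", "birth_year"])
print(reidentified[["name", "mood_score"]])
```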

Given that companies would be using special category data on a wide scale here, it is important to create explicit opt-in controls. Yet, despite the increasing role of emotion in data analytics and in facilitating human-machine interaction, clear opt-in methodologies are still absent.

Although most data protection laws make no reference to emotions or to how organisations should use these data sets, it is important that companies begin to give serious consideration to data protection principles, especially privacy-by-design and privacy-by-default methodologies. These new technologies and artificial intelligence do portend new privacy risks, and only serious organisations will put data privacy controls and procedures in place to aid the successful launch of such products and projects. The future of data analytics must be strictly guided by the regulatory perspective, meaning that the security and safety of individuals must remain under the close scrutiny of the organisations processing such data sets.

Michael Irene, a Fellow of the Higher Education Academy, United Kingdom, is Managing Partner of Mirene Global Consults; he can be reached at mike@mireneglobalconsults.com.ng and via Twitter: @moshoke.
