Business Standard

QUANTUM LEAP

- DEVANGSHU DATTA

The Internet and cellphone services were launched in India in the mid-1990s. The enabling legislation was the Indian Telegraph Act of 1885, which continued to govern the sector until the Information Technology Act was enacted in 2000. The old law was absolutely silent on multiple issues peculiar to cyberspace and mobile telephony. (The new law was also inadequate.)

This is an extreme example. But one of the problems with new technology is that it can render existing law obsolete. The opposite situation rarely occurs: legislation is rarely so futuristic in its approach that it forces technology to find new solutions.

The European Union’s incoming General Data Protection Regulation (GDPR) may actually trigger such an unusual situation. The GDPR will be enforced in EU member states from May 2018, and some sections of the law give citizens the right to demand explanations of decisions made by algorithms.

The EU already has strong data protection laws, which have been strengthened further in the new legislation. Existing data protection laws give EU citizens the right to know about, and access, data collected and held about them by governments and private corporations. They even give them the “right to be forgotten” in search results in certain cases.

Articles 13 and 14 of the new legislation deal with “a data subject having the right to meaningful information about the logic involved in algorithmic decisions that affect them”. Article 22 of the new law pertains to “Automated individual decision-making, including profiling”, and it bans decisions “based solely on automated processing, including profiling, which produces an adverse legal effect concerning the data subject, or significantly affects him or her.”

Taken together, this could mean a sea change in the way algorithms are designed and presented. In effect, Article 22 implies that a human element would have to be present in the chain if algorithms are run to make key decisions that affect individuals. More importantly, the decisions would have to be explained in natural language that makes sense to humans.

Machine learning (more or less a synonym for artificial intelligence, or AI) is often used to make routine decisions in personal finance. When somebody applies for a loan, or a credit card, the yes/no decision is often made by a machine. The credit limit, interest rates, tenure, and other details are also likely to be automated. Similarly, intelligent agents suggest portfolio allocations for savings.

Law-enforcement decisions such as profiling for “no-fly” lists or additional security checks on air passengers are often made by running algorithms. AI is also increasingly effective at esoteric tasks like reading body language and even at figuring out sexual orientation.

It may seem easy enough to introduce a human being into the chain to sign off on some decisions. But even the data scientists who write the machine-learning programs might find it impossible to explain the machine’s decisions.

Machine learning works by setting up programs that consider multiple variables and then feeding them huge quantities of data. The program sifts the data, sets its own rules, and finds patterns and correlations. It is, in effect, a black box: Mr X is given a credit limit of so much; Ms Y gets a different limit; Mr Z is refused a credit card. Nobody bar the machine knows why, and it would not be easy, or even possible, for a human being to figure out the machine’s logic.
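To make the black-box point concrete, here is a minimal sketch in Python, using the scikit-learn library, of the kind of automated credit decision described above. The applicant features, the training data and the three applicants are all invented for illustration; the point is only that the trained model hands back a yes/no answer with no human-readable reasoning attached.

```python
# A minimal, illustrative sketch (not from the column): a model trained on
# historical outcomes returns approve/refuse decisions, but no reason for
# any one applicant. All data is synthetic; feature names are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 5000

# Hypothetical applicant features.
X = np.column_stack([
    rng.normal(50_000, 15_000, n),   # annual income
    rng.normal(10_000, 5_000, n),    # outstanding debt
    rng.integers(21, 70, n),         # age
    rng.integers(0, 10, n),          # postcode band (a possible proxy variable)
])
# Past repayment outcomes, generated here from an arbitrary hidden rule.
y = ((X[:, 0] - 1.5 * X[:, 1] + 300 * X[:, 3]
      + rng.normal(0, 8_000, n)) > 40_000).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Three new applicants: Mr X, Ms Y, Mr Z.
applicants = np.array([
    [48_000, 12_000, 34, 2],
    [52_000,  6_000, 29, 8],
    [45_000, 18_000, 51, 1],
])
for name, decision in zip(["Mr X", "Ms Y", "Mr Z"], model.predict(applicants)):
    # The model offers no explanation; its "logic" is spread across
    # hundreds of decision trees and thousands of learned thresholds.
    print(name, "approved" if decision else "refused")
```

Tools such as feature-importance scores exist, but they describe the model in aggregate rather than explaining why one particular applicant was refused, which is closer to what the GDPR appears to demand.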

We don’t know if the machine is transparent and fair, or if it is basing decisions on racial or religious factors. For example, somebody who lives in a low-income, minority-dominated area may be refused credit by an AI. Is it because it’s a low-income area, or because that person belongs to a minority?

One classic case study that’s often mentioned in machine learning involved the examination of pneumonia cases. A program discovered that asthmatics with pneumonia had a higher recovery rate than general-category patients. The reason is that asthmatics get immediate emergency care while general patients tend to be treated more casually; a model that took the pattern at face value would wrongly classify asthmatics as low-risk.

In August, a behavioural expert and data scientist at Stanford’s Graduate School of Business published a study in which he claimed that a facial-recognition program he had developed could identify the sexual orientation of people with very high accuracy by looking at profile pictures pulled off social media. Human beings have a strike rate of about 60 per cent – little better than random guesses – at identifying sexual orientation by viewing faces. The program was correct 91 per cent of the time with men, and about 83 per cent of the time with women.

Another AI program, “Silent Talker”, claims to work as a “lie detector” by looking at microscopic changes in facial expression while people answer questions. Similarly, body-language recognition programs can identify people with their faces obscured, even at a distance. In all these cases, the programmers don’t really know what the machine is picking up.

The new GDPR will force computer scientists to direct research into this area. That could lead to much greater insight into machine learning and a better understanding of how algorithms become biased.
