Deccan Chronicle

The threat of stereotypes in AI

You feel the world is following you because Facebook gives you content which it thinks you like

- BALAJI VISHWANATHAN (The writer is the CEO of Invento Robotics, creator of Mitra Robot)

Recently a customer of ours brought us an unusual request. They wanted our robot to detect whether a customer entering their premises was wearing a particular brand of suit. Like any good marketing company, they wanted to identify and categorise individuals based on particular behavioural characteristics, then have our robots interact with them in a specific way based on the bucket they fit into. While the technology challenge sounded interesting, it got me thinking about what the future could hold for us. What if Artificial Intelligence deepens our stereotypes rather than reducing them?

As the programs around us get more intelligent, we face new risks that many of us might be unaware of. Facebook is one clear example where you can see the bias. If you have wondered why it looks like the world is following the opinions you hold, it is because Facebook gives you only the content it thinks you would like. Your bias deepens rather than disappears as you connect to the world.
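To make the mechanism concrete, here is a rough sketch, not Facebook's actual system, just an illustration with made-up topic labels, of how a feed ranked purely on predicted engagement narrows what you see: posts similar to what you already clicked rise to the top, and the opposing view quietly drops to the bottom.

```python
# Illustrative sketch only: a naive "engagement first" feed ranker.
# Topic labels and data are invented; real recommender systems are far
# more sophisticated, but the narrowing effect is the same.
from collections import Counter

def rank_feed(candidate_posts, click_history):
    # Score each candidate by how often the user already clicked its topic.
    clicked_topics = Counter(post["topic"] for post in click_history)
    return sorted(candidate_posts,
                  key=lambda post: clicked_topics[post["topic"]],
                  reverse=True)

history = [{"topic": "party_a_news"}] * 8 + [{"topic": "cricket"}] * 2
candidates = [
    {"id": 1, "topic": "party_a_news"},   # matches existing clicks: shown first
    {"id": 2, "topic": "cricket"},
    {"id": 3, "topic": "party_b_news"},   # the opposing view: ranked last
]
for post in rank_feed(candidates, history):
    print(post["id"], post["topic"])
```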

When the Americans originally envisioned the Internet, they designed a robust distributed system that could survive a Soviet nuclear attack. However, in 2016 the Russians used the same power of the Internet and biased AI algorithms to manipulate the US Presidential elections. The Russians didn't have to fire a missile or send a soldier. If you can manipulate elections in a democracy, you have the ultimate weapon in your hands.

Twenty-five years ago, when people dreamt of the Internet, they thought of an open system that allowed a diversity of opinions and would enlighten the masses. In 2018, we live in closed wells inside WhatsApp, Facebook, Snapchat and the like, where the big companies serve us the ice creams we like and make us addicts, but never the kale that tastes bad while doing wonders for our bodies.

Gender biases, too, get quite severe with AI. Think of the following example from Google Translate.

I translated a text from English to Hindi and, when I translated it back, it brought up gender biases. The lady went from being a doctor to a nurse, and the guy got upgraded from nurse to doctor. This is because when we translate from a gender-neutral language to one with gender, the system has to guess what the appropriate gender might be. When it guesses, it relies on its pretrained models, in which a man is more likely to be a doctor.

You can also try the translation with a variety of other texts with traditional gender roles and be amazed at how the system mimics the world with all its biases.
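If you want to try this yourself, the round trip is easy to script. Here is a minimal sketch; I have used the unofficial googletrans Python package purely as an example (any translation API would do, the exact call may differ between versions, and the output will depend on the model of the day). Because the Hindi pronoun does not carry gender, the trip back to English has to guess he or she, and the guess follows the statistics the model was trained on.

```python
# A minimal sketch of the English -> Hindi -> English round trip described
# above. The 'googletrans' package is an unofficial client and is used here
# only as an example; results will vary with the underlying model.
from googletrans import Translator

translator = Translator()

sentences = ["She is a doctor.", "He is a nurse."]

for original in sentences:
    hindi = translator.translate(original, src="en", dest="hi").text
    back = translator.translate(hindi, src="hi", dest="en").text
    # The Hindi pronoun does not mark gender, so the way back is a guess.
    print(f"{original}  ->  {hindi}  ->  {back}")
```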

You might think it is funny and harmless. But given that these translations are going to help a generation of kids around the world learn from Google Books and YouTube [those without English as their first language might turn on the translations], they might grow up with the biases our generation fought so hard against.

It is not that people like me, who design these algorithms and systems, are evil. But we often fall victim to our own biases. I'm sure the folks at Facebook and Google didn't design their systems so that the National Security Agency (of the US), the Russians and other major powers, along with businesses, could take your data. However, by designing something so powerful without being conscious of its social implications, they can bring about great harm. Engineering teams often live in sheltered environments and pride themselves on their power of logic. This makes them quite vulnerable to being blinded by social biases.

As the algorithms get powerful and ubiquitous, our entire lives might be under threat when the algorithms go wrong. Some cars are now driven by voice commands. Here is the kicker: if Siri or Google Assistant doesn't pick up your female voice, you might not be alone. In general, many voice algorithms are trained with far more male voice samples than female ones. What if you, as a female driver, are driving one of these cars and, at the wrong moment, the algorithm misinterprets your commands?
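The data-imbalance problem is easy to demonstrate even with a toy model. The sketch below uses entirely synthetic "voices" and a simple classifier from scikit-learn, nothing like a real speech system, but it shows how a model trained on 90 per cent of one group and 10 per cent of another ends up making more mistakes on the smaller group.

```python
# Toy demonstration of training-data imbalance. The "acoustic feature" and
# group boundaries are invented; real speech recognisers are far more
# complex, but the effect of a skewed training set is the same.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def group_samples(n, boundary):
    # One fake acoustic feature; the correct label flips at a different
    # threshold for each group (a stand-in for pitch-dependent acoustics).
    x = rng.uniform(0.0, 1.0, n)
    y = (x > boundary).astype(int)
    return x.reshape(-1, 1), y

# Training set: 9,000 samples from group A, only 1,000 from group B.
X_a, y_a = group_samples(9000, boundary=0.4)
X_b, y_b = group_samples(1000, boundary=0.6)
model = LogisticRegression().fit(np.vstack([X_a, X_b]),
                                 np.concatenate([y_a, y_b]))

# Fresh, equally sized test sets for each group.
X_a_test, y_a_test = group_samples(2000, boundary=0.4)
X_b_test, y_b_test = group_samples(2000, boundary=0.6)
print("accuracy for group A:", model.score(X_a_test, y_a_test))
print("accuracy for group B:", model.score(X_b_test, y_b_test))  # noticeably lower
```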

Speaking of Siri, Alexa and Google Assistant, you might notice that the popular ones all have female voices and they all take commands primarily from male users. The users can get a bit funny and abusive. Since the algorithms won't fight back, is it possible that a generation will grow up with these voice assistants and get away with abusing them all the time, before they have a chance to form deep relationships with their female human counterparts?

These might look like futuristic worries, but I can assure you they are not. Over the years I have been part of multiple top companies in the US, working primarily with data. And you might be amazed at how much the data says about you.

A few years ago, I ran a productivity app in the Bay Area of California [also called Silicon Valley] that had access to large amounts of data on tens of thousands of people. A couple of investors approached me and asked me to monetise the personal data we had. I had put my personal credibility into the app and thus walked out of the transaction. I was not able to keep the company running and eventually had to quit. A few others might have taken the deal, not because they are evil but because they might underestimate the personal harm that selling users' data can cause.

I was scared to find that I could pick a random user of our app and find out how much time they spent on porn. That kind of power scared me, and I do believe we need social controls over it. Thankfully, I don't run that company any more. In my present company we still deal with a lot of data, but nothing as pointed.

While it hurts to say this and might hurt our financial fortunes a little, I believe we as a society need to evolve stronger policies that protect people's data and build a win-win situation where users can benefit from our technology and, at the same time, protect themselves from the powerful weapons we might be building inadvertently.

We need more social oversight of the data that companies collect. Just as with financial auditors, society must mandate that large companies get their data audited. We need to test their systems for the biases they inherit from society, and this testing need not be any different from the crash testing cars go through.
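What might such an audit look like in practice? Here is one rough sketch, with invented column names and toy data: take a model's decisions alongside the real outcomes and a protected attribute, report the rates per group, and flag any gap beyond a chosen threshold, much as a crash test flags a failing car.

```python
# A rough sketch of an automated bias audit. Column names and the toy data
# are invented for illustration; a real audit would cover many more metrics.
import pandas as pd

def audit(df, group_col, decision_col, truth_col, max_gap=0.10):
    rows = []
    for group, g in df.groupby(group_col):
        rows.append({
            "group": group,
            "n": len(g),
            "positive_rate": g[decision_col].mean(),
            "error_rate": (g[decision_col] != g[truth_col]).mean(),
        })
    report = pd.DataFrame(rows).set_index("group")
    gap = report["positive_rate"].max() - report["positive_rate"].min()
    if gap > max_gap:
        print(f"FLAG: positive-rate gap of {gap:.2f} exceeds {max_gap:.2f}")
    return report

# Toy example: loan approvals broken down by gender.
toy = pd.DataFrame({
    "gender":   ["F", "F", "F", "F", "M", "M", "M", "M"],
    "approved": [0,   0,   1,   0,   1,   1,   0,   1],
    "repaid":   [1,   0,   1,   1,   1,   0,   0,   1],
})
print(audit(toy, "gender", "approved", "repaid"))
```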

An unregulated free market is more harmful to itself than to anyone else. Ten years ago, governments refused to regulate toxic financial assets such as Collateralised Debt Obligations (CDOs) and Credit Default Swaps (CDS), and I had the misfortune of watching the entire action unfold while visiting Wall Street for a round of investment banking interviews I had scheduled that week. The banks had far too much power, and the same should not happen to companies that operate with data. Nothing is too big to fail.

And it is as silly to expect the tech industry to police itself as it was to expect the banks to police themselves in 2007.

