Machines lie
If you’re looking for a female CEO, don’t try to find one in Google’s image search.
Punch in CEO and watch the white males rise to the top as they do in real life.
When I did this yesterday, the first eight places were men and the first woman was NSW Premier, Gladys Berejiklian.
The next woman was a Getty Images stock photo used on dozens of websites.
This wouldn’t surprise more than 8000 devotees of artificial intelligence who spent days gnawing their white knuckles at Long Beach, California, earlier this month.
Keynote speaker Prof Kate Crawford put it bluntly: “If our systems keep producing biased results, if people are unfairly kept in jail, or they can’t get insurance, or receive incorrect medical treatment, then people will no longer trust these tools.”
She then projected the Google “CEO” search results for the US on a big screen and pointed out the first female exec.
“Can you kinda guess who it would be?” teased Crawford. “She’s right down there on the end.
“It’s CEO Barbie. Seriously. Not a great look.”
Is it bias? Yes, and no. If only 8 per cent of global CEOs are women, then you might say the results reflect the current state of society.
But if machine learning algorithms continue to make decisions based on statistical norms and not our shared goals, then expect unfairness to be embedded in our society like never before.
Crawford cites research where women have been recommended lower paying jobs by employment programs using smarter data matching technology.
Just like Google’s image search, statistical bias has slithered into AI-assisted decision-making at critical moments in a person’s life. Buying a car. Getting a job. Dealing with bureaucracy. We all have weighted numbers against our identities.
At the heart of the problem is the way a machine learns how to classify us.
Unlike a human child, when a machine first learns anything about us, it works from a collection of training data, not from wise teachers.
You might expect that data to be accurate and objective. It’s not.
Historical data captures life as we know it up until now, not life as we want it to be.
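The point can be made concrete with a toy sketch. The data below is invented for illustration, but it mirrors the article’s 8 per cent figure: a naive model that simply predicts the most frequent label in its training records will reproduce the historical skew every time.

```python
# Minimal sketch with hypothetical data: a "classifier" that predicts
# the most common label seen in training, as a naive frequency model would.
from collections import Counter

# Invented historical "CEO" records, roughly matching the 8 per cent
# of global CEOs who are women.
training_data = ["man"] * 92 + ["woman"] * 8

def predict(data):
    # Return the statistically dominant label. The model has no notion
    # of fairness or shared goals, only of frequency in past data.
    return Counter(data).most_common(1)[0][0]

print(predict(training_data))  # the majority label in the history wins
```

Nothing here is malicious; the skew in the output is simply the skew in the record. That is what "life as we know it up until now" looks like to a machine.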
If you don’t like the way a machine is determining housing loans, then you might want to tweak the code with some moral guidance. But whose moral guidance? Humans rarely agree on anything, so getting them to agree on how to tune the moral compass of an artificial intelligence will be a bridge too far.
“This is actually a really hard decision,” Crawford says. “It is not a straightforward question and it has a lot of political implications.”
Take, for example, the Stanford University study earlier this year that could accurately distinguish between gay and heterosexual people using facial recognition technology.
“Gay men had narrower jaws and longer noses, while lesbians had larger jaws,” the researchers reported, with the model reaching more than 80 per cent accuracy for men. The study caused an uproar. For Crawford, this is an issue for ethics in classification because homosexuality is still criminalised in 78 countries, “some of which apply the death penalty”.
The possibilities for “extreme vetting” using technologies such as this are frightening.
If an AI could predict the future, I reckon it would be seriously worried about 2018.
It’s stacking up as a year of reckoning.
Expect crusaders from both the conservative and liberal movements to turn the scientific community into a political and moral battlefield.
It will be ten-fold more virulent than the global warming debate — because the harmful consequences are tangible and immediate.
Bad technology, clickbait research and political expediency will undermine public confidence in a field that promises so much.
And, I fear, science will have lost its faithful.