Hiding in the algorithm: the battle to root out prejudice
Margi Murphy examines why automated decisions have failed to remove bias from the financial sector
On a hot weekend last June, Sarah Jane Carsten walked up to the Hertz kiosk at the airport, having flown home for a friend’s wedding. Because she did not have a credit card, her information was run through an automated system to see whether she was a risk. Despite her good credit score and healthy bank balance, the 26-year-old lawyer was denied the prepaid sedan.
After an expensive Uber journey, she returned the next morning and asked the manager what had happened. The conversation became tense: he held up a computer printout, which stated that it was “unable to supply specific reasons why we have denied your request to pay by debit card”, and, Carsten says, asked her “if I could read”.
When she pointed to the ticket and asked him to show her precisely where it explained the denial, she accidentally knocked over a bottle of water on the counter, and the manager “took several steps back from the counter as if he was afraid”.
Carsten, frustrated, suggested that he was “afraid of black people”. At that point the manager told her he was refusing her any further service and called the airport police.
“My first reaction was pure embarrassment, and the second was anger. It felt as if the whole system was a scam,” she says. Since then, Carsten has tried in vain to find out what information the automated system used to decide she was a risk: was it because she was a woman, because she was black, or was it simply a glitch?
It is this lack of transparency that led the Government to appoint researchers to investigate ingrained algorithmic bias across the financial services sector.
Their report, due to be published by the Centre for Data Ethics and Innovation in April, has been delayed by the coronavirus pandemic.
British consumers have been exposed to automated decision-making for years. On paper, it made sense to hand over to machines decisions about how likely someone is to reoffend, excel in a certain role or repay a loan on time, eliminating from the equation the subconscious prejudices held by humans.
But in the years since, the very historical biases those systems were meant to fight have crept back in.
“Something needs to be done,” says Paul Resnick, a professor at the University of Michigan School of Information. “There needs to be a regime of transparency.”
Scientists have warned for years that algorithms themselves may not discriminate, but the data they are trained on can be biased. If a bank has historically lent mostly to white men, an algorithm trained on those records will be more likely to lend to that group. An algorithm can be accurate and still be unfair.
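It is worth making that mechanism concrete. The sketch below, in Python, is purely illustrative: the feature names, the numbers and the “group” column are all invented, and it stands in for no real bank’s system. Two groups are given identical financial merit, but the historical approvals the model learns from favoured one of them, and the trained model reproduces that gap.

```python
# Hypothetical sketch of historical bias leaking into a model.
# All data, features and numbers are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two equally creditworthy groups: identical income and score distributions.
group = rng.integers(0, 2, n)            # 0 = historically favoured, 1 = not
income = rng.normal(40_000, 10_000, n)
score = rng.normal(650, 50, n)

# Historical decisions: same financial merit, but past loan officers
# approved group 1 far less often. This is the bias baked into the records.
merit = (income - 40_000) / 10_000 + (score - 650) / 50
approved = (merit + rng.normal(0, 1, n) - 1.5 * group) > 0

# Train on those records (features scaled for the solver).
X = np.column_stack([(income - 40_000) / 10_000, (score - 650) / 50, group])
model = LogisticRegression().fit(X, approved)

# The model faithfully reproduces the historical disparity.
preds = model.predict(X)
print("approval rate, group 0:", preds[group == 0].mean())
print("approval rate, group 1:", preds[group == 1].mean())
```

Dropping the group column would not necessarily fix this: any feature correlated with it, a postcode for instance, can act as a proxy.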
“When you interact with systems that are making automated choices about you on your behalf, they make mistakes,” says Prof Resnick.
They make fewer mistakes, on average, than a human, but the sheer scale at which automated decisions are taken means the total volume of mistakes is far higher: a system with a one per cent error rate processing a million applications still gets 10,000 of them wrong. And unlike a human’s mistakes, those errors are rarely corrected and rarely prompt retraining.
“It may not be clear whether it’s a random mistake, or whether it’s something that is being unfair to you based on a characteristic,” Prof Resnick adds.
“I think that’s one of the things that makes it really frustrating. Was I turned down for this loan because I’m black? Sometimes you were, sometimes you weren’t.”
The UK’s anti-discrimination laws protect consumers whether the discrimination is human or algorithmic.
However, academics and officials now fear that the banks themselves may never know the scale of the problem, because their systems have already been trained on incomplete or unrepresentative data and may consist of a hodgepodge of third-party software fed by disparate data brokers.
“Of course bias is there, it is just incredibly difficult to perceive,” says Genie Barton, who sits on the research board of the International Association of Privacy Professionals, based in New Hampshire.
Last year, a man and a woman of equal financial standing were given starkly different credit limits when applying for the new Apple Card. The man, David Hansson, a software engineer, described the card as a “sexist program”, and New York state opened an investigation. How did he find out? The pair were married, and despite Hansson receiving a limit 20 times higher than his wife’s, she actually had the better credit score.
The Financial Ombudsman receives complaints about algorithmic decisions, and its rulings can be viewed online; some reveal the complexity of the systems used by the biggest high street names.
Ironically, British banks feel that new European privacy laws have made it more difficult to eliminate bias because they are barred from collecting data on race, gender or disability to test whether their systems are fair.
Britain has become a pioneer in “sandboxing”: the Financial Conduct Authority runs a programme that lets start-ups and large companies such as Barclays test hypothetical models against synthetic data, so they can be audited for issues such as bias. The programme, however, is voluntary.
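The FCA does not publish the internals of its sandbox, so what follows is only a hypothetical sketch of what an outcome audit might look like: given a model’s decisions on synthetic applicants, it measures the gap in approval rates between two groups, a metric known as demographic parity. The decisions here are randomly generated to stand in for a model under test.

```python
# Hypothetical sketch of an outcome audit on synthetic data; not the FCA's
# actual tooling. `decisions` and `group` would come from a model under test.
import numpy as np

def demographic_parity_gap(decisions: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in approval rates between the two groups."""
    rate_0 = decisions[group == 0].mean()
    rate_1 = decisions[group == 1].mean()
    return abs(rate_0 - rate_1)

# Synthetic decisions standing in for the system being audited.
rng = np.random.default_rng(1)
group = rng.integers(0, 2, 5_000)
decisions = rng.random(5_000) < np.where(group == 0, 0.50, 0.35)

gap = demographic_parity_gap(decisions, group)
print(f"approval-rate gap: {gap:.2%}")  # flag if above an agreed threshold
```

Demographic parity is only one of several competing definitions of fairness; choosing which to enforce is a policy question as much as a technical one.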
Lord Clement-Jones, the Liberal Democrat peer who chairs the Lords artificial intelligence committee, says he is frustrated by the industry’s lack of response despite such encouraging programmes. Now more than ever, he says, finance needs to prove it is working on the deep-rooted issues that keep coming back to haunt us.
“Are we simply repeating the prejudices of the Seventies? That was exactly the question we asked in the House of Lords two years ago,” he says. “And it has still not been answered.”
Computer systems may not discriminate, but the data that they use can be biased, scientists have warned