Texarkana Gazette

EVEN COMPUTER ALGORITHMS CAN BE BIASED

Scientists have different ideas of how to prevent that

By Amina Khan, Los Angeles Times

Scientists say they’ve developed a framework to make computer algorithms “safer” to use without creating bias based on race, gender or other factors. The trick, they say, is to make it possible for users to tell the algorithm what kinds of pitfalls to avoid — without having to know a lot about statistics or artificial intelligence.

With this safeguard in place, hospitals, companies and other potential users who may be wary of putting machine learning to use could find it a more palatable tool for helping them solve problems, according to a report in this week’s edition of the journal Science.

Computer algorithms are used to make decisions in a range of settings, from courtrooms to schools to online shopping sites. The programs sort through huge amounts of data in search of useful patterns that can be applied to future decisions.

But researchers have been wrestling with a problem that’s become increasingly difficult to ignore: Although the programs are automated, they often provide biased results.

For example, an algorithm used to determine prison sentences predicted higher recidivism rates for black defendants found guilty of crimes and a lower risk for white ones. Those predictions turned out to be wrong, according to a ProPublica analysis. Biases like this often originate in the real world. An algorithm used to determine which patients were eligible for a health care coordination program was under-enrolling black patients largely because the code relied on real-world health spending data — and black patients had fewer dollars spent on them than whites did.

Even if the information itself is not biased, algorithms can still produce unfair or other “undesirable outcomes,” said Philip Thomas, an artificial intelligence researcher at the University of Massachusetts Amherst and lead author of the new study.

Sorting out which processes might be driving those unfair outcomes, and then fixing them, can be an overwhelming task for doctors, hospitals or other potential users who just want a tool that will help them make better decisions.

“They’re the experts in their field but perhaps not in machine learning — so we shouldn’t expect them to have detailed knowledge of how algorithms work in order to control the behavior of the algorithms,” Thomas said. “We want to give them a simple interface to define undesirable behavior for their application and then ensure that the algorithm will avoid that behavior with high probability.”

So the computer scientists developed a different type of algorithm that allowed users to more easily define what bad behavior they wanted their program to avoid.

This, of course, makes the algorithm designers’ job more difficult, Thomas said, because they have to build their algorithm without knowing in advance which biases or other problematic behaviors the eventual user will want the program to avoid.

“Instead, they have to make the algorithm smart enough to understand what the user is saying is undesirable behavior, and then reason entirely on its own about what would cause this behavior, and then avoid it with high probability,” he said. “That makes the algorithm a bit more complicated, but much easier for people to use responsibly.”
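To make that idea concrete, here is a minimal sketch in Python of how such an interface could work. The function names, the scikit-learn-style `.predict` call, the bootstrap bound and the thresholds are illustrative assumptions, not the researchers’ actual code, which relies on tighter statistical guarantees: the user supplies a definition of the undesirable behavior, and the training procedure returns a model only if a high-confidence check on held-out data says that behavior is avoided.

```python
import numpy as np

def error_gap(predictions, y_true, group):
    # User-supplied definition of "undesirable behavior": the difference
    # in average prediction error between two groups of people.
    err = predictions - y_true
    return abs(err[group == 0].mean() - err[group == 1].mean())

def passes_safety_test(model, X_safety, y_safety, group,
                       limit=0.05, delta=0.05, n_boot=1000):
    # Check that, with probability roughly 1 - delta, the error gap on
    # unseen data stays below `limit`. A bootstrap upper bound stands in
    # here for the tighter concentration inequalities used in the paper.
    rng = np.random.default_rng(0)
    preds = model.predict(X_safety)
    gaps = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_safety), size=len(y_safety))
        gaps.append(error_gap(preds[idx], y_safety[idx], group[idx]))
    return np.quantile(gaps, 1 - delta) <= limit

def train_with_safeguard(candidate_models, X_safety, y_safety, group):
    # A trainer in this style returns no model at all rather than one
    # that fails the user's safety test.
    for model in candidate_models:
        if passes_safety_test(model, X_safety, y_safety, group):
            return model
    return None  # nothing met the user's definition of acceptable behavior
```

The point of the design is that the person running the code only has to write (or pick) the first function; the statistical reasoning about whether the constraint will hold on future data is the algorithm’s job.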

To test their new framework, the researchers tried it out on a dataset of entrance exam scores for 43,303 Brazilian students and the grade point averages they earned during their first three semesters at college.

Standard algorithms that tried to predict a student’s GPA based on his or her entrance exam scores were biased against women: The grades they predicted for women were lower than the grades the women actually earned, and the grades they predicted for men were higher than the men earned. This caused an error gap between men and women that averaged 0.3 GPA points — enough to make a major difference in a student’s admissions prospects.

The new algorithm, on the other hand, shrank that error range to within 0.05 GPA points — making it a much fairer predictor of students’ success.
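For readers curious what that error gap measures, here is a small, self-contained Python example with made-up numbers (not the study’s data) that reproduces a 0.3-point gap between the average prediction error for men and for women:

```python
import numpy as np

# Made-up GPAs, chosen only to mirror the pattern the article describes:
# a standard predictor lowballs women and overshoots men.
actual_gpa    = np.array([3.4, 3.1, 3.7,  2.9, 3.2, 3.0])
predicted_gpa = np.array([3.2, 2.9, 3.5,  3.0, 3.3, 3.1])
is_female     = np.array([True, True, True, False, False, False])

error = predicted_gpa - actual_gpa          # negative = under-prediction
gap = error[~is_female].mean() - error[is_female].mean()
print(f"Error gap between men and women: {gap:.2f} GPA points")  # 0.30
# The article reports an average gap of about 0.3 for standard predictors,
# which the new framework reportedly shrinks to within 0.05.
```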


ABOVE: A student fills in his answer to a practice test question for a standardized test. Scientists say they’ve developed a framework to make computer algorithms “safer” to use without creating bias based on race, gender or other factors. (Tribune News Service)
