Miami Herald (Sunday)

Algorithms can perpetuate bias. Update laws to fight this insidious form of discrimination

BY VINHCENT LE, Progressive Media Project. Vinhcent Le is technology equity legal counsel at The Greenlining Institute. ©2021 Tribune Content Agency

For some of us, the word “algorithm” is fairly new to our vocabulary. But badly designed decision-making algorithms have a growing impact on our lives and can do a great deal of damage.

An algorithm is a set of instructions used by computer systems to perform a task or make a decision. On social-media platforms, for example, algorithms decide what ads appear based on what content a user looks at, likes or shares.

As we discovered in a new Greenlining Institute report on algorithmic bias, these algorithms may be used to decide everything from whether someone gets a job interview or mortgage, to how heavily one’s neighborhood is policed.

“Poorly designed algorithms,” we wrote, “threaten to amplify systemic racism by reproducing patterns of discrimination and bias that are found in the data algorithms use to learn and make decisions.”

Algorithms can be put to good use, such as helping manage responses to the COVID-19 pandemic, but things also can go seriously wrong. Sometimes, algorithms replicate the conscious or unconscious biases of the humans who designed them, disadvantaging whole groups of people, often without them even knowing it’s happening.

Like humans, algorithms “learn” — in the latter case through what’s called “training data,” which teaches the algorithm to look for patterns in bits of information.

That’s where things can start to go wrong.

Consider a bank whose historical lending data shows that it routinely gave higher interest rates to people in a ZIP code with a majority of Black residents. An algorithm trained on that biased data could learn to overcharge residents in that area.
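To make that concrete, here is a minimal sketch in Python of how a model trained on biased lending history could pick up the pattern. The loan amounts, the neighborhood flag and the interest rates below are invented for illustration only; they are not drawn from the Greenlining report or from any real lender.

```python
# Minimal sketch: a model fit on biased historical lending data learns the bias.
# All data and feature names here are hypothetical.
from sklearn.tree import DecisionTreeRegressor

# Hypothetical features: [loan_amount, in_majority_black_zip_code]
# The historical rates encode a pattern of overcharging one ZIP code.
X_train = [
    [200_000, 0], [250_000, 0], [180_000, 0],   # rates around 3%
    [200_000, 1], [250_000, 1], [180_000, 1],   # rates around 5%
]
y_train = [3.0, 3.1, 2.9, 5.0, 5.2, 4.9]

model = DecisionTreeRegressor().fit(X_train, y_train)

# Two otherwise identical applicants get different quotes purely because of
# the neighborhood flag the model learned from the biased history.
print(model.predict([[200_000, 0]]))  # roughly 3.0
print(model.predict([[200_000, 1]]))  # roughly 5.0
```

Nothing in that sketch mentions race directly, yet the neighborhood flag in the historical data is enough for the model to keep quoting one group of residents higher rates.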

In 2014, Amazon tried to develop a recruiting algorithm to rate the resumes of job candidates and predict who would do well. Even though gender was not intended as a factor in the algorithm, it still favored men and penalized resumes that included the names of all-women’s colleges. This likely happened because Amazon had a poor record of hiring and promoting women, causing the training data used for the algorithm to repeat the pattern.

Amazon’s researchers caught the problem and, when they found they couldn’t fix it, scrapped the algorithm. But how many such situations have gone unnoticed and uncorrected? No one knows.

Worse, our laws have not caught up with this new, treacherous form of discrimination. While both federal and state governments have anti-discrimination laws, they’re ineffective in this situation, since most were written before the internet was invented. And proving algorithmic bias is difficult, since the people being discriminated against may not know why or how the decision that harmed them was made.

Anti-discrimination laws must be updated to properly regulate algorithmic bias and discrimination. California’s legislature is leading the way by considering legislation that would bring more transparency and accountability to algorithms used in government programs.

Government at all levels should pay much more attention to this insidious form of discrimination.

Photo: Getty Images. Algorithms programmed into computer systems can reflect the same biases of the humans who created them.
