Chicago Tribune (Sunday)

Bright minds, artificial and real, strain to fight bias in AI

By Cade Metz

In 2018, Liz O’Sullivan and her colleagues at a prominent artificial intelligence startup began work on a system that could automatically remove nudity and other explicit images from the internet.

They sent millions of online photos to workers in India, who spent weeks adding tags to explicit material. The data paired with the photos would be used to teach AI software how to recognize indecent images. But once the photos were tagged, O’Sullivan and her team noticed a problem: The Indian workers had classified all images of same-sex couples as indecent.

For O’Sullivan, the moment showed how easily — and often — bias could creep into artificial intelligence. It was a “cruel game of Whack-a-Mole,” she said.

In June, O’Sullivan, a 36-year-old New Yorker, was named CEO of a new company, Parity. It is one of many organizations, including more than a dozen startups and some of the biggest names in tech, offering tools and services designed to identify and remove bias from AI systems.

Soon, businesses may need that help. In April, the Federal Trade Commission warned against the sale of AI systems that were racially biased or could prevent individuals from receiving employment, housing, insurance or other benefits. A week later, the European Union unveiled draft regulations that could punish companies for offering such technology.

It is unclear how regulators might police bias.

This past week, the National Institute of Standards and Technology, a government research lab whose work often informs policy, released a proposal detailing how businesses can fight bias in AI, including changes in the way technology is conceived and built.

Many in the tech industry believe businesses must start preparing for a crackdown. “Some sort of legislation or regulation is inevitable,” said Christian Troncoso, senior director of legal policy for the Software Alliance, a trade group that represents some of the biggest and oldest software companies. “Every time there is one of these terrible stories about AI, it chips away at public trust and faith.”

Over the past several years, studies have shown that facial recognition services, health care systems and even talking digital assistants can be biased against women, people of color and other marginalized groups. Amid a growing chorus of complaints over the issue, some local regulators have already taken action.

In late 2019, state regulators in New York opened an investigation of UnitedHealth Group after a study found that an algorithm used by a hospital prioritized care for white patients over Black patients, even when the white patients were healthier. Last year, the state investigated the Apple Card credit service after claims it was discriminating against women. Regulators ruled that Goldman Sachs, which operated the card, did not discriminate, while the status of the UnitedHealth investigation is unclear.

A spokesperson for UnitedHealth, Tyler Mason, said the company’s algorithm had been misused by one of its partners and was not racially biased. Apple declined to comment.

More than $100 million has been invested over the past six months in companies exploring ethical issues involving artificial intelligence, after $186 million last year, according to PitchBook, a research firm that tracks financial activity.

But efforts to address the problem reached a tipping point recently when the Software Alliance offered a detailed framework for fighting bias in AI, including the recognition that some automated technologies require regular oversight from humans.

The trade group believes the document can help companies change their behavior and can show regulators and lawmakers how to control the problem.

Although they have been criticized for bias in their own systems, Amazon, IBM, Google and Microsoft also offer tools for fighting it.

O’Sullivan said there was no simple solution to bias in AI. Thornier still, some in the industry question whether the problem is as widespread or as harmful as she believes it is.

As O’Sullivan saw after the tagging done in India, bias can creep into a system when designers choose the wrong data or sort through it in the wrong way. Studies show that face-recognition services can be biased against women and people of color when they are trained on photo collections dominated by white men.

Liz O’Sullivan of Parity. (The New York Times)
