The Mercury News

Using artificial intelligence to find bias in AI

Startup among firms offering services to help root out problems

- By Cade Metz

In 2018, Liz O’Sullivan and her colleagues at a prominent artificial intelligence startup began work on a system that could automatically remove nudity and other explicit images from the internet.

They sent millions of online photos to workers in India, who spent weeks adding tags to explicit material. The data paired with the photos would be used to teach AI software how to recognize indecent images. But once the photos were tagged, O’Sullivan and her team noticed a problem: The Indian workers had classified all images of same-sex couples as indecent.

For O’Sullivan, the moment showed how easily and often bias could creep into artificial intelligence. It was a “cruel game of Whac-a-Mole,” she said.

This month, O’Sullivan, a 36-year-old New Yorker, was named CEO of a new company, Parity. The startup is one of many organizations, including more than a dozen startups and some of the biggest names in tech, offering tools and services designed to identify and remove bias from AI systems.

Soon, businesses may need that help. In April, the Federal Trade Commission warned against the sale of AI systems that were racially biased or could prevent individuals from receiving employment, housing, insurance or other benefits. A week later, the European Union unveiled draft regulations that could punish companies for offering such technology.

It is unclear how regulators might police bias. This past week, the National Institute of Standards and Technology, a government research lab whose work often informs policy, released a proposal detailing how businesses can fight bias in AI, including changes in the way technology is conceived and built.

Many in the tech industry believe businesses must start preparing for a crackdown. “Some sort of legislation or regulation is inevitable,” said Christian Troncoso, senior director of legal policy for the Software Alliance, a trade group that represents some of the biggest and oldest software companies. “Every time there is one of these terrible stories about AI, it chips away at public trust and faith.”

Over the past several years, studies have shown that facial recognition services, health care systems and even talking digital assistants can be biased against women, people of color and other marginalized groups. Amid a growing chorus of complaints over the issue, some local regulators have already taken action.

In late 2019, state regulators in New York opened an investigation of UnitedHealth Group after a study found that an algorithm used by a hospital prioritized care for white patients over Black patients, even when the white patients were healthier. Last year, the state investigated the Apple Card credit service after claims it was discriminating against women. Regulators ruled that Goldman Sachs, which operated the card, did not discriminate.

NATHAN BAJAR — THE NEW YORK TIMES ARCHIVES: Liz O’Sullivan, chief executive of the startup Parity, said it had been a challenge to persuade some to be more concerned about bias.
