Using artificial intelligence to find bias in AI
Startup among firms offering services to help root out problems
In 2018, Liz O’Sullivan and her colleagues at a prominent artificial intelligence startup began work on a system that could automatically remove nudity and other explicit images from the internet.
They sent millions of online photos to workers in India, who spent weeks adding tags to explicit material. The data paired with the photos would be used to teach AI software how to recognize indecent images. But once the photos were tagged, O’Sullivan and her team noticed a problem: The Indian workers had classified all images of same-sex couples as indecent.
For O’Sullivan, the moment showed how easily and often bias could creep into artificial intelligence. It was a “cruel game of Whac-a-Mole,” she said.
This month, O’Sullivan, a 36-year-old New Yorker, was named CEO of a new company, Parity. The startup is one of many organizations, including more than a dozen startups and some of the biggest names in tech, offering tools and services designed to identify and remove bias from AI systems.
Soon, businesses may need that help. In April, the Federal Trade Commission warned against the sale of AI systems that were racially biased or could prevent individuals from receiving employment, housing, insurance or other benefits. A week later, the European Union unveiled draft regulations that could punish companies for offering such technology.
It is unclear how regulators might police bias. This past week, the National Institute of Standards and Technology, a government research lab whose work often informs policy, released a proposal detailing how businesses can fight bias in AI, including changes in the way technology is conceived and built.
Many in the tech industry believe businesses must start preparing for a crackdown. “Some sort of legislation or regulation is inevitable,” said Christian Troncoso, senior director of legal policy for the Software Alliance, a trade group that represents some of the biggest and oldest software companies. “Every time there is one of these terrible stories about AI, it chips away at public trust and faith.”
Over the past several years, studies have shown that facial recognition services, health care systems and even talking digital assistants can be biased against women, people of color and other marginalized groups. Amid a growing chorus of complaints over the issue, some local regulators have already taken action.
In late 2019, state regulators in New York opened an investigation of UnitedHealth Group after a study found that an algorithm used by a hospital prioritized care for white patients over Black patients, even when the white patients were healthier. Last year, the state investigated the Apple Card credit service after claims it was discriminating against women. Regulators ruled that Goldman Sachs, which operated the card, did not discriminate.