Bangkok Post

CURB YOUR ALGORITHM

Using AI to find bias in AI

- CADE METZ © 2021 THE NEW YORK TIMES COMPANY

In 2018, Liz O’Sullivan and her colleagues at a prominent artificial intelligence start-up began work on a system that could automatically remove nudity and other explicit images from the internet. They sent millions of online photos to workers in India, who spent weeks adding tags to explicit material. The data paired with the photos would be used to teach AI software how to recognise indecent images. But once the photos were tagged, O’Sullivan and her team noticed a problem — the Indian workers had classified all images of same-sex couples as indecent.

For O’Sullivan, the moment showed how easily — and often — bias could creep into artificial intelligence. It was a “cruel game of Whac-a-Mole”, she said.

This month, O’Sullivan, a 36-year-old New Yorker, was named CEO of a new company, Parity. The start-up is one of many organisations, including more than a dozen start-ups and some of the biggest names in tech, offering tools and services designed to identify and remove bias from AI systems.

Soon, businesses may need that help. In April, the US Federal Trade Commission warned against the sale of AI systems that were racially biased or could prevent individuals from receiving employment, housing, insurance or other benefits. A week later, the European Union unveiled draft regulations that could punish companies for offering such technology.

It is unclear how regulators might police bias. This past week, the National Institute of Standards and Technology, a government research lab whose work often informs policy, released a proposal detailing how businesses can fight bias in AI, including changes in the way technology is conceived and built.

Many in the tech industry believe businesses must start preparing for a crackdown.

“Some sort of legislation or regulation is inevitable,” said Christian Troncoso, senior director of legal policy for the Software Alliance, a trade group that represents some of the biggest and oldest software companies. “Every time there is one of these terrible stories about AI, it chips away at public trust and faith.”

Over the past several years, studies have shown that facial recognition services, healthcare systems and even talking digital assistants can be biased against women, people of colour and other marginalised groups. Amid a growing chorus of complaints over the issue, some local regulators have already taken action.

In late 2019, state regulators in New York opened an investigation of UnitedHealth Group after a study found that an algorithm used by a hospital prioritised care for white patients over black patients, even when the white patients were healthier. Last year, the state investigated the Apple Card credit service after claims it was discriminating against women. Regulators ruled that Goldman Sachs, which operated the card, did not discriminate, while the status of the UnitedHealth investigation is unclear.

A spokesperson for UnitedHealth, Tyler Mason, said the company’s algorithm had been misused by one of its partners and was not racially biased. Apple declined to comment.

More than US$100 million (3.22 billion baht) has been invested over the past six months in companies exploring ethical issues involving artificial intelligence, after $186 million last year, according to PitchBook, a research firm that tracks financial activity.

But efforts to address the problem reached a tipping point recently when the Software Alliance offered a detailed framework for fighting bias in AI, including the recognition that some automated technologies require regular oversight from humans. The trade group believes the document can help companies change their behaviour and can show regulators and lawmakers how to control the problem.

Although they have been criticised for bias in their own systems, Amazon, IBM, Google and Microsoft also offer tools for fighting it.

O’Sullivan said there was no simple solution to bias in AI. A thornier issue is that some in the industry question whether the problem is as widespread or as harmful as she believes it is.

“Changing mentalities does not happen overnight — and that is even more true when you’re talking about large companies,” she said. “You are trying to change not just one person’s mind but many minds.”

When she started advising businesses on AI bias more than two years ago, O’Sullivan was often met with scepticism. Many executives and engineers espoused what they called “fairness through unawareness”, arguing that the best way to build equitable technology was to ignore issues like race and gender.

Increasingly, companies were building systems that learned tasks by analysing vast amounts of data, including photos, sounds, text and stats. The belief was that if a system learned from as much data as possible, fairness would follow.

But as O’Sullivan saw after the tagging done in India, bias can creep into a system when designers choose the wrong data or sort through it in the wrong way. Studies show that face-recognition services can be biased against women and people of colour when they are trained on photo collections dominated by white men.

Designers can be blind to these problems. The workers in India — where gay relationships were still illegal at the time and where attitudes towards gays and lesbians were very different from those in the United States — were classifying the photos as they saw fit.

O’Sullivan saw the flaws and pitfalls of artificial intelligence while working for Clarifai, the company that ran the tagging project. She said she had left the company after realising it was building systems for the military that she believed could eventually be used to kill. Clarifai did not respond to a request for comment.

She now believes that after years of public complaints over bias in AI — not to mention the threat of regulation — attitudes are changing. In its new framework for curbing harmful bias, the Software Alliance warned against fairness through unawareness, saying the argument did not hold up.

“They are acknowledging that you need to turn over the rocks and see what is underneath,” O’Sullivan said.

Still, there is resistance. She said a recent clash at Google, where two ethics researchers were pushed out, was indicative of the situation at many companies. Efforts to fight bias often clash with corporate culture and the unceasing push to build new technology, get it out the door and start making money.

It is also still difficult to know just how serious the problem is.

“We have very little data needed to model the broader societal safety issues with these systems, including bias,” said Jack Clark, one of the authors of the AI Index, an effort to track AI technology and policy across the globe. “Many of the things that the average person cares about — such as fairness — are not yet being measured in a disciplined or a large-scale way.”

O’Sullivan, a philosophy major in college and a member of the American Civil Liberties Union, is building her company around a tool designed by Rumman Chowdhury, a well-known AI ethics researcher who spent years at the business consultancy Accenture before joining Twitter.

While other start-ups, like Fiddler AI and Weights and Biases, offer tools for monitoring AI services and identifying potentially biased behaviour, Parity’s technology aims to analyse the data, technologies and methods a business uses to build its services and then pinpoint areas of risk and suggest changes.

The tool uses artificial intelligence technology that can be biased in its own right, showing the double-edged nature of AI — and the difficulty of O’Sullivan’s task.

Tools that can identify bias in AI are imperfect, just as AI is imperfect. But the power of such a tool, she said, is to pinpoint potential problems — to get people looking closely at the issue.

Ultimately, she explained, the goal is to create a wider dialogue among people with a broad range of views. The trouble comes when the problem is ignored — or when those discussing the issues carry the same point of view.

“You need diverse perspectives. But can you get truly diverse perspectives at one company?” O’Sullivan asked. “It is a very important question I am not sure I can answer.”

CEO Liz O’Sullivan of the start-up Parity.
