The Trentonian (Trenton, NJ)

U.S. civil rights enforcers warn employers against biased AI

- By Matt O’Brien

The federal government said Thursday that artificial intelligence technology to screen new job candidates or monitor worker productivity can unfairly discriminate against people with disabilities, sending a warning to employers that the commonly used hiring tools could violate civil rights laws.

The U.S. Justice Department and the Equal Employment Opportunity Commission jointly issued guidance to employers to take care before using popular algorithmic tools meant to streamline the work of evaluating employees and job prospects — but which could also potentially run afoul of the Americans with Disabilities Act.

“We are sounding an alarm regarding the dangers tied to blind reliance on AI and other technologies that we are seeing increasingly used by employers,” Assistant Attorney General Kristen Clarke of the department’s Civil Rights Division told reporters Thursday. “The use of AI is compounding the longstanding discrimination that jobseekers with disabilities face.”

Among the examples given of popular work-related AI tools were resume scanners, employee monitoring software that ranks workers based on keystrokes, game-like online tests to assess job skills and video interviewing software that measures a person’s speech patterns or facial expressions.

Such technology could potentially screen out people with speech impediments, severe arthritis that slows typing or a range of other physical or mental impairments, the officials said.

Tools built to automatically analyze workplace behavior can also overlook on-the-job accommodations — such as a quiet workstation for someone with post-traumatic stress disorder or more frequent breaks for a pregnancy-related disability — that enable employees to modify their work conditions to perform their jobs successfully.

Experts have long warned that AI-based recruitment tools — while often pitched as a way of eliminating human bias — can actually entrench bias if they’re taking cues from industries where racial and gender disparities are already prevalent.

The move to crack down on the harms they can bring to people with disabilities reflects a broader push by President Joe Biden’s administration to foster positive advancements in AI technology while reining in opaque and largely unregulated AI tools that are being used to make important decisions about people’s lives.

“We totally recognize that there’s enormous potential to streamline things,” said Charlotte Burrows, chair of the EEOC, which is responsible for enforcing laws against workplace discrimination. “But we cannot let these tools become a high-tech path to discrimination.”

A scholar who has researched bias in AI hiring tools said holding employers accountable for the tools they use is a “great first step,” but added that more work is needed to rein in the vendors that make these tools. Doing so would likely be a job for another agency, such as the Federal Trade Commission, said Ifeoma Ajunwa, a University of North Carolina law professor and founding director of its AI Decision-Making Research Program.

“There is now a recognition of how these tools, which are usually deployed as an anti-bias intervention, might actually result in more bias — while also obfuscating it,” Ajunwa said.

A Utah company that runs one of the best-known AI-based hiring tools — video interviewing service HireVue — said Thursday that it welcomes the new effort to educate workers, employers and vendors and highlighted its own work in studying how autistic applicants perform on its skills assessments.

“We agree with the EEOC and DOJ that employers should have accommodations for candidates with disabilities, including the ability to request an alternate path by which to be assessed,” said the statement from HireVue CEO Anthony Reynolds.
