National Post

When robots discriminate, are owners at fault?

‘Look under the hood,’ legal experts suggest

Julius Melnitzer, Financial Post

Artificial intelligence (AI), it seems, has become the cutting-edge target for proponents of diversity in the workplace.

Some experts claim that AI is increasingly biased against women and nonwhite people. Even robots, they claim, are being sexist and racist.

The bias may not be deliberate, but in some ways that makes things worse. Many believe, quite reasonably, that unconscious bias is the invisible enemy of workplace diversity. If so, artificial intelligence has the potential to wreak havoc on diversity initiatives.

But what if the agent of bias, the AI software, has no consciousness and certainly no conscience? Can employers who use the software be held legally accountable for its biases? As artificial intelligence worms its way into the business world’s infrastructure, the problem has become one of growing proportions.

“The difficulty is that today’s software is solving problems that have traditionally been left to humans, like human resource tools for hiring, promotion and firing, programs for credit scoring, and public safety inquiries into the likelihood of a particular person or group committing various crimes,” says Maya Medeiros, a patent and trademark lawyer in Norton Rose Fulbright Canada LLP’s Toronto office, who has extensive experience in artificial intelligence and a degree in mathematics and computer science.

“Some companies are even developing algorithms for sentencing in criminal cases.”

Even though employers may not be aware of the intricacies of biases inherent in particular software, they may have a duty to exercise reasonable care not to use discriminatory programs.

“Employers won’t be able to get away with saying ‘the tool did it,’ because there is often a way for them to evaluate the tool, at least in a limited fashion,” Medeiros says.

Sara Jodka, a lawyer in Dickinson Wright PLLC’s office in Columbus, Ohio, who offers preventive counselling services to employers, says employers should “look under the hood” of the technology.

She says companies should determine that the software uses an appropriate range of “data sets,” the criteria fed into the software that power its determinations.

Absent an appropriate range of data sets, AI is capable of discriminating across broad categories.

For example, software that searches for such factors as “periods of long unemployment” could discriminate against single mothers and parents in general, or perhaps against veterans of the armed forces. Similarly, AI with cognitive emotional components that analyze video interviews, messages or answers to questions may discriminate against individuals with physical and mental challenges.

“Employers need to ensure that AI embeds proper values, that its values are transparent and that there is accountability, in the sense of identifying those responsible for harm caused by the system,” Medeiros says.

Training the software properly is key as well.

“Good AI learns and evolves over time through machine learning,” Medeiros adds. “But unless the training data reflects diverse values, the employer may be creating or exacerbating a tool that doesn’t embed the right values.”

Following through on this type of investigation and training can be a problem, however, especially for smaller businesses that may have no in-house technological expertise or lack the resources to hire outside providers.

“Ultimately, companies providing or supporting AI solutions will have to adopt a more transparent framework,” Medeiros says. “It doesn’t have to be at code level, which can cause trade secret problems. But developers could provide at least the basic social assumptions in the software, as well as training data.”

Transparency requirements are already working their way into regulation. The U.S. Food and Drug Administration, for example, has indicated that it will allow the use of AI in medical devices only where the developers enable independent review of the software’s limitations, models and machine-learning processes.

In any event, Jodka suggests that employers take advantage of their leverage in contractual negotiations to seek indemnity from AI developers.

“Because it may be hard to determine precisely the extent to which the developers or the data sets are prone to blind biases, employers should contract around liability by demanding tight clauses fully indemnifying them against damages occasioned by discriminatory technology,” she says.

From a developer’s perspective, Medeiros suggests that having a diverse set of employees can go a long way.

“Bias comes in at the human stage, so utilizing a diverse set of developers helps balance a group’s blind spots,” Medeiros says.

“Developers working in the human resources space should seek expert input on the social as well as the technical side.”

LUKE MACGREGOR / BLOOMBERG FILES: An attendee appears to slide his hand through the holographic head of an intelligent mannequin, manufactured by Headworks, at the Artificial Intelligence Congress in London.
