Amazon scraps ‘anti-women’ AI recruiting engine

Shanghai Daily - BUSINESS - TECHNOLOGY (Reuters)

AMAZON’S machine-learning specialists uncovered a big problem: their new recruiting engine did not like women, several people familiar with the project say.

The team had been building computer programs since 2014 to review job applicants’ resumes with the aim of mechanizing the search for top talent.

Automation has been key to Amazon’s e-commerce dominance, be it inside warehouses or driving pricing decisions.

The company’s experimental hiring tool used artificial intelligence to give job candidates scores ranging from one to five stars, much like shoppers rate products on Amazon.

“Everyone wanted this holy grail,” one person familiar with the project said. “They literally wanted it to be an engine where I’m going to give you 100 resumes, it will spit out the top five, and we’ll hire those.”

But by 2015, the company realized its new system was not rating candidates for software developer jobs and other technical posts in a gender-neutral way.

That is because Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.

In effect, Amazon’s system taught itself that male candidates were preferable. It penalized resumes that included the word “women’s,” as in “women’s chess club captain.”

And it downgraded graduates of two all-women’s colleges, according to people familiar with the matter, who did not specify the names of the schools.
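The mechanism is easy to reproduce in miniature. Below is a minimal, hypothetical Python sketch, not Amazon’s system, built with scikit-learn on entirely synthetic resumes and labels: a plain bag-of-words classifier trained on a skewed hiring history ends up assigning a negative weight to the token “women’s” without ever being told anything about gender.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Synthetic "hiring history": the successful resumes all came from one
# group, so a gendered token ends up correlated with the outcome label.
# All resumes, labels, and names here are invented for illustration.
resumes = [
    "software engineer java chess club captain",
    "backend developer python robotics lead",
    "systems programmer c++ hackathon winner",
    "software engineer java women's chess club captain smithvale college",
    "frontend developer javascript women's coding society smithvale college",
    "data engineer sql women's rugby team smithvale college",
]
hired = [1, 1, 1, 0, 0, 0]  # labels mirror the historical skew

# A custom token pattern keeps apostrophes, so "women's" survives as a token.
vec = CountVectorizer(token_pattern=r"(?u)\b[\w']+\b")
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The model was never told about gender; it learned a negative weight for
# the token purely from the correlation in the training data.
idx = vec.vocabulary_["women's"]
print('weight for "women\'s":', model.coef_[0, idx])
```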

Amazon edited the programs to make them neutral to these particular terms.

But that was no guarantee that the machines would not devise other ways of sorting candidates that could prove discriminatory.
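The same toy setup shows why term-by-term edits fall short: if a correlated proxy survives in the data (here, the fictional “smithvale” college from the sketch above), a retrained model simply shifts the negative weight onto it. Again, this is a synthetic illustration, not Amazon’s model.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Same synthetic data as the earlier sketch, but with the explicit token
# "women's" already edited out, mimicking a term-level fix.
resumes = [
    "software engineer java chess club captain",
    "backend developer python robotics lead",
    "systems programmer c++ hackathon winner",
    "software engineer java chess club captain smithvale college",
    "frontend developer javascript coding society smithvale college",
    "data engineer sql rugby team smithvale college",
]
hired = [1, 1, 1, 0, 0, 0]

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The correlated proxy still separates the classes, so the retrained
# model remains just as skewed even though the edited term is gone.
idx = vec.vocabulary_["smithvale"]
print("weight for 'smithvale':", model.coef_[0, idx])
```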

The Seattle company ultimately disbanded the team by the start of last year because executives lost hope for the project.

Amazon’s recruiters looked at the recommendations generated by the tool when searching for new hires, but never relied solely on those rankings, said several people familiar with the operation.

Amazon declined to comment on the recruiting engine or its challenges, but the company says it is committed to workplace diversity and equality.

The company’s experiment offers a case study in the limitations of machine learning.

It also serves as a lesson to the growing list of large companies, including Hilton Worldwide Holdings Inc and Goldman Sachs Group Inc, that are looking to automate portions of the hiring process.

Some 55 percent of US human resources managers said artificial intelligence, or AI, would be a regular part of their work within the next five years, according to a 2017 survey by talent software firm CareerBuilder.

Employers have long dreamed of harnessing technology to widen the hiring net and reduce reliance on subjective opinions of human recruiters. But computer scientists such as Nihar Shah, who teaches machine learning at Carnegie Mellon University, say there is still much work to do.

“How to ensure that the algorithm is fair, how to make sure the algorithm is really interpretable and explainable — that’s still quite far off,” he said.
