Amazon scraps secret AI tool that showed bias against women


SAN FRANCISCO — Amazon.com Inc's machine-learning specialists uncovered a big problem: their new recruiting engine did not like women.

The team had been building computer programs since 2014 to review job applicants' resumes with the aim of mechanizing the search for top talent, five people familiar with the effort told Reuters.

Automation has been key to Amazon's e-commerce dominance, be it inside warehouses or driving pricing decisions. The company's experimental hiring tool used artificial intelligence to give job candidates scores ranging from one to five stars, much like shoppers rate products on Amazon, some of the people said.

"Everyone wanted this holy grail," one of the people said. "They literally wanted it to be an engine where I'm going to give you 100 resumes, it will spit out the top five, and we'll hire those."

But by 2015, the company realized its new system was not rating candidates for software developer jobs and other technical posts in a gender-neutral way.

That is because Amazon's computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.

In effect, Amazon's system taught itself that male candidates were preferable. It penalized resumes that included the word "women's," as in "women's chess club captain." And it downgraded graduates of two all-women's colleges, according to people familiar with the matter. They did not specify the names of the schools.
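How a scoring model picks up such a penalty can be shown with a minimal sketch, assuming a toy bag-of-words classifier built with Python's scikit-learn; the resumes, hire outcomes, and the skew in them are all invented for illustration and are not Amazon's data or method.

    # Hypothetical sketch, not Amazon's system: a toy classifier fit to
    # invented historical resumes whose hire outcomes are skewed against
    # the token "women" learns a negative weight for that token.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    resumes = [
        "software engineer python chess club captain",
        "software engineer java systems",
        "software engineer women's chess club captain",
        "data engineer women's coding society",
        "backend developer distributed systems",
        "data engineer python analytics",
    ]
    hired = [1, 1, 0, 0, 1, 1]  # invented historical outcomes, skewed

    vec = CountVectorizer()
    X = vec.fit_transform(resumes)  # bag-of-words features
    model = LogisticRegression().fit(X, hired)

    # The learned coefficient for "women" is negative: the model has
    # encoded the historical skew as a penalty on the word itself.
    idx = vec.vocabulary_["women"]
    print("learned weight for 'women':", model.coef_[0, idx])

Nothing in this sketch mentions gender explicitly; the penalty falls out of the skewed training data alone.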

Amazon edited the programs to make them neutral to these particular terms. But that was no guarantee that the machines would not devise other ways of sorting candidates that could prove discriminatory, the people said.
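The same toy setup, again with invented data, suggests why editing out explicit terms is no guarantee: if another token (here "smithdale," a made-up school name) co-occurs with the scrubbed resumes, the model simply shifts the penalty onto that proxy.

    # Hypothetical continuation of the sketch above: the explicit term is
    # scrubbed, but "smithdale" (an invented proxy, e.g. a school name)
    # co-occurs with the same resumes and absorbs the same penalty.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    resumes = [
        "software engineer python chess club captain",
        "software engineer java systems",
        "software engineer chess club captain smithdale college",
        "data engineer coding society smithdale college",
        "backend developer distributed systems",
        "data engineer python analytics",
    ]
    hired = [1, 1, 0, 0, 1, 1]  # same invented outcomes as before

    vec = CountVectorizer()
    X = vec.fit_transform(resumes)
    model = LogisticRegression().fit(X, hired)

    # The scrubbed term is gone from the vocabulary, yet the proxy token
    # now carries the negative weight in its place.
    idx = vec.vocabulary_["smithdale"]
    print("learned weight for 'smithdale':", model.coef_[0, idx])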

The Seattle company ultimately disbanded the team by the start of last year because executives lost hope for the project, according to the people, who spoke on condition of anonymity. Amazon's recruiters looked at the recommendations generated by the tool when searching for new hires, but never relied solely on those rankings, they said.

Amazon declined to comment on the technology's challenges, but said the tool "was never used by Amazon recruiters to evaluate candidates." The company did not elaborate further. It did not dispute that recruiters looked at the recommendations generated by the recruiting engine.

The company's experiment, which Reuters is first to report, offers a case study in the limitations of machine learning. It also serves as a lesson to the growing list of large companies, including Hilton Worldwide Holdings Inc and Goldman Sachs Group Inc, that are looking to automate portions of the hiring process.

Some 55% of US human resources managers said artificial intelligence, or AI, would be a regular part of their work within the next five years, according to a 2017 survey by talent software firm CareerBuilder.
