Amazon ditched AI recruiting tool that favored men for technical jobs


Amazon’s machine-learning specialists uncovered a big problem: their new recruiting engine did not like women.

The team had been building computer programs since 2014 to review job applicants’ résumés, with the aim of mechanizing the search for top talent, five people familiar with the effort told Reuters.

Automation has been key to Amazon’s e-commerce dominance, be it inside warehouses or driving pricing decisions. The company’s experimental hiring tool used artificial intelligence to give job candidates scores ranging from one to five stars – much as shoppers rate products on Amazon, some of the people said.

“Everyone wanted this holy grail,” one of the people said. “They literally wanted it to be an engine where I’m going to give you 100 résumés, it will spit out the top five, and we’ll hire those.”

But by 2015, the company realized its new system was not rating candidates for software developer jobs and other technical posts in a gender-neutral way.

That is because Amazon’s computer models were trained to vet applicants by observing patterns in résumés submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.

In effect, Amazon’s system taught itself that male candidates were preferable. It penalized résumés that included the word “women’s”, as in “women’s chess club captain”. And it downgraded graduates of two all-women’s colleges, according to people familiar with the matter.
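Reuters does not describe the models’ internals, but the failure mode is straightforward to reproduce in miniature. The sketch below is a toy illustration using synthetic résumé snippets and scikit-learn, not Amazon’s system: a classifier fit to historically skewed hiring labels ends up giving a negative weight to a token such as “women’s” simply because it co-occurs with past rejections.

```python
# Illustrative sketch only -- not Amazon's system. It shows how a text
# classifier trained on historical hiring outcomes that skew male can
# learn a negative weight for a token like "women's". Data are synthetic.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    ("software engineer java python", 1),                         # historically hired
    ("backend developer java aws", 1),                            # historically hired
    ("software engineer python women's chess club captain", 0),   # historically rejected
    ("developer python women's coding society", 0),               # historically rejected
]
texts, hired = zip(*resumes)

# Keep the apostrophe so "women's" survives tokenization intact.
vec = CountVectorizer(token_pattern=r"[a-z']+")
X = vec.fit_transform(texts)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weights: the gendered token picks up a negative
# coefficient purely because of the skew in the historical labels.
for term, coef in zip(vec.get_feature_names_out(), model.coef_[0]):
    print(f"{term:12s} {coef:+.2f}")
```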

Amazon edited the programs to make them neutral to these particular terms. But that was no guarantee that the machines would not devise other ways of sorting candidates that could prove discriminatory, the people said.
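The proxy problem the sources allude to can be shown the same way. In the hypothetical snippet below (again synthetic data; “smithfield college” is an invented stand-in for any school or activity correlated with gender), no explicitly gendered word appears, yet a correlated token still absorbs the negative signal.

```python
# Illustrative only: scrubbing flagged words does not remove the bias if
# other tokens are correlated with them in the training data. Synthetic
# data; "smithfield" is an invented stand-in for, say, a college name.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "software engineer java python",
    "backend developer java aws",
    "software engineer python smithfield college alumna",
    "developer python smithfield college coding society",
]
hired = [1, 1, 0, 0]  # historical outcomes, skewed as before

vec = CountVectorizer()
model = LogisticRegression().fit(vec.fit_transform(texts), hired)

for term, coef in zip(vec.get_feature_names_out(), model.coef_[0]):
    print(f"{term:12s} {coef:+.2f}")
# "smithfield" and "alumna" end up with negative weights because they only
# appear on the historically rejected résumés.
```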

The Seattle company ultimately disbanded the team by the start of last year because executives lost hope for the project, according to the people, who spoke on condition of anonymity. Amazon’s recruiters looked at the recommendations generated by the tool when searching for new hires, but never relied solely on those rankings, they said.

Amazon declined to comment on the recruiting engine or its challenges, but the company says it is committed to workplace diversity and equality.

The company’s experiment, which Reuters is first to report, offers a case study in the limitations of machine learning. It also serves as a lesson to the growing list of large companies including Hilton Worldwide Holdings and Goldman Sachs that are looking to automate portions of the hiring process.

Some 55% of US human resources managers said artificial intelligence, or AI, would be a regular part of their work within the next five years, according to a 2017 survey by talent software firm CareerBuilder.

Masculine language

Amazon’s experiment began at a pivotal moment for the world’s largest online retailer. Machine learning was gaining traction in the technology world, thanks to a surge in low-cost computing power. And Amazon’s Human Resources department was about to embark on a hiring spree; since June 2015, the company’s global headcount has more than tripled to 575,700 workers, regulatory filings show.

So it set up a team in Amazon’s Edinburgh engineering hub that grew to around a dozen people. Their goal was to develop AI that could rapidly crawl the web and spot candidates worth recruiting, the people familiar with the matter said.

The group created 500 computer models focused on specific job functions and locations. They taught each to recognize some 50,000 terms that were found on past candidates’ résumés. The algorithms learned to assign little significance to skills that were common across IT applicants, such as the ability to write various computer codes, the people said.

Instead, the technology favored candidates who described themselves using verbs more commonly found on male engineers’ résumés, such as “executed” and “captured”, one person said.
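The downweighting of ubiquitous skills that the sources describe is, in spirit, what standard inverse-document-frequency weighting does. The sketch below uses ordinary TF-IDF from scikit-learn as a stand-in; Reuters does not say which weighting scheme Amazon actually used, and the résumé snippets are invented.

```python
# Sketch of the kind of term weighting described above: terms that appear on
# nearly every résumé get little weight, rarer terms get more. Standard
# TF-IDF is used as a stand-in; the data are synthetic.
from sklearn.feature_extraction.text import TfidfVectorizer

resumes = [
    "python developer executed migration to microservices",
    "python engineer built internal tooling",
    "python analyst maintained reporting pipeline",
    "python developer captured requirements and shipped features",
]

vec = TfidfVectorizer()
vec.fit(resumes)

# "python" appears in every document, so its inverse-document-frequency sits
# at the floor value; "executed" and "captured" appear once each and score higher.
for term in ["python", "developer", "executed", "captured"]:
    idx = vec.vocabulary_[term]
    print(f"{term:10s} idf={vec.idf_[idx]:.2f}")
```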

Gender bias was not the only issue. Problems with the data that underpinned the models’ judgments meant that unqualified candidates were often recommended for all manner of jobs, the people said. With the technology returning results almost at random, Amazon shut down the project, they said.

The problem or the cure?

Other companies are forging ahead, underscoring the eagerness of employers to harness AI for hiring.

Kevin Parker, chief executive of HireVue, a startup near Salt Lake City, said automation is helping companies look beyond the same recruiting networks upon which they have long relied. His firm analyzes candidates’ speech and facial expressions in video interviews to reduce reliance on résumés.

“You weren’t going back to the same old places; you weren’t going back to just Ivy League schools,” Parker said. His company’s customers include Unilever PLC and Hilton.

Goldman Sachs has created its own résumé analysis tool that tries to match candidates with the division where they would be the “best fit”, the company said.

LinkedIn, the world’s largest professional network, has gone further. It offers employers algorithmic rankings of candidates based on their fit for job postings on its site.

Still, John Jersin, vice-president of LinkedIn Talent Solutions, said the service is not a replacement for traditional recruiters.

“I certainly would not trust any AI system today to make a hiring decision on its own,” he said. “The technology is just not ready yet.”

Some activists say they are concerned about transparency in AI. The American Civil Liberties Union is currently challenging a law that allows criminal prosecution of researchers and journalists who test hiring websites’ algorithms for discrimination.

“We are increasingly focusing on algorithmic fairness as an issue,” said Rachel Goodman, a staff attorney with the Racial Justice Program at the ACLU. Still, Goodman and other critics of AI acknowledged it could be exceedingly difficult to sue an employer over automated hiring; job candidates might never know it was being used.

As for Amazon, the company managed to salvage some of what it learned from its failed AI experiment. It now uses a “much watered-down version” of the recruiting engine to help with some rudimentary chores, including culling duplicate candidate profiles from databases, one of the people familiar with the project said.

Another said a new team in Edinburgh has been formed to give automated employment screening another try, this time with a focus on diversity.

Amazon’s automated hiring tool was found to be inadequate after penalizing the résumés of female candidates. Photograph: Brian Snyder/Reuters
