Amazon ditches AI recruitment tool that ‘learnt to be sexist’

The Daily Telegraph - Business - Technology Intelligence - By James Cook

AMAZON has scrapped a “sexist” internal tool that used artificial intelligence to sort through job applications.

The program was created by a team at Amazon’s Edinburgh office in 2014 as a way to sort through CVs and pick out the most promising candidates.

However, it taught itself to prefer male candidates over female ones, members of the team told Reuters.

They noticed that it was penalising CVs that included the word “women’s”, such as “women’s chess club captain”. It also reportedly downgraded graduates of two all-women’s colleges.

The problem stemmed from the fact that the system was trained on CVs submitted to the company over a 10-year period, most of which came from men.

The AI was tweaked in an attempt to fix the bias, but last year Amazon lost faith in its ability to be neutral and abandoned the project. Amazon recruiters are believed to have looked at the system’s recommendations when hiring, but did not rely on its rankings. Currently, women make up 40pc of Amazon’s workforce.

Stevie Buckley, the co-founder of UK job website Honest Work, which is used by companies such as Snapchat to recruit for technology roles, said: “The basic premise of expecting a machine to identify strong job applicants based on historic hiring practices at your company is a sure-fire method to rapidly scale inherent bias and discriminatory recruitment practices.”

The danger of inherent bias in the use of algorithms is a common problem in the technology industry. Algorithms are not explicitly programmed to be biased, but can become unfair through the data on which they are trained.
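The mechanism can be shown in a few lines of code. The sketch below is purely illustrative and is not Amazon’s system: the toy CVs, the hiring outcomes and the choice of a simple scikit-learn classifier are all invented assumptions. Because the word “women’s” happens to correlate with past rejections in the training data, the model learns a negative weight for it.

```python
# A toy illustration of how skewed training data produces a biased model.
# All CVs, hiring outcomes and the model choice are invented assumptions;
# this is not Amazon's actual system.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

cvs = [
    "chess club captain software engineer",
    "women's chess club captain software engineer",
    "rowing team captain data scientist",
    "women's rowing team captain data scientist",
]
hired = [1, 0, 1, 0]  # biased historical decisions, not a measure of merit

vec = CountVectorizer()
X = vec.fit_transform(cvs)
model = LogisticRegression().fit(X, hired)

# The model assigns a negative weight to the token "women": it has learnt
# the historical bias, not anything about candidate quality.
weights = dict(zip(vec.get_feature_names_out(), model.coef_[0]))
print(weights["women"])  # negative
```

Nothing in the code tells the model to discriminate; the unfairness comes entirely from the historical labels it is fitted to.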

Jessica Rose, a technical manager at education start-up FutureLearn and technology speaker, said: “The value of AI as it’s used in recruitment is limited by human bias. Developers and AI specialists carry the same biases as talent professionals, but we’re often not asked to interrogate or test for these during the development process.”

Last month, IBM launched a tool that is designed to detect bias in AI. The Fairness 360 Kit allows developers to see clearly how their algorithms work and which pieces of data are used.
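As a minimal sketch of the kind of check such a toolkit enables, the snippet below uses IBM’s open-source aif360 Python package to measure how hiring outcomes differ between two groups; the toy records are invented assumptions for illustration.

```python
# A minimal fairness check with IBM's open-source aif360 package;
# the toy hiring records below are invented for illustration.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy records: sex (1 = male, 0 = female) and the hiring decision.
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "hired": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# Ratio of favourable outcomes for women vs men; 1.0 would mean parity.
print(metric.disparate_impact())               # 0.25 / 0.75 = 0.33 here
print(metric.statistical_parity_difference())  # 0.25 - 0.75 = -0.5 here
```

A disparate-impact ratio well below 1.0, as in this toy data, is the kind of red flag the toolkit is designed to surface before a model reaches production.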

“Considering Amazon’s exhaustive resources and talented team of engineers,” Mr Buckley said, “the fact their AI recruiting tool failed miserably suggests we should maintain a default scepticism towards any organisation that claims to have produced an effective AI tool for recruitment.”

Amazon declined to comment.
