RECRUITMENT POINT


Can We Rely on Algorithmic Hiring?

Let’s agree on one point: humans are biased decision makers. Here’s a classic example: hiring managers tend to hire candidates who remind them of themselves, resulting in further homogeneity in the workplace. In the tech sector, this homogeneity has been especially pronounced: Google’s first diversity report, released a year ago, showed that only 2 percent of its staff are black and 3 percent Hispanic. Facebook announced it will try the NFL’s “Rooney Rule,” which requires that NFL teams interview minority candidates for coaching positions, in an effort to become more diverse.

One proposed solution is to remove some of those predispositions with a systematic analysis of data, i.e., to rely on algorithmic hiring. Companies administer personality tests during screening, then use data analysis to determine their ideal hires. What the algorithm looks for depends on what a company wants; a common approach is to use the test data to predict, for example, whether a candidate will quit or steal on the job.
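To make that concrete, here is a minimal sketch of the general idea: fit a simple classifier on assessment scores of past hires and use it to estimate a new candidate’s attrition risk. The feature names, numbers, and labels are illustrative assumptions, not any vendor’s actual model.

```python
# Minimal sketch (illustrative only): predicting attrition risk from
# pre-hire assessment scores with a simple classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical assessment scores for past hires:
# [conscientiousness, agreeableness, stated tenure intent]
X_past = np.array([
    [0.9, 0.7, 0.8],
    [0.4, 0.5, 0.2],
    [0.8, 0.6, 0.9],
    [0.3, 0.4, 0.1],
])
y_quit = np.array([0, 1, 0, 1])  # 1 = quit within a year (illustrative labels)

model = LogisticRegression().fit(X_past, y_quit)

# Score a new candidate's assessment the same way.
candidate = np.array([[0.7, 0.6, 0.5]])
print("Estimated attrition risk:", model.predict_proba(candidate)[0, 1])
```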

Algorithmic hiring has become the norm of late. Google used an algorithm to staff up rapidly, relying on a detailed survey to zero in on candidates who would fit into the workplace culture. One study of algorithmic hiring found that a simple equation was substantially better than humans at identifying high-performing workers. The result held across various industries and levels of employment, and the researchers attributed the outcome to humans giving careful consideration to unimportant details and using data about candidates inconsistently.

Now one company is reporting that algorithmic hiring can also enhance diversity. Infor Talent Science provides software that helps companies hire using behavioral data: it builds a predictive model from a client’s top performers, and candidates are then selected based on how closely they match that model. Drawing on data from 50,000 hires across its clients, the company found an average increase of 26 percent in African American and Hispanic hires across a range of industries and jobs.
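The “match candidates to top performers” idea can be illustrated with a small sketch. Infor’s actual modeling is proprietary, so the traits, numbers, and the use of a simple cosine-similarity score here are all assumptions made for illustration.

```python
# Illustrative sketch of matching candidates against a top-performer profile.
import numpy as np

# Behavioral-assessment vectors for current top performers (hypothetical traits).
top_performers = np.array([
    [0.90, 0.20, 0.80],
    [0.80, 0.30, 0.70],
    [0.85, 0.25, 0.90],
])
profile = top_performers.mean(axis=0)  # "ideal" profile = average of top performers

def match_score(candidate: np.ndarray) -> float:
    """Cosine similarity between a candidate's assessment vector and the profile."""
    return float(candidate @ profile /
                 (np.linalg.norm(candidate) * np.linalg.norm(profile)))

print(match_score(np.array([0.88, 0.30, 0.75])))  # higher = closer to top performers
```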

According to Infor, the effect holds regardless of industry, whether it’s a call center, a restaurant, or retail. In one Infor report, a wholesale client was able to increase Hispanic hires by 31 percent, and a restaurant client increased African American hires by 60 percent.

One of the caveats of Infor’s study is that its data covers only hires who disclosed their ethnic background. As with most surveys, checking the racial box is voluntary. Collecting racial data has long been tricky, as candidates regularly worry that it will result in racial discrimination. (The Census Bureau suffers from the same issue, and it is exploring different approaches to collecting data about race and origin.) But it’s not clear that minority candidates are undercounted: others may believe that disclosing race will attract diversity-minded employers.

So will hiring algorithms eliminate bias in the hiring process? Researchers warn that big data’s apparent objectivity can also conceal other biases built into the algorithm. Chelsea Barabas, a researcher at MIT’s Center for Civic Media, says:

Decisions based on algorithms are being used for everything from predicting workplace conduct to denying opportunity, in a way that can mask partiality while maintaining a patina of scientific objectivity. Similar concerns have been raised by other researchers, such as Kate Crawford, who has argued against the claim that big data does not oppress social groups.

There’s a lot of research on the reasons diversity is good for work culture: it increases efficiency; it increases critical thinking; it has even been shown to increase sales and generate more revenue. The question of whether workplace diversity is good appears to have been answered. But how do we accomplish such diversity?

These results may appear to show that algorithmic hiring can decrease bias, but an organization has to deliberately set out to do so. Infor’s results are impressive; however, very few companies are actively interested in increasing workplace diversity.

Hiring managers are good at specifying what’s required for a position and eliciting information from applicants, yet they’re terrible at measuring the outcomes. An analysis of numerous studies of candidate assessments shows that even the simplest equation beats human hiring decisions by no less than 25 percent. The results hold even with a large pool of candidates, regardless of the job function they’re being interviewed for.
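A “simple equation” in this literature is often nothing more than an equal-weighted sum of standardized assessment scores. The sketch below assumes three hypothetical measures and an applicant pool invented for illustration.

```python
# Equal-weighted composite of standardized assessment scores (illustrative).
from statistics import mean, stdev

def composite(pool: dict[str, list[float]], candidate: dict[str, float]) -> float:
    """Average the candidate's z-scores across all measures, weighted equally."""
    z = [(candidate[m] - mean(scores)) / stdev(scores) for m, scores in pool.items()]
    return sum(z) / len(z)

pool = {  # scores of all applicants on each measure (made-up numbers)
    "cognitive_test": [55, 70, 62, 80, 47],
    "structured_interview": [3.2, 4.1, 3.8, 4.5, 2.9],
    "work_sample": [60, 75, 68, 88, 52],
}
print(composite(pool, {"cognitive_test": 70,
                       "structured_interview": 4.1,
                       "work_sample": 75}))
```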

In a study by Brian S. Connelly of the University of Toronto, described in Harvard Business Review, it was found that the people making the call were more familiar with the company and had more information about the applicants than was included in the equation. The issue is that people are easily distracted by things that may be only marginally relevant, and they often use this information inconsistently.

Studies suggest that when evaluating people, 85 to 97 percent of hiring professionals rely to some degree on intuition. Hiring managers believe they can make the best choice by poring over a candidate’s file and looking into his or her eyes, no algorithm necessary. They would argue that an algorithm cannot substitute for a veteran’s vast knowledge.

That doesn’t mean it is appropriate to leave the decision-making process to the machines altogether. Companies can use an algorithmic hiring process to narrow the field before calling on human judgment to pick a few finalists for the job. Better still, have several managers weigh in on the final hiring decision. This way, you capture the benefits offered by the algorithm while catering to managers’ need to exercise their hiring power.
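As a sketch of that hybrid process, under assumed data structures: an algorithm shortlists the top-scoring candidates, and several managers then rate the finalists. The names, scores, and ratings below are hypothetical.

```python
# Hybrid pipeline: algorithmic shortlist, then averaged manager ratings (illustrative).
from statistics import mean

def shortlist(scores: dict[str, float], n: int = 5) -> list[str]:
    """Keep only the top-n candidates by algorithmic score."""
    return sorted(scores, key=scores.get, reverse=True)[:n]

def final_pick(finalists: list[str], manager_ratings: dict[str, list[float]]) -> str:
    """Average the ratings from several managers and pick the highest-rated finalist."""
    return max(finalists, key=lambda c: mean(manager_ratings[c]))

scores = {"Avery": 0.91, "Blake": 0.88, "Casey": 0.74,
          "Devon": 0.69, "Emery": 0.66, "Finley": 0.52}
finalists = shortlist(scores, n=3)
ratings = {"Avery": [4, 3, 5], "Blake": [5, 4, 4], "Casey": [3, 3, 4]}  # three managers
print(final_pick(finalists, ratings))
```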

Software like Doxa matches candidates with tech companies, and even with particular teams and managers, based on values, skills, and compatibility: for example, whether a team is more collaborative, or whether it is one where women’s opinions are taken more seriously.

In this way, Doxa has revealed aspects of working at companies that are never made public to candidates. This data, drawn from anonymous employee surveys, also covers timekeeping, weekly working hours, and which departments have the biggest gender pay gaps.

Another piece of software, Textio, uses machine learning and language analysis to break down job postings for companies like Starbucks and Barclays. Textio has identified more than 25,000 phrases that indicate gender bias. Language like “top-tier” and “aggressive,” and sports or military analogies like “mission critical,” diminishes the proportion of women who apply for a job, while language like “passion for learning” and “collaboration” attracts more women.
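A much-simplified illustration of the idea is a plain phrase lookup over a posting. Textio’s actual analysis uses machine learning over a far larger phrase set; the word lists below are assumptions drawn loosely from the examples above.

```python
# Flag gender-coded phrases in a job posting (simplified illustration, not Textio's method).
MASCULINE_CODED = {"top-tier", "aggressive", "mission critical"}
FEMININE_CODED = {"passion for learning", "collaboration"}

def audit_posting(text: str) -> dict[str, list[str]]:
    """Return which coded phrases from each list appear in the posting."""
    lowered = text.lower()
    return {
        "masculine_coded": [p for p in MASCULINE_CODED if p in lowered],
        "feminine_coded": [p for p in FEMININE_CODED if p in lowered],
    }

posting = "We need an aggressive, top-tier engineer for mission critical work."
print(audit_posting(posting))
```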

We cannot get overconfident relying on data; human expertise is still necessary when making hiring decisions.
