From Bias to Better Decisions

Data can be a highly effective decision-making tool. But it can also make us complacent. Leaders need to be aware of three common pitfalls.

By Megan MacGarvie and Kristina McElheran

Data analysis can be an effective way to sort through complexity and assist our judgment when it comes to making decisions. But even with impressively large data sets and the best analytics tools, we are still vulnerable to a range of decision-making pitfalls, especially when information overload leads us to take shortcuts in reasoning. As a result, in some instances, data and analytics actually make matters worse.

Psychologists, behavioural economists and other scholars have identified several common decision-making traps, many of which stem from the fact that people don’t carefully process every piece of information in every decision. Instead, we rely on heuristics: simplified procedures that allow us to make decisions in the face of uncertainty, or when extensive analysis is too costly or time-consuming. These mental shortcuts lead us to believe that we are making sound decisions when, in fact, we are making systematic mistakes. What’s more, human brains are wired for certain biases that creep in and distort our choices, typically without our awareness.

There are three main cognitive traps that regularly bias decision-making, even when it is informed by the best data. We will examine each in detail and provide suggestions for avoiding them.

TRAP #1: THE CONFIRMATION TRAP

When we pay more attention to findings that align with our prior beliefs, while ignoring other facts and patterns in the data, we fall into the confirmation trap. With a huge data set and numerous correlations between variables, analyzing all possible correlations is often both costly and counterproductive. Even with smaller data sets, it can be easy to inadvertently focus on correlations that confirm our expectations of ‘how the world should work’ and dismiss counterintuitive or inconclusive patterns in the data when they don’t align.

Consider the following example: In the late 1960s and early 1970s, researchers conducted one of the most well-designed studies on how different types of fats affect heart health and mortality. But the results of this study, known as the Minnesota Coronary Experiment, were not published at the time, and a recent New York Times article suggests that this might have been because they contradicted the beliefs of both the researchers and the medical establishment. In fact, it wasn’t until recently that the medical journal BMJ published a piece referencing this data, when growing skepticism about the relationship between saturated fat consumption and heart disease led researchers to analyze data from the original experiment, more than 40 years later.

These and similar findings cast doubt on decades of unchallenged medical advice to avoid saturated fats. While it is unclear whether one experiment would have changed standard dietary and health recommendations, this example demonstrates that even with the best possible data, those looking at the numbers can ignore important facts when they contradict the dominant paradigm or don’t confirm their beliefs, with potentially troublesome results.

Confirmation bias becomes that much harder to avoid when individuals face pressure from bosses and peers. Organizations frequently reward employees who can provide empirical support for existing managerial preferences. Those who decide which parts of the data to examine and present to senior managers may feel compelled to choose only the evidence that reinforces what their supervisors want to see or that confirms a prevalent attitude within the firm.

OUR ADVICE: To get a fair assessment of what the data has to say, don’t avoid information that counters your (or your boss’s) beliefs. Instead, embrace it by doing the following:

• Specify in advance the data and analytical approaches on which you will base your decision, to reduce the temptation to ‘cherry-pick’ findings that agree with your prejudices.

• Actively seek out findings that disprove your beliefs. Ask yourself: ‘If my expectations are wrong, what pattern would I likely see in the data?’

• Enlist a skeptic to help you. Seek out people who like to play ‘devil’s advocate’, or assign contrary positions for active debate.

• Don’t automatically dismiss findings that fall below your threshold for statistical or practical significance. Both noisy relationships (those with large standard errors) and small but precisely measured ones can point to flaws in your beliefs and presumptions. Ask yourself: ‘What would it take for this to appear important?’ Make sure your key takeaway is not sensitive to reasonable changes in your model or sample size.

• Assign multiple independent teams to analyze the data separately. Do they come to similar conclusions? If not, isolate and study the points of divergence to determine whether the differences are due to error, inconsistent methods or bias.

• Treat your findings like predictions, and test them. If you uncover a correlation from which you think your organization can profit, use an experiment to validate that correlation (see the sketch below).
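For instance, here is a minimal sketch, in Python, of how such a validation experiment might be analyzed. The scenario, the spend figures and the effect size are all hypothetical; the point is simply that random assignment plus a basic significance check (here, a permutation test) tells you whether the correlation survives contact with an experiment.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical experiment: customers are randomly assigned to receive a
# discount email (treatment) or not (control), and their spend is recorded.
# All numbers are simulated purely for illustration.
spend_treatment = rng.normal(loc=105, scale=20, size=500)
spend_control = rng.normal(loc=100, scale=20, size=500)

observed_diff = spend_treatment.mean() - spend_control.mean()

# Permutation test: if the treatment were irrelevant, shuffling the group
# labels should produce differences as large as the observed one fairly often.
pooled = np.concatenate([spend_treatment, spend_control])
n_treat = len(spend_treatment)
n_permutations = 10_000
extreme = 0
for _ in range(n_permutations):
    rng.shuffle(pooled)
    diff = pooled[:n_treat].mean() - pooled[n_treat:].mean()
    if abs(diff) >= abs(observed_diff):
        extreme += 1

p_value = extreme / n_permutations
print(f"Observed difference in mean spend: {observed_diff:.2f}")
print(f"Two-sided permutation p-value: {p_value:.4f}")
```

If the experimental estimate is near zero or statistically indistinguishable from it, the original correlation was probably driven by selection or chance rather than something the organization can profit from.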

TRAP #2: THE OVERCONFIDENCE TRAP

In their book Judgment in Managerial Decision Making, behavioural researchers Max Bazerman and Don Moore refer to overconfidence as ‘the mother of all biases’. Time and time again, psychologists have found that decision-makers are too sure of themselves. We tend to assume that the accuracy of our judgments, or the probability of success in our endeavours, is more favourable than the data would suggest.

When there are risks, we bias our reading of the odds to assume we’ll come out on the winning side. Senior decision-makers who have been promoted based on past successes are especially susceptible to this bias, since they have received positive signals about their decision-making abilities throughout their careers.

Overconfidence also reinforces many other pitfalls of data interpretation: It can prevent us from questioning our methods, our motivation and the way we communicate our findings to others; and it also makes it easy to under-invest in data analysis in the first place. When we feel too confident in our understanding, we don’t spend enough time or money acquiring more information or running further analyses. To make matters worse, more information can increase overconfidence without increasing accuracy. That’s why more data, in and of itself, is not a guaranteed solution.

Going from data to insight requires quality inputs, skill and sound processes. Because it can be so difficult to recognize our own biases, good processes are essential for avoiding overconfidence.

OUR ADVICE: Here are a few procedural tips to avoid the overconfidence trap:

• Describe your ‘perfect experiment’: the type of information you would use to answer your question if you had limitless resources for data collection and the ability to measure any variable. Compare this ideal to your actual data to understand where it might fall short, and identify places where you might be able to close the gap with more data collection or analytical techniques.

• Make it a formal part of your process to be your own devil’s advocate. In Thinking, Fast and Slow, Nobel Laureate Daniel Kahneman suggests asking yourself why your analysis might be wrong, and recommends doing this for every analysis you perform. Taking this contrarian view can help you see the flaws in your own arguments and reduce mistakes across the board.

• Before making a decision or launching a project, perform a ‘pre-mortem’, an approach suggested by psychologist Gary Klein. Ask others with knowledge of the project to imagine its failure a year into the future and to write a story about that failure. In doing so, you will benefit from the wisdom of multiple perspectives, while also surfacing potential flaws in the analysis that you might otherwise overlook.

• Keep track of your predictions and systematically compare them to what actually happens. Which of your predictions turned out to be true, and which ones fell short? Persistent biases can creep back into our decision-making, so make these practices part of your regular routine (a minimal tracking sketch follows below).
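As an illustration, here is a minimal sketch, in Python, of what such a prediction log might look like and how to score it. The events, probabilities and outcomes are invented; the Brier score is a standard way to measure how well-calibrated probability judgments are.

```python
# Hypothetical prediction log: the probability assigned to each event in
# advance, and whether the event actually happened (1) or not (0).
predictions = [
    {"event": "Q3 sales exceed target",    "prob": 0.80, "outcome": 1},
    {"event": "New feature ships on time", "prob": 0.90, "outcome": 0},
    {"event": "Churn stays below 5%",      "prob": 0.60, "outcome": 1},
    {"event": "Competitor cuts prices",    "prob": 0.30, "outcome": 1},
]

# Hit rate: how often the more-likely-than-not call was correct.
hits = sum((p["prob"] >= 0.5) == bool(p["outcome"]) for p in predictions)
hit_rate = hits / len(predictions)

# Brier score: mean squared gap between stated probability and outcome
# (0 is perfect; always answering 0.5 scores 0.25).
brier = sum((p["prob"] - p["outcome"]) ** 2 for p in predictions) / len(predictions)

print(f"Hit rate: {hit_rate:.2f}")
print(f"Brier score: {brier:.3f}")
```

Reviewing such a log regularly makes persistent optimism (or pessimism) visible instead of leaving it to memory.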

TRAP #3: THE OVER-FITTING TRAP

When your model yields surprising or counterintuitive predictions, you may have made an exciting new discovery, or it may be the result of ‘over-fitting’. In The Signal and the Noise, Nate Silver famously dubbed this “the most important scientific problem you’ve never heard of.” This trap occurs when a statistical model describes random noise rather than the underlying relationship that you need to capture.

Over-fit models generally do a suspiciously good job of explaining many nuances of what happened in the past, but they have great difficulty predicting the future. For instance, when Google’s ‘Flu Trends’ application was introduced in 2008, it was heralded as an innovative way to predict flu outbreaks by tracking search terms associated with early flu symptoms. But early versions of the algorithm looked for correlations between flu outbreaks and millions of search terms. With such a large number of terms, some correlations appeared significant when they were really due to chance. Searches for ‘high school basketball’, for example, were highly correlated with the flu. The application was ultimately scrapped only a few years later, after repeated failures of prediction.
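The mechanism behind such spurious hits is easy to reproduce. The sketch below, in Python, screens thousands of randomly generated ‘search term’ series (no real Google data is involved) against an equally random target series; with enough candidates, some will correlate strongly with the target purely by chance.

```python
import numpy as np

rng = np.random.default_rng(0)

n_weeks = 100      # weekly observations of the target series
n_terms = 10_000   # candidate 'search term' series, all pure noise

# A stand-in target (e.g. weekly flu cases), generated as random noise.
flu_cases = rng.normal(size=n_weeks)

# Thousands of unrelated 'search term' series, also random noise.
search_terms = rng.normal(size=(n_terms, n_weeks))

# Correlation of each noise series with the target.
correlations = np.array(
    [np.corrcoef(term, flu_cases)[0, 1] for term in search_terms]
)

best = np.abs(correlations).max()
print(f"Strongest correlation among {n_terms} unrelated series: {best:.2f}")
# With this many candidates, the 'best' predictor typically correlates at
# around 0.4 with the target despite having no relationship to it at all.
```

Pre-specifying which relationships you will test and holding out data for validation, as advised below, are the standard defences against mistaking these chance hits for signal.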

In order to overcome this bias, you need to distinguish between the data that matters and the noise around it.

OUR ADVICE: Here’s how you can guard against the over-fitting trap:

• Randomly divide the data into two sets: a ‘training set’, on which you will estimate the model, and a ‘validation set’, on which you will test the accuracy of the model’s predictions. An over-fit model might be great at making predictions within the training set, but it will raise warning flags by performing poorly on the validation set (see the sketch after this list).

• Much like you would for the confirmation trap, specify the relationships you want to test and how you plan to test them before analyzing the data, to avoid cherry-picking.

• Keep your analysis simple. Look for relationships that measure important effects related to clear and logical hypotheses before digging into nuances. Be on guard against ‘spurious’ correlations: those that occur only by chance and that you can often rule out based on experience or common sense. Remember that data can never truly ‘speak for itself’; it relies on human interpretation to make sense.

• Construct alternative narratives. Is there another story you could tell with the same data? If so, you cannot be confident that the relationship you have uncovered is the right one, or the only one.

• Beware of the all-too-human tendency to see patterns in random data. For example, consider a baseball player with a .325 batting average who goes 0-for-4 in a championship series game. His coach may see a ‘cold streak’ and want to replace him, but he is only looking at a handful of games. Statistically, it would be better to keep him in the game than to substitute the .200 hitter who went 4-for-4 in the previous game.
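To make the first tip above concrete, here is a minimal sketch, in Python, that fits a deliberately over-flexible model and a simple one to the same simulated training data and then compares them on a held-out validation set. The data, polynomial degrees and split sizes are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data: a simple linear relationship plus noise.
x = rng.uniform(-1, 1, size=60)
y = 2.0 * x + rng.normal(scale=0.5, size=60)

# Randomly split into a training set and a held-out validation set.
idx = rng.permutation(len(x))
train, valid = idx[:40], idx[40:]

def fit_and_score(degree):
    """Fit a polynomial of the given degree on the training set and
    return (training MSE, validation MSE)."""
    coeffs = np.polyfit(x[train], y[train], degree)
    err_train = np.mean((np.polyval(coeffs, x[train]) - y[train]) ** 2)
    err_valid = np.mean((np.polyval(coeffs, x[valid]) - y[valid]) ** 2)
    return err_train, err_valid

for degree in (1, 12):
    err_train, err_valid = fit_and_score(degree)
    print(f"degree {degree:2d}: training MSE = {err_train:.3f}, "
          f"validation MSE = {err_valid:.3f}")

# The degree-12 model hugs the training data (training MSE at least as low
# as the simple model's) but typically does worse on the validation set:
# the signature of over-fitting.
```

The flexible model will always match the training data at least as well, which is exactly why training-set performance alone is a poor guide; the validation set is what exposes the over-fit.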

In closing

Data analytics can be an effective tool to promote consistent decisions and shared understanding. It can highlight blind spots in our individual or collective awareness and offer evidence of risks and benefits for particular paths of action. But it can also make us complacent.

Managers need to be aware of the common decision-making pitfalls described herein and employ sound processes and cognitive strategies to prevent them. It can be difficult to recognize the flaws in your own reasoning, but proactively tackling these biases with the right mindset can lead to better analysis, and better decisions.

Megan MacGarvie is an Associate Professor in the Markets, Public Policy and Law group at Boston University’s Questrom School of Business, where she teaches data-driven decision-making and business analytics. She is also a Research Associate of the National Bureau of Economic Research. Kristina McElheran is an Assistant Professor of Strategic Management at the Rotman School of Management and a Digital Fellow at the MIT Initiative on the Digital Economy. This article was published in the HBR Guide to Data Analytics Basics for Managers (Harvard Business Review Press, 2018). Prof. McElheran’s paper “The Rapid Adoption of Data-driven Decision Making”, co-authored with MIT’s Erik Brynjolfsson, can be downloaded online.

