Rotman Management Magazine

From Bias to Better Decisions

Data can be a highly effective decision-making tool. But it can also make us complacent. Leaders need to be aware of three common pitfalls.

- By Megan MacGarvie and Kristina McElheran

DATA ANALYSIS can be an effective way to sort through complexity and assist our judgment when it comes to making decisions. But even with impressively large data sets and the best analytics tools, we are still vulnerable to a range of decision-making pitfalls — especially when information overload leads us to take shortcuts in reasoning. As a result, in some instances, data and analytics actually make matters worse.

Psychologists, behavioural economists and other scholars have identified several common decision-making traps, many of which stem from the fact that people don’t carefully process every piece of information in every decision. Instead, we rely on heuristics — simplified procedures that allow us to make decisions in the face of uncertainty or when extensive analysis is too costly or time-consuming. These mental shortcuts lead us to believe that we are making sound decisions when, in fact, we are making systematic mistakes. What’s more, human brains are wired for certain biases that creep in and distort our choices — typically without our awareness.

There are three main cognitive traps that regularly bias decision-making, even when informed by the best data. We will examine each in detail and provide suggestions for avoiding them.

TRAP #1: THE CONFIRMATION TRAP

When we pay more attention to findings that align with our prior beliefs, while ignoring other facts and patterns in the data, we fall into the confirmation trap. With a huge data set and numerous correlations between variables, analyzing all possible correlations is often both costly and counterproductive. Even with smaller data sets, it can be easy to inadvertently focus on correlations that confirm our expectations of ‘how the world should work’ and dismiss counterintuitive or inconclusive patterns in the data when they don’t align.

Consider the following example: In the late 1960s and early 1970s, researchers conducted one of the most well-designed studies on how different types of fats affect heart health and mortality. But the results of this study, known as the Minnesota Coronary Experiment, were not published at the time — and a recent New York Times article suggests that this might have been because they contradicted the beliefs of both the researchers and the medical establishment. In fact, it wasn’t until recently that the medical journal BMJ published a piece referencing this data, when growing skepticism about the relationship between saturated fat consumption and heart disease led researchers to analyze data from the original experiment — more than 40 years later.

These and similar findings cast doubt on decades of unchallenged medical advice to avoid saturated fats. While it is unclear whether one experiment would have changed standard dietary and health recommendations, this example demonstrates that even with the best possible data, those looking at the numbers can ignore important facts when they contradict the dominant paradigm or don’t confirm their beliefs, with potentially troublesome results.

Confirmation bias becomes that much harder to avoid when individuals face pressure from bosses and peers. Organizations frequently reward employees who can provide empirical support for existing managerial preferences. Those who decide what parts of the data to examine and present to senior managers may feel compelled to choose only the evidence that reinforces what their supervisors want to see or that confirms a prevalent attitude within the firm.

OUR ADVICE: To get a fair assessment of what the data has to say, don’t avoid information that counters your (or your boss’s) beliefs. Instead, embrace it by doing the following:

• Specify in advance the data and analytical approaches on which you will base your decision, to reduce the temptation to ‘cherry-pick’ findings that agree with your prejudices.

• Actively seek out findings that disprove your beliefs. Ask yourself, ‘If my expectations are wrong, what pattern would I likely see in the data?’

• Enlist a skeptic to help you. Seek out people who like to play ‘devil’s advocate’ or assign contrary positions for active debate.

• Don’t automatically dismiss findings that fall below your threshold for statistical or practical significance. Both noisy relationships (i.e. those with large standard errors) and small (i.e. precisely measured) relationships can point to flaws in your beliefs and presumptions. Ask yourself, ‘What would it take for this to appear important?’ Make sure your key takeaway is not sensitive to reasonable changes in your model or sample size.

• Assign multiple independent teams to analyze the data separately. Do they come to similar conclusions? If not, isolate and study the points of divergence to determine whether the differences are due to error, inconsistent methods or bias.

• Treat your findings like predictions, and test them. If you uncover a correlation from which you think your organization can profit, use an experiment to validate that correlation.
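To make the last suggestion concrete, here is a minimal sketch of validating a correlational finding with a randomized experiment. The scenario, conversion counts and group sizes are hypothetical, and the two-proportion z-test from the statsmodels library is one reasonable choice rather than anything prescribed in the article:

```python
# Hypothetical scenario: historical data shows that customers who watched a
# tutorial convert more often. A randomized experiment tests that correlation.
from statsmodels.stats.proportion import proportions_ztest

conversions = [230, 188]   # treatment group (shown tutorial), control group
visitors = [2000, 2000]    # randomly assigned visitors per group

z_stat, p_value = proportions_ztest(conversions, visitors)
lift = conversions[0] / visitors[0] - conversions[1] / visitors[1]

print(f"observed lift: {lift:.1%}, z = {z_stat:.2f}, p = {p_value:.4f}")
# A small p-value from a randomized test is much stronger evidence than the
# original correlation, which could reflect selection or confounding.
```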

TRAP #2: THE OVERCONFIDENCE TRAP

In their book Judgment in Managerial Decision Making, behavioural researchers Max Bazerman and Don Moore refer to overconfidence as ‘the mother of all biases’. Time and time again, psychologists have found that decision-makers are too sure of themselves. We tend to assume that the accuracy of our judgments or the probability of success in our endeavours is more favourable than the data would suggest.

When there are risks, we bias our reading of the odds to assume we’ll come out on the winning side. Senior decision-makers who have been promoted based on past successes are especially susceptible to this bias, since they have received positive signals about their decision-making abilities throughout their careers.

Overconfidence also reinforces many other pitfalls of data interpretation: It can prevent us from questioning our methods, our motivation and the way we communicate our findings to others; and it also makes it easy to under-invest in data analysis in the first place. When we feel too confident in our understanding, we don’t spend enough time or money acquiring more information or running further analyses. To make matters worse, more information can increase overconfidence without increasing accuracy. That’s why more data, in and of itself, is not a guaranteed solution.

Going from data to insight requires quality inputs, skill and sound processes. Because it can be so difficult to recognize our own biases, good processes are essential for avoiding overconfidence.

OUR ADVICE: Here are a few procedural tips to avoid the overconfidence trap:

• Describe your ‘perfect experiment’ — the type of information you would use to answer your question if you had limitless resources for data collection and the ability to measure any variable. Compare this ideal to your actual data to understand where it might fall short.

• Identify places where you might be able to close the gap with more data collection or analytical techniques.

• Make it a formal part of your process to be your own devil’s advocate. In Thinking, Fast and Slow, Nobel Laureate Daniel Kahneman suggests asking yourself why your analysis might be wrong, and recommends doing this for every analysis you perform. Taking this contrarian view can help you see the flaws in your own arguments and reduce mistakes across the board.

• Before making a decision or launching a project, perform a ‘pre-mortem’ — an approach suggested by psychologist Gary Klein. Ask others with knowledge about the project to imagine its failure a year into the future and to write a story about that failure. In doing so, you will benefit from the wisdom of multiple perspectives, while also providing an opportunity to surface potential flaws in the analysis that you may otherwise overlook.

• Keep track of your predictions and systematically compare them to what actually happens. Which of your predictions turned out to be true and which ones fell short? Persistent biases can creep back into our decision-making, so make these practices part of your regular routine.
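One lightweight way to keep score is to record each forecast as a probability and compare it with what happened. The sketch below uses made-up forecasts and a simple Brier score; the numbers and the scoring rule are illustrative assumptions, not anything prescribed in the article:

```python
# Hypothetical forecast log: each entry is a stated probability that a project
# would succeed, paired with what actually happened (1 = success, 0 = failure).
forecasts = [0.9, 0.7, 0.8, 0.6, 0.95]
outcomes  = [1,   0,   1,   0,   0]

# Brier score: mean squared gap between forecast and outcome (lower is better;
# always guessing 50/50 scores 0.25).
brier = sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)
avg_confidence = sum(forecasts) / len(forecasts)
hit_rate = sum(outcomes) / len(outcomes)

print(f"average confidence: {avg_confidence:.0%}, actual hit rate: {hit_rate:.0%}")
print(f"Brier score: {brier:.3f}")
# A persistent gap between average confidence and the actual hit rate is a
# concrete, trackable signal of overconfidence.
```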

TRAP #3: THE OVER-FITTING TRAP

When your model yields surprising or counterintuitive predictions, you may have made an exciting new discovery — or it may be the result of ‘over-fitting’. In The Signal and the Noise, Nate Silver famously dubbed this “the most important scientific problem you’ve never heard of.” This trap occurs when a statistical model describes ‘random noise’ rather than the underlying relationship that you need to capture.

Over-fit models generally do a suspiciously good job of explaining many nuances of what happened in the past, but they have great difficulty predicting the future. For instance, when Google’s ‘Flu Trends’ application was introduced in 2008, it was heralded as an innovative way to predict flu outbreaks by tracking search terms associated with early flu symptoms. But early versions of the algorithm looked for correlations between flu outbreaks and millions of search terms. With such a large number of terms, some correlations appeared significant when in fact they were due to chance. Searches for ‘high school basketball’, for example, were highly correlated with the flu. The application was ultimately scrapped only a few years later, due to failures of prediction.
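The mechanism behind this failure is easy to reproduce. The simulation below is a sketch with made-up data (it has no connection to Google’s actual series): it generates thousands of purely random ‘search term’ histories and shows that some of them will still look strongly correlated with a random ‘flu’ series.

```python
import numpy as np

rng = np.random.default_rng(42)

# A random 'flu' series and 10,000 candidate predictors that are pure noise,
# with no real relationship to it by construction.
weeks = 104
flu = rng.normal(size=weeks)
candidates = rng.normal(size=(10_000, weeks))

# Correlate every candidate with the flu series and look at the strongest one.
correlations = np.array([np.corrcoef(c, flu)[0, 1] for c in candidates])
best = np.abs(correlations).max()

print(f"strongest correlation found among 10,000 pure-noise series: {best:.2f}")
# Screen enough candidate predictors and a 'significant-looking' correlation
# is almost guaranteed to appear by chance alone.
```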

In order to overcome this bias, you need to distinguish the data that matters from the noise around it.

OUR ADVICE: Here’s how you can guard against the over-fitting trap:

• Randomly divide the data into two sets: a ‘training set’, on which you will estimate the model, and a ‘validation set’, on which you will test the accuracy of the model’s predictions. An over-fit model might be great at making predictions within the training set, but raise warning flags by performing poorly in the validation set (a short sketch of this check appears after this list).

• Much like you would for the confirmation trap, specify the relationships you want to test and how you plan to test them before analyzing the data, to avoid cherry-picking.

• Keep your analysis simple. Look for relationships that measure important effects related to clear and logical hypotheses before digging into nuances. Be on guard against ‘spurious’ correlations — the ones that occur only by chance, that you can rule out based on experience or common sense. Remember that data can never truly ‘speak for itself’. It relies on human interpretation to make sense.

• Construct alternative narratives. Is there another story you could tell with the same data? If so, you cannot be confident that the relationship you have uncovered is the right one — or the only one.

• Beware of the all-too-human tendency to see patterns in random data. For example, consider a baseball player with a .325 batting average who goes 0-for-4 in a championship series game. His coach may see a ‘cold streak’ and want to replace him, but he’s only looking at a handful of games. Statistically, it would be better to keep him in the game than to substitute the .200 hitter who went 4-for-4 in the previous game.
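Returning to the first item in this list, here is a minimal sketch of a training/validation split. The data, the random seed and the comparison of a straight line against a flexible polynomial are illustrative assumptions used only to show the warning flag in action:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical data: a simple linear relationship plus noise.
x = rng.uniform(0, 1, 60)
y = 2.0 * x + rng.normal(0, 0.5, 60)

# Randomly divide the data into a training set and a validation set.
idx = rng.permutation(len(x))
train, valid = idx[:40], idx[40:]

def mse(coeffs, xs, ys):
    """Mean squared error of a polynomial fit on the given points."""
    return np.mean((np.polyval(coeffs, xs) - ys) ** 2)

# Compare a straight line with a very flexible degree-9 polynomial.
for degree in (1, 9):
    coeffs = np.polyfit(x[train], y[train], degree)
    print(f"degree {degree}: train MSE = {mse(coeffs, x[train], y[train]):.3f}, "
          f"validation MSE = {mse(coeffs, x[valid], y[valid]):.3f}")
# The flexible model typically fits the training set better but does worse on
# the held-out validation set, which is the warning flag described above.
```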

In closing

Data analytics can be an effective tool to promote consistent decisions and shared understand­ing. It can highlight blind spots in our individual or collective awareness and offer evidence of risks and benefits for particular paths of action. But it can also make us complacent.

Managers need to be aware of the common decision-making pitfalls described herein and employ sound processes and cognitive strategies to prevent them. It can be difficult to recognize the flaws in your own reasoning, but proactively tackling these biases with the right mindset can lead to better analysis — and better decisions.

Megan MacGarvie is an Associate Professor in the Markets, Public Policy and Law group at Boston University’s Questrom School of Business, where she teaches data-driven decision-making and business analytics. She is also a Research Associate of the National Bureau of Economic Research. Kristina McElheran is an Assistant Professor of Strategic Management at the Rotman School of Management and a Digital Fellow at the MIT Initiative on the Digital Economy. This article was published in the HBR Guide to Data Analytics Basics for Managers (Harvard Business Review Press, 2018). Prof. McElheran’s paper “The Rapid Adoption of Data-driven Decision Making”, co-authored with MIT’s Erik Brynjolfsson, can be downloaded online.

