From Bias to Better Decisions
Data can be a highly effective decision-making tool. But it can also make us complacent. Leaders need to be aware of three common pitfalls.
DATA ANALYSIS can be an effective way to sort through complexity and assist our judgment when it comes to making decisions. But even with impressively large data sets and the best analytics tools, we are still vulnerable to a range of decision-making pitfalls — especially when information overload leads us to take shortcuts in reasoning. As a result, in some instances, data and analytics actually make matters worse.
Psychologists, behavioural economists and other scholars have identified several common decision-making traps, many of which stem from the fact that people don’t carefully process every piece of information in every decision. Instead, we rely on heuristics — simplified procedures that allow us to make decisions in the face of uncertainty or when extensive analysis is too costly or time-consuming. These mental shortcuts lead us to believe that we are making sound decisions when, in fact, we are making systematic mistakes. What’s more, human brains are wired for certain biases that creep in and distort our choices — typically without our awareness.
There are three main cognitive traps that regularly bias decision-making, even when informed by the best data. We will examine each in detail and provide suggestions for avoiding them.
TRAP #1: THE CONFIRMATION TRAP
When we pay more attention to findings that align with our prior beliefs, while ignoring other facts and patterns in the data, we fall into the confirmation trap. With a huge data set and numerous correlations between variables, analyzing all possible correlations is often both costly and counterproductive. Even with smaller data sets, it can be easy to inadvertently focus on correlations that confirm our expectations of ‘how the world should work’ and dismiss counterintuitive or inconclusive patterns in the data when they don’t align.
Consider the following example: In the late 1960s and early 1970s, researchers conducted one of the most well-designed studies on how different types of fats affect heart health and
mortality. But the results of this study, known as the Minnesota Coronary Experiment, were not published at the time — and a recent New York Times article suggests that this might have been because they contradicted the beliefs of both researchers and the medical establishment. In fact, it wasn’t until recently that the medical journal BMJ published a piece referencing this data, when growing skepticism about the relationship between saturated fat consumption and heart disease led researchers to analyze data from the original experiment — more than 40 years later.
These and similar findings cast doubt on decades of unchallenged medical advice to avoid saturated fats. While it is unclear whether one experiment would have changed standard dietary and health recommendations, this example demonstrates that even with the best possible data, those looking at the numbers can ignore important facts when they contradict the dominant paradigm or don’t confirm their beliefs, with potentially troublesome results.
Confirmation bias becomes that much harder to avoid when individuals face pressure from bosses and peers. Organizations frequently reward employees who can provide empirical support for existing managerial preferences. Those who decide what parts of the data to examine and present to senior managers may feel compelled to choose only the evidence that reinforces what their supervisors want to see or that confirms a prevalent attitude within the firm.
OUR ADVICE: To get a fair assessment of what the data has to say, don’t avoid information that counters your (or your boss’s) beliefs. Instead, embrace it by doing the following:
• Specify in advance the data and analytical approaches on which you will base your decision, to reduce the temptation to ‘cherry-pick’ findings that agree with your prejudices.
• Actively seek out findings that disprove your beliefs. Ask yourself, ‘If my expectations are wrong, what pattern would I likely see in the data?’
• Enlist a skeptic to help you. Seek out people who like to play ‘devil’s advocate’, or assign contrary positions for active debate.
• Don’t automatically dismiss findings that fall short of your threshold for statistical or practical significance. Both noisy relationships (i.e., those with large standard errors) and small but precisely measured ones can point to flaws in your beliefs and presumptions. Ask yourself, ‘What would it take for this to appear important?’ Make sure your key takeaway is not sensitive to reasonable changes in your model or sample size.
• Assign multiple independent teams to analyze the data separately. Do they come to similar conclusions? If not, isolate and study the points of divergence to determine whether the differences are due to error, inconsistent methods or bias.
• Treat your findings like predictions, and test them. If you uncover a correlation from which you think your organization can profit, use an experiment to validate it.
TRAP #2: THE OVERCONFIDENCE TRAP
In their book Judgment in Managerial Decision Making, behavioural researchers Max Bazerman and Don Moore refer to overconfidence as ‘the mother of all biases’. Time and time again, psychologists have found that decision-makers are too sure of themselves. We tend to assume that the accuracy of our judgments or the probability of success in our endeavours is more favourable than the data would suggest.
When there are risks, we bias our reading of the odds to assume we’ll come out on the winning side. Senior decision-makers who have been promoted based on past successes are especially susceptible to this bias, since they have received positive signals about their decision-making abilities throughout their careers.
Overconfidence also reinforces many other pitfalls of data interpretation: It can prevent us from questioning our methods,
our motivation and the way we communicate our findings to others; and it also makes it easy to under-invest in data analysis in the first place. When we feel too confident in our understanding, we don’t spend enough time or money acquiring more information or running further analyses. To make matters worse, more information can increase overconfidence without increasing accuracy. That’s why more data, in and of itself, is not a guaranteed solution.
Going from data to insight requires quality inputs, skill and sound processes. Because it can be so difficult to recognize our own biases, good processes are essential for avoiding overconfidence.
OUR ADVICE: Here are a few procedural tips to avoid the overconfidence trap:
• Describe your ‘perfect experiment’ — the type of information you would use to answer your question if you had limitless resources for data collection and the ability to measure any variable. Compare this ideal to your actual data to understand where it might fall short, and identify places where you might be able to close the gap with more data collection or analytical techniques.
• Make it a formal part of your process to be your own devil’s advocate. In Thinking, Fast and Slow, Nobel Laureate Daniel Kahneman suggests asking yourself why your analysis might be wrong, and recommends doing this for every analysis you perform. Taking this contrarian view can help you see the flaws in your own arguments and reduce mistakes across the board.
• Before making a decision or launching a project, perform a ‘pre-mortem’ — an approach suggested by psychologist Gary Klein. Ask others with knowledge of the project to imagine its failure a year into the future and to write a story about that failure. In doing so, you will benefit from the wisdom of multiple perspectives, while also surfacing potential flaws in the analysis that you might otherwise overlook.
• Keep track of your predictions and systematically compare them to what actually happens. Which of your predictions turned out to be true, and which fell short? Persistent biases can creep back into our decision-making, so make these practices part of your regular routine.
TRAP #3: THE OVER-FITTING TRAP
When your model yields surprising or counterintuitive predictions, you may have made an exciting new discovery — or it may be the result of ‘over-fitting’. In The Signal and the Noise, Nate Silver famously dubbed this “the most important scientific problem you’ve never heard of.” This trap occurs when a statistical model describes ‘random noise’ rather than the underlying relationship that you need to capture.
Over-fit models generally do a suspiciously good job of explaining many nuances of what happened in the past, but they have great difficulty predicting the future. For instance, when Google’s ‘Flu Trends’ application was introduced in 2008, it was heralded as an innovative way to predict flu outbreaks by tracking search terms associated with early flu symptoms. But early versions of the algorithm looked for correlations between flu outbreaks and millions of search terms. With such a large number of terms, some correlations appeared statistically significant when in fact they had arisen purely by chance. Searches for ‘high school basketball’, for example, were highly correlated with the flu. The application was ultimately scrapped only a few years later, after repeated failures of prediction.
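The Flu Trends problem is easy to reproduce in miniature. The following Python sketch uses entirely invented, random data (no real search terms): it generates thousands of random ‘search term’ series and counts how many appear ‘significantly’ correlated with an equally random ‘flu’ series. At a five per cent significance threshold, roughly five per cent of them will pass, by chance alone.

```python
import math
import random

random.seed(42)
n_weeks, n_terms = 52, 2000

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Pure noise: a fake weekly 'flu' series and 2,000 fake 'search term' series
flu = [random.gauss(0, 1) for _ in range(n_weeks)]
terms = [[random.gauss(0, 1) for _ in range(n_weeks)] for _ in range(n_terms)]

# |r| > 0.27 is roughly the 5% two-sided significance cutoff for n = 52
hits = sum(1 for t in terms if abs(pearson(t, flu)) > 0.27)
print(f"{hits} of {n_terms} random terms look 'significantly' correlated")
```

Even though every series is noise, around a hundred ‘significant’ correlations typically emerge. This is why screening millions of candidate predictors without a correction for multiple comparisons reliably produces convincing-looking mirages.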
In order to overcome this bias, you need to distinguish the data that matters from the noise around it.
OUR ADVICE: Here’s how you can guard against the overfitting trap:
• Randomly divide the data into two sets: a ‘training set’, on which you will estimate the model, and a ‘validation set’, on which you will test the accuracy of the model’s predictions. An over-fit model might be great at making predictions within the training set, but raise warning flags by performing poorly in the validation set.
• Much like you would for the confirmation trap, specify the relationships you want to test and how you plan to test them before analyzing the data, to avoid cherry-picking.
• Keep your analysis simple. Look for relationships that measure important effects related to clear and logical hypotheses before digging into nuances. Be on guard against ‘spurious’ correlations — those that occur only by chance and that you can rule out based on experience or common sense. Remember that data can never truly ‘speak for itself’: it relies on human interpretation to make sense.
• Construct alternative narratives. Is there another story you could tell with the same data? If so, you cannot be confident that the relationship you have uncovered is the right one — or the only one.
• Beware of the all-too-human tendency to see patterns in random data. For example, consider a baseball player with a .325 batting average who goes 0-4 in a championship series game. His coach may see a ‘cold streak’ and want to replace him, but he is only looking at a handful of at-bats: purely by chance, a .325 hitter will go hitless in four at-bats roughly 21 per cent of the time. Statistically, it would be better to keep him in the game than to substitute the .200 hitter who went 4-4 in the previous game.
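The train-and-validation check in the first tip above can be sketched in a few lines of Python. This is a toy illustration with invented data, not a production workflow: a ‘model’ that simply memorizes the training set looks perfect in-sample, but loses to a plain linear fit on the held-out validation set.

```python
import random

random.seed(0)
# Invented data: y depends linearly on x (slope 2), plus random noise
xs = [random.uniform(0, 10) for _ in range(200)]
data = [(x, 2.0 * x + random.gauss(0, 1.0)) for x in xs]

random.shuffle(data)
train, valid = data[:100], data[100:]  # random 50/50 split

# Over-fit 'model': memorize the training set (predict the y of the nearest x)
def memorizer(x):
    return min(train, key=lambda p: abs(p[0] - x))[1]

# Simple model: ordinary least-squares line fit on the training set
n = len(train)
mx = sum(x for x, _ in train) / n
my = sum(y for _, y in train) / n
slope = (sum((x - mx) * (y - my) for x, y in train)
         / sum((x - mx) ** 2 for x, _ in train))
intercept = my - slope * mx

def simple(x):
    return slope * x + intercept

def mse(model, points):
    """Mean squared prediction error of a model on a set of points."""
    return sum((model(x) - y) ** 2 for x, y in points) / len(points)

print(f"memorizer: train MSE={mse(memorizer, train):.2f}, "
      f"valid MSE={mse(memorizer, valid):.2f}")
print(f"simple:    train MSE={mse(simple, train):.2f}, "
      f"valid MSE={mse(simple, valid):.2f}")
```

The memorizer achieves a training error of exactly zero, yet its validation error is roughly double that of the simple line. The warning flag is not the training performance but the gap between the two sets.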
Data analytics can be an effective tool to promote consistent decisions and shared understanding. It can highlight blind spots in our individual or collective awareness and offer evidence of risks and benefits for particular paths of action. But it can also make us complacent.
Managers need to be aware of the common decision-making
pitfalls described herein and employ sound processes and cognitive strategies to prevent them. It can be difficult to recognize the flaws in your own reasoning, but proactively tackling these biases with the right mindset can lead to better analysis — and better decisions.
Megan MacGarvie is an Associate Professor in the Markets, Public Policy and Law group at Boston University’s Questrom School of Business, where she teaches data-driven decision-making and business analytics. She is also a Research Associate of the National Bureau of Economic Research. Kristina McElheran is an Assistant Professor of Strategic Management at the Rotman School of Management and a Digital Fellow at the MIT Initiative on the Digital Economy. This article was published in the HBR Guide to Data Analytics Basics for Managers (Harvard Business Review Press, 2018). Prof. McElheran’s paper “The Rapid Adoption of Data-driven Decision Making”, co-authored with MIT’s Erik Brynjolfsson, can be downloaded online.
Rotman faculty research is ranked in the top 10 globally by the Financial Times.