Election predictions were a disaster — again. It’s time for pollsters to self-correct
Election Day 2020 was the end of a very bad cycle for the media and university pollsters.
Too many media polls rely on bad samples: turnout models that fall far short of the actual Election Day voter demographics. Most notable is the tendency to oversample Democrats and undersample Republicans. I saw that as early as 1994 and throughout most of the presidential races since 1996. Too many pollsters are located in New York, Washington or on campuses, places where a Republican is an occasional "sighting," not a substantial reality, and where insularity casts Republicans who "have their guns and religion" as a "basket of deplorables."
Many independent pollsters don't take voters' party identification as seriously as they should. In 2020, when 95 percent of Republicans supported President Trump and 94 percent of Democrats supported Joe Biden, it mattered greatly if one party was overrepresented in a sample and the other underrepresented. Too often, pollsters reported results from samples with as many as 44 percent Democrats and as few as 28 percent Republicans. Small wonder their results showed a double-digit lead for Biden.
As a pollster myself, I weight my samples for party identification, recognizing that Democrats are more likely to respond to surveys. Self-identified Republicans have been reluctant to answer telephone surveys, whether out of wariness of a liberal elite or because talk radio and social media warn them not to trust polls.
In my pre-weighted samples, party identification differences have never amounted to more than a point or two, because I use previous exit polls to establish what turnout has looked like and follow up with current trends. For example, is there any reason to believe there will be more or fewer Democrats or Republicans, more or fewer Blacks or Hispanics, more or fewer evangelical Christians, and so on? This is artistry, rooted in long years spent enumerating, capturing the heart and soul of, and defining the drivers of real voting patterns. It helps that I live in a region where Republican and Democratic lawn signs are common, as is talk among those who are passionate on both sides.
In the past few presidential elections, 37 percent to 38 percent of voters identified as Democrats and 34 percent to 35 percent said they were Republicans. In 2020, we knew that enthusiasm was high on both sides, and that there would be great turnout among, for instance, Blacks and evangelicals. Voter-registration levels were high among women and Hispanics. So we felt comfortable making our samples reflect a 38 percent to 34 percent Democrat-to-Republican ratio. Incidentally, one key way to measure enthusiasm is not simply to ask about it but to see how many undecided voters there are within key subgroups. According to the exit polls, actual turnout was 37 percent Democrat and 35 percent Republican.
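For readers curious about the mechanics, the party-ID weighting described above can be sketched in a few lines of code. This is an illustrative toy example, not Zogby's actual procedure: the respondent counts and the candidate splits within each party are hypothetical, and only the 38/34 Democrat-to-Republican target mix comes from the article.

```python
# Illustrative sketch of weighting a raw sample to a target party-ID mix.
# Respondent counts and within-party splits are hypothetical.
from collections import Counter

# Raw sample of 1,000 respondents: (party, candidate) pairs.
sample = (
    [("D", "Biden")] * 420 + [("D", "Trump")] * 20
    + [("R", "Trump")] * 270 + [("R", "Biden")] * 15
    + [("I", "Biden")] * 140 + [("I", "Trump")] * 135
)

# Target party mix drawn from past exit polls plus current trends,
# e.g. the 38/34 Democrat-to-Republican split discussed above.
target = {"D": 0.38, "R": 0.34, "I": 0.28}

n = len(sample)
counts = Counter(party for party, _ in sample)
# Each respondent's weight scales their party up or down to its target share.
weights = {p: target[p] / (counts[p] / n) for p in counts}

support = Counter()
for party, candidate in sample:
    support[candidate] += weights[party]

total = sum(support.values())
for cand, w in support.most_common():
    print(f"{cand}: {100 * w / total:.1f}%")
```

In this hypothetical sample Democrats make up 44 percent of raw respondents; the weights shrink each Democratic response and enlarge each Republican one until the mix matches the target, turning an inflated raw margin into a much closer weighted race.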
Party identification is a lead variable in people’s lives and must be treated as an important demographic.
The polling industry is governed by the principle that "we are always going to conduct business the way we have always done it." Thus there is an unwillingness to adapt to new technologies. Telephone response rates once averaged 65 percent, and we have had plenty of warning over the years that the telephone would become less useful as a research-friendly tool. Today, response rates can be in the single digits when calling landlines and infinitesimal when calling cell phones. Internet access, by contrast, is near universal among likely voters; respondents can answer online polls when it is convenient, and online samples yield a better distribution of harder-to-reach groups such as young people and nonwhites.
Too often, the quest for academic objectivity is reduced to asking meaningless questions that lack any human dimension. What are the real drivers of voting decisions? Remember, we are polling people, not simply collecting data.
Finally, we pollsters need to set realistic expectations for what polls can tell us and what they cannot. We have to stop the silliness of claiming we can predict final outcomes down to a tenth of a percentage point. What we should communicate is whether a race is close, which groups tend to support one candidate or another, what needs to happen for one candidate to surpass another, and what the trend line shows.
My presidential polls often have been among the most accurate. In 2020 we showed a much closer race, ending with a 5.6-percentage-point lead for Biden over Trump.
Final numbers will likely reveal an edge for Biden of at least 3.3 points, perhaps higher. Polls that showed a double-digit lead were not only wrong, they were seriously misleading. It is time for a serious self-assessment of what went wrong with those polls.
John Zogby founded the Zogby Poll in 1984. He currently is senior partner at John Zogby Strategies, which he co-founded with his sons.