The News Herald (Willoughby, OH)

Making election polls more accurate

- Wändi Bruine de Bruin and Mirta Galesic, USC Dornsife College of Letters, Arts and Sciences and University of Potsdam

The Conversation is an independent and nonprofit source of news, analysis and commentary from academic experts.

Most public opinion polls correctly predicted the winning candidate in the 2020 U.S. presidential election – but on average, they overestimated the margin by which Democrat Joe Biden would beat Republican incumbent Donald Trump.

Our research into polling methods has found that pollsters’ predictions can be more accurate if they look beyond traditional questions. Traditional polls ask people whom they would vote for if the election were today, or for the percent chance that they might vote for particular candidates.

But our research into people’s expectations and social judgments led us and our collaborators, Henrik Olsson at the Santa Fe Institute and Drazen Prelec at MIT, to wonder whether different questions could yield more accurate results.

Specifically, we wanted to know whether asking people about the political preferences of others in their social circles and in their states could help paint a fuller picture of the American electorate.

Most people know quite a bit about the life experiences of their friends and family, including how happy and healthy they are and roughly how much money they make.

So we designed poll questions to see whether this knowledge of others extended to politics – and we have found that it does.

Pollsters, we determined, could learn more if they took advantage of this type of knowledge.

Asking people how others around them are going to vote and aggregating their responses across a large national sample enables pollsters to tap into what is often called “the wisdom of crowds.”
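As a rough illustration of this kind of aggregation, the social-circle question can be pooled by simply averaging each respondent's reported percentages. This is a minimal sketch with invented numbers, not the researchers' actual methodology, which may weight responses differently:

```python
# Hypothetical sketch: pooling answers to "What percentage of your
# social contacts will vote for each candidate?" across respondents.
# The sample responses below are made up for illustration only.

def pool_social_circle(responses):
    """Average each respondent's reported vote shares for their contacts."""
    n = len(responses)
    candidates = responses[0].keys()
    return {c: sum(r[c] for r in responses) / n for c in candidates}

# Each dict is one respondent's estimate of their contacts' vote split (%).
sample = [
    {"Candidate A": 60, "Candidate B": 40},
    {"Candidate A": 45, "Candidate B": 55},
    {"Candidate A": 50, "Candidate B": 50},
]

pooled = pool_social_circle(sample)
for candidate, share in pooled.items():
    print(f"{candidate}: {share:.1f}%")
```

Because each respondent reports on many contacts, even a simple average like this draws on far more people than the respondents themselves.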

Since the 2016 U.S. presidential election season, we have been asking participants in a variety of election polls: “What percentage of your social contacts will vote for each candidate?”

In the 2016 U.S. election, this question predicted that Trump would win, and did so more accurately than questions asking about poll respondents’ own voting intentions.

The question about participants’ social contacts was similarly more accurate than the traditional question at predicting the results of the 2017 French presidential election, the 2017 Dutch parliamentary election, the 2018 Swedish parliamentary election and the 2018 U.S. election for the House of Representatives.

In some of these polls, we also asked, “What percentage of people in your state will vote for each candidate?” This question also taps into participants’ knowledge of those around them, but in a wider circle. Variations of this question have worked well in previous elections.

In the 2020 U.S. presidential election, our “wisdom-of-crowds” questions were once again better at predicting the outcome of the national popular vote than the traditional questions. In the USC Dornsife Daybreak Poll, we asked more than 4,000 participants how they expected their social contacts to vote and which candidate they thought would win in their state. They were also asked how they themselves were planning to vote.

The current election results show a Biden lead of 3.7 percentage points in the popular vote. An average of national polls predicted a lead of 8.4 percentage points.

In comparison, the question about social contacts predicted a 3.4-point Biden lead. The state-winner question predicted Biden leading by 1.5 points.

By contrast, the traditional question that asked about voters’ own intentions in the same poll predicted a 9.3-point lead.
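The figures above can be compared directly: each question's error is just the gap between its predicted Biden lead and the observed 3.7-point lead. A short sketch of that arithmetic, using only the numbers reported in this article (the question labels are shorthand):

```python
# Predicted Biden leads (percentage points) reported in the article,
# compared against the observed 3.7-point popular-vote lead.
observed = 3.7
predictions = {
    "social-contacts question": 3.4,
    "state-winner question": 1.5,
    "own-intention question": 9.3,
    "national poll average": 8.4,
}

# Absolute error of each prediction, smallest (best) first.
errors = {name: abs(lead - observed) for name, lead in predictions.items()}
for name, err in sorted(errors.items(), key=lambda kv: kv[1]):
    print(f"{name}: off by {err:.1f} points")
```

Ranked this way, the social-contacts question comes closest, followed by the state-winner question, while both the own-intention question and the national poll average miss by more than four points.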

We think there are three reasons that asking poll participants about others in their social circles and their state ends up being more accurate than asking about the participants themselves.

First, asking people about others effectively increases the sample size of the poll. It gives pollsters at least some information about the voting intentions of people whose data might otherwise have been entirely left out. For instance, many people were never contacted by the pollsters, or declined to participate. Even though the poll respondents don’t have perfect information about everyone around them, it turns out they do know enough to give useful answers.

Second, we suspect people may find it easier to report about how they think others might vote than it is to admit how they themselves will vote. Some people may feel embarrassed to admit who their favorite candidate is. Others may fear harassment. And some might lie because they want to obstruct pollsters. Our own findings suggest that Trump voters might have been more likely than Biden voters to hide their voting intentions, for all of those reasons.

Third, most people are influenced by others around them. People often get information about political issues from friends and family – and those conversations may influence their voting choices.

Poll questions that ask participants how they will vote do not capture that social influence. But by asking participants how they think others around them will vote, pollsters may get some idea of which participants might still change their minds.

Building on these findings, we are looking at ways to integrate information from these and other questions into algorithms that might make even better predictions of election outcomes.

Even though we still don’t know the final vote counts for the 2020 election, we know enough to see that pollsters could improve their predictions by asking participants how they think others will vote.
