San Diego Union-Tribune (Sunday)

MAKE POLLS GREAT AGAIN

- DAVID HILL Hill is director of Hill Research Consultants and a 2020 fellow at the University of Southern California’s Dornsife Center for the Political Future. This initially ran in The Washington Post.

There’s a dirty little secret that we pollsters need to own up to: People don’t talk to us anymore, and it’s making polling less reliable.

When I first undertook telephone polling in the early 1980s, I could start with a cluster of five demographically similar voters — say, Republican moms in their 40s in a Midwestern suburb — and expect to complete at least one interview from that group of five. I’d build a sample of 500 different clusters of five voters per cluster, or 2,500 voters total. From that number, I could be reasonably assured that 500 people would talk to us. The 500 clusters were designed to represent a diverse cross-section of the electorate.
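To make that arithmetic concrete, here is a minimal sketch in Python. It uses only the figures from the paragraph above; the one-in-five cooperation rate is the ratio stated there, not a separately measured value.

# Sketch of the 1980s-era sampling arithmetic described above.
clusters = 500            # demographically similar groups of voters
voters_per_cluster = 5    # voters dialed per cluster
cooperation_rate = 1 / 5  # roughly 1 in 5 voters agreed to an interview

total_voters = clusters * voters_per_cluster
expected_interviews = total_voters * cooperation_rate

print(f"Voters sampled:      {total_voters}")               # 2500
print(f"Expected interviews: {expected_interviews:.0f}")    # 500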

As the years drifted by, it took more and more voters per cluster for us to get a single voter to agree to an interview. Between 1984 and 1989, when caller ID was rolled out, more voters began to ignore our calls. The advent of answering machines and then voicemail further reduced responses. Voters screen their calls more aggressively, so cooperation with pollsters has steadily declined year by year. Whereas once I could extract one complete interview from five voters, it can now take calls to as many as 100 voters to complete a single interview, even more in some segments of the electorate.

And here’s the killer detail: That single cooperative soul who speaks with an interviewer cannot possibly hold the same opinions as the 99 other voters who refused.

In short, we no longer have truly random samples that support claims that poll results accurately represent opinions of the electorate.

Instead, we have samples of “the willing,” what researchers call a “convenience sample” of those consenting to give us their time and opinions. Despite knowing this, pollsters (including me) have glossed over this reality by dressing up our results with claims of polls having a “margin of error” of 3 or 4 percentage points when we knew, or should have known, that the error is incalculable given the non-random sample. Most pollsters turned to weighting results to “fix” variations in cooperation, but this can inadvertently amplify sampling errors due to noncooperation.
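For readers who want the arithmetic behind those claims, here is a minimal sketch in Python. The formulas are standard survey-statistics textbook results, not the author’s own calculations, and the weights in the example are hypothetical, chosen only to illustrate the point.

import math

def nominal_moe(n, p=0.5, z=1.96):
    # Margin of error IF the sample were truly a simple random sample;
    # for a convenience sample this number carries no such guarantee.
    return z * math.sqrt(p * (1 - p) / n)

def kish_design_effect(weights):
    # Kish's approximation: unequal weights inflate the variance,
    # shrinking the "effective" sample size to n / deff.
    n = len(weights)
    return n * sum(w * w for w in weights) / sum(weights) ** 2

n = 1000
print(f"Nominal margin of error at n={n}: +/-{100 * nominal_moe(n):.1f} points")  # ~3.1

# Hypothetical weights: half the sample weighted down, half weighted up
# to "fix" noncooperation in an underrepresented group.
weights = [0.5] * 500 + [3.0] * 500
deff = kish_design_effect(weights)
print(f"Design effect {deff:.2f} cuts effective n to about {n / deff:.0f}")  # ~1.51, ~662

Even in this mild hypothetical, the weighting needed to patch noncooperation costs roughly a third of the sample’s statistical power, before any non-random selection bias is counted.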

For a while, most polls conducted most of the time in most places seemed reasonably accurate, so we kept at it, claiming random sample surveys with low margins of error. Weighting became a Band-Aid for noncooperation. And polling still seemed better than hoisting a wet finger to the political winds. Then came the past two presidential elections, exposing deeper wounds.

I offer my own experience from Florida in the 2020 election to illustrate the problem. I conducted tracking polls in the weeks leading up to the presidential election. To complete 1,510 interviews over several weeks, we had to call 136,688 voters. In hard-to-interview Florida, only 1 in 90-odd voters would speak with our interviewers. Most calls went unanswered or rolled over to answering machines or voicemail, and most of those voters were never interviewed despite multiple attempts.

The final wave of polling, conducted Oct. 25-27 to complete 500 interviews, was the worst for cooperation. We could finish interviews with only four-tenths of 1 percent of our pool of potential respondents. As a result, this supposed “random sample survey” seemingly yielded, as did almost all Florida polls, lower support for President Donald Trump than he earned on Election Day.
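The cooperation figures in the two paragraphs above are easy to check from the numbers given. This short Python sketch restates that arithmetic; the implied final-wave pool size is an inference from the stated rate, not a figure from the text.

# Overall completion rate, from the figures stated above.
completed = 1510
dialed = 136688
rate = completed / dialed
print(f"Overall cooperation: {100 * rate:.2f}%, about 1 in {1 / rate:.0f}")  # ~1.10%, roughly 1 in 90

# Final wave: 500 interviews at four-tenths of 1 percent cooperation.
final_rate = 0.004
implied_pool = 500 / final_rate
print(f"Implied final-wave pool: roughly {implied_pool:,.0f} voters")  # 125,000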

After the election, I noted wide variations in completion rates across different categories of voters, but nearly all were still too low for any actual randomness to be assumed or implied.

Many voters who fit the “Likely Trump Supporter” profile were not willing to do an interview. It was especially hard to interview older men. Similarly, we were less likely to complete interviews with Trump households in Miami’s media market. Whatever the motivation, this behavior almost certainly introduced bias into poll results, dampening apparent support for Trump.

Pollsters and poll readers should expect low and variable cooperation rates to persist, undermining randomness. Given that, cooperation rates need to be published with all polls, to add a dash of real-world sobriety to our weighing of poll results. Presently, this is very rarely done for public or private political polling. If you don’t believe me, ask your pollster for his “disposition of sample” report and get ready for some cagey equivocation.

Some say online polling will help, and it may. But most online polling uses non-random samples from pre-recruited “panels” of voters who have signed up to be interviewe­d, typically for some incentive. And online surveys have serious data quality or integrity issues. Most voters rush through them too rapidly for real thought. And we cannot verify that online voters are indeed registered to vote or have the requisite vote history they may claim.

One promising approach to making online samples more verifiable and random is by texting interview requests to a genuinely random sample of those on the voter rolls. But standing pat on the old ways, or denying the non-randomness of today’s polls, won’t make polls great again.
