Northwest Arkansas Democrat-Gazette

Polls apart

BRENDA LOOPER, Assistant Editor. Brenda Looper is editor of the Voices page. Read her blog at blooper0223.wordpress.com. Email her at blooper@arkansasonline.com.

I’m sure some people must think I spend the bulk of my time hanging out on Internet comment boards. I’ll admit I spend more time checking them out than I should for the sake of my blood pressure, but it doesn’t take long to get a read on what people are ticked off about.

What often has my eyes rolling is the schizophrenic treatment of polls from people who have very little understanding of what they actually are and can do—if they’re positive for their guy, the pollsters can do no wrong, but if they’re negative, all pollsters should be hung by their pinky fingers.

True, some polling outfits are terrible and perhaps pinky nooses should be prepared. Those would mostly be polls that, for example, use a sample that’s too small or nonrepresentative, employ online-only opt-in polling that lets people weigh in multiple times, or use questions designed to lead to predetermined results. But the majority of old hands in the polling game are responsible and transparent, and do a valuable service.

Yet so many pollsters are castigated for not reflecting what hyperpartisans think they should.

There’s a reason Gallup dropped out of the prediction part of election polling in 2015, choosing instead to focus on how voters felt about issues. As Time’s Daniel White wrote at the time: “When it comes to election polling, it’s the best of times and the worst of times. On the positive side, there is more polling than ever from private universities, news media and small independent shops. Sites like HuffPost Pollster, RealClearPolitics and FiveThirtyEight also provide sophisticated analysis of what the polls mean. On the negative side, the glut of polls often doesn’t add up to much, while problems with getting accurate results are starting to hurt the polling industry’s reputation.”

When your no-account brother-in-law starts a poll by talking to his beer buddies, of course it’s going to make all polling look bad.

What many people miss about the polls in the last election is that most established polls were accurate within the margin of error on the popular vote count, and that—not the electoral college result—is what those national polls measure. To gauge the electoral college count, Frank Newport of Gallup said, you would need to rely more on state-level polling in swing states, but that polling can have its own accuracy issues (sample size, quality, etc.). Trying to predict how people will vote can also be thrown off by unexpected election-day turnout, or by people who don’t know or don’t want to say who they’re voting for.

In a close race like this last one, especially with two such unlikable candidates (yet still more likable than Congress or Vladimir Putin), you have to remember that polls, which capture how respondents feel at a particular moment in time, don’t deal in certainties, but rather probabilities. As Bill Whalen, a research fellow at Stanford’s Hoover Institution, said after the election, “Ultimately, pollsters are not Nostradamus.”

Yeah, I know, hard to believe. Maybe that’s why outfits like RealClearPolitics and FiveThirtyEight aggregate and average polls, and generally can be a bit more accurate. Of course, if you don’t care about accuracy … well, you’re probably the people annoying me on those comment boards. My boy is glaring at you from cat heaven right now.
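For my fellow nerds, here’s roughly what an aggregator does, stripped down to a sketch in Python. The poll numbers below are invented purely for illustration, and the real aggregators also adjust for things like recency and each pollster’s track record.

```python
# A toy poll aggregator: average several (made-up) polls,
# weighting each by its sample size.
polls = [
    # (share for A, share for B, sample size) -- invented numbers
    (0.48, 0.44, 900),
    (0.46, 0.47, 1200),
    (0.49, 0.45, 600),
]

total_n = sum(n for _, _, n in polls)
avg_a = sum(a * n for a, _, n in polls) / total_n
avg_b = sum(b * n for _, b, n in polls) / total_n

print(f"Weighted average: A {avg_a:.1%}, B {avg_b:.1%}")
```

Averaging helps because individual polls miss in different directions, and the flukes tend to cancel out.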

So how can you tell if a poll is good or bad? There’s too much to be covered in this space, but most good polls have some things in common, including being transparent on methodology and questions when reporting results.

Writing on the Post Calvin blog, Ryan Struyk (a fellow nerd and a data reporter for CNN) said the building blocks of good opinion polls include whether the poll randomly selects participants (the preferred method) or the participants select themselves. Self-selection typically happens with online opt-in polls, and is more likely to skew results, as the sketch below suggests. Whether live or automated interviews are used is also a consideration: it’s easier to lie to a machine, and because it’s illegal in most cases to robo-dial cell phones, anyone who has only a cell phone wouldn’t be able to participate in an automated poll.
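To see why self-selection matters, here’s a toy simulation (all numbers invented): the population splits 50-50, but one candidate’s supporters are twice as eager to click on an opt-in poll.

```python
import random

random.seed(1)

# True = backs candidate A; the real split is 50-50.
population = [random.random() < 0.50 for _ in range(100_000)]

# A proper random sample of 1,000 people.
random_sample = random.sample(population, 1000)

# An opt-in poll: A's supporters respond 20% of the time,
# everyone else only 10% of the time.
opt_in_sample = [p for p in population
                 if random.random() < (0.20 if p else 0.10)]

print(f"Random sample: {sum(random_sample) / len(random_sample):.1%} for A")
print(f"Opt-in sample: {sum(opt_in_sample) / len(opt_in_sample):.1%} for A")
# The opt-in figure lands near 67%, far from the true 50%.
```

And no amount of extra volume fixes the opt-in poll; it measures enthusiasm, not opinion.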

One should also consider how phone numbers for the poll are picked—the best coverage comes, Struyk wrote, from random-digit dialing to blocks of known residential numbers. Polls that use only numbers from voter registration are more problematic; as we’ve seen from voter rolls in Arkansas and elsewhere, clearing out old and incorrect information can be a massive task.

Weighting of data is also sometimes necessary to adjust for differences between the sample and census demographics. Struyk noted that really good polls use an “iterative weighting model” to weight individual participants, perhaps by age and gender. He cautioned against weighting by political partisanship.
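For the curious, here’s a bare-bones sketch of what such an iterative model (statisticians call it “raking”) might look like, using invented sample and census figures for age and gender; real polls iterate over more variables with finer categories.

```python
# Raking: give every respondent a weight, then repeatedly nudge the
# weights so the weighted sample matches census shares for gender,
# then age, then gender again, until they settle. Figures invented.
respondents = [
    {"gender": "F", "age": "18-44"},
    {"gender": "F", "age": "45+"},
    {"gender": "M", "age": "18-44"},
    {"gender": "M", "age": "18-44"},
    {"gender": "M", "age": "45+"},
]
census = {
    "gender": {"F": 0.51, "M": 0.49},
    "age": {"18-44": 0.45, "45+": 0.55},
}
weights = [1.0] * len(respondents)

for _ in range(20):  # a handful of passes is enough to converge here
    for var, targets in census.items():
        total = sum(weights)
        for category, target_share in targets.items():
            idx = [i for i, r in enumerate(respondents) if r[var] == category]
            current_share = sum(weights[i] for i in idx) / total
            for i in idx:
                weights[i] *= target_share / current_share

for r, w in zip(respondents, weights):
    print(r, round(w, 3))
```

Each respondent ends up counting a bit more or a bit less, so a sample that skews male or young can still report numbers that look like the country.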

And about that margin of error, Struyk wrote: “You just need a few hundred people to get a pretty good picture of what the whole country looks like if you have good sampling—and that’s probably why you’ve never been called for a poll. But the more people you ask, the more exact your answer is going to be. So the margin of error says, hey, we know we are pretty close.”
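The math behind that is friendlier than it sounds. For a simple random sample, the worst-case 95 percent margin of error (when opinion splits 50-50) is about 1.96 times the square root of 0.25 divided by the sample size; a quick sketch shows why a few hundred respondents go a long way.

```python
import math

# Worst-case 95% margin of error for a simple random sample:
# 1.96 * sqrt(0.5 * 0.5 / n).
def margin_of_error(n: int) -> float:
    return 1.96 * math.sqrt(0.25 / n)

for n in (100, 400, 1000, 2000):
    print(f"n = {n:>4}: +/- {margin_of_error(n):.1%}")
# n =  100: +/- 9.8%
# n =  400: +/- 4.9%
# n = 1000: +/- 3.1%
# n = 2000: +/- 2.2%
```

Notice the margin shrinks only with the square root of the sample, so quadrupling the respondents merely halves the error, which is why most national polls stop around a thousand people.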

So the next time someone complains about a poll and says he’s never been called, you know he has no idea how polls are done. Now stop rolling your eyes until you get away.
