Marysville Appeal-Democrat

A California user’s guide to political polls: Six easy tips

- By Ben Christopher, CalMatters

Another day, another poll in California.

This latest batch of numbers comes from UC Berkeley’s Institute of Governmental Studies, which echoes other recent independent polls in showing Democratic Lt. Gov. Gavin Newsom leading Republican businessman John Cox by a healthy margin in the governor’s race, U.S. Sen. Dianne Feinstein fending off a challenge from fellow Democrat state Sen. Kevin de León, and defeat in store for ballot measures to repeal the gas tax hike and allow more rent control. National polls largely suggest that the U.S. House will flip to the Democrats and the U.S. Senate will remain in Republican control.

Whether those data points lift your spirits or fill you with political dread might influence how seriously you take such polls – and what lessons you drew from the jaw-dropping surprises of the 2016 election. Regardless, brace yourself for news about more polls in the final countdown to election day: phone polls and online polls, independent polls and hired-by-one-side polls, red polls and blue polls.

You may rightly wonder: What kind of statistical black magic is performed behind the scenes to produce any given poll? Keep these tips in mind:

Start with how the question is worded. A recent SurveyUSA poll found 58 percent of respondents in favor of Proposition 6, the measure to repeal the 2017 gas tax increase – a repeal that would mean roughly $5 billion in lost revenue per year.

Contrast those results with the recent Institute of Governmental Studies poll, which found that only 40 percent support the repeal. That’s an 18 percentage point gap between the two surveys – the difference between an anti-tax landslide and a resounding defeat.

What explains the contradictory results? See if you can spot the difference in how each pollster asked about the ballot measure:

SurveyUSA: Proposition 6, a constitutional amendment which would repeal gasoline and diesel taxes, and vehicle fees, that were enacted in 2017 and would require any future fuel taxes be approved by voters. A YES vote on Prop 6 would repeal fuel tax increases that were enacted in 2017, including the Road Repair and Accountability Act of 2017. A NO vote on Prop 6 would keep the fuel taxes imposed in 2017 by the California legislature in place, and would allow the legislature to impose whatever fees and taxes it approved in the future, provided 2/3 of the CA House and 2/3 of the CA Senate approved. On Proposition 6, how do you vote?

IGS: Proposition 6: Eliminates certain road repair and transportation funding. Requires certain fuel taxes and vehicle fees be approved by the electorate. Initiative constitutional amendment. Repeals a 2017 transportation law’s taxes and fees designated for road repairs and public transportation. Fiscal impact: Reduced ongoing revenues of $5.1 billion from state fuel and vehicle taxes that mainly would have paid for highway and road maintenance and repairs, as well as transit programs.

Half of the UC Berkeley poll respondents were also given the hint that Prop. 6 is frequently called the “gas tax repeal initiative.” Even so, the dramatically different framing seems to have steered voters towards different opinions on the issue. While the Berkeley poll mostly sticks to the language of the proposition itself (which emphasizes how the measure would take away transportation funding), the SurveyUSA poll describes the proposition primarily as a “repeal of gasoline and diesel taxes.”

Last September, the Public Policy Institute of California was able to produce a similar split in opinion among the same group of people by framing Prop. 6 as either a gas tax repeal or a funding cut.

Be careful, too, about reading too much into small slices of a poll. One recent statewide survey, for instance, reported that 36 percent of naturalized citizens backed the Republican candidate for governor, Cox, over his opponent, Newsom. But just a month later, Cox’s support among naturalized citizens had fallen to 24 percent. Why would one-third of Cox-backing immigrants abandon their candidate in his time of need?

The thing is, they probably didn’t. As Dean Bonner of the Public Policy Institute of California points out, the sampling error in a poll – how much you might reasonably expect the estimates in a survey to be off – increases as the number of people surveyed shrinks. A 5 percent decline among all Californian voters, for example, could be meaningful. Devastating, even. But a reported 5 percent decline among Asian-American voters over the age of 75 living in Imperial Valley – not so much. The pollster likely doesn’t have very many people in the sample who fit all of those demographic descriptions, so the odds of getting one with a statistically out-of-character opinion who throws off the average are pretty high.

Likewise, it’s easy to over-interpret very small changes, even with very big samples. Last month, 39 percent of likely voters said that they would be voting for Cox, according to PPIC. This month it was 38 percent. Was there a real change in public opinion? There’s no good reason to think so. This slight difference in how surveyed voters responded probably just comes down to random chance.
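To put rough numbers on that intuition, here is a minimal sketch of the standard 95 percent margin-of-error calculation. The sample sizes are hypothetical (they are not the actual counts from PPIC, IGS or any poll mentioned above); the point is only how quickly the error bars widen as the group shrinks.

# A rough illustration of sampling error, using the textbook formula for the
# 95 percent margin of error on a reported percentage: 1.96 * sqrt(p*(1-p)/n).
# The sample sizes below are hypothetical, not taken from any specific poll.
import math

def margin_of_error(p: float, n: int) -> float:
    """95 percent margin of error, in percentage points, for a share p measured from n respondents."""
    return 1.96 * math.sqrt(p * (1 - p) / n) * 100

# A big statewide sample versus a tiny demographic slice of that same sample.
print(f"Full sample (n=1,700): +/- {margin_of_error(0.39, 1700):.1f} points")  # about 2.3
print(f"Small slice (n=40):    +/- {margin_of_error(0.39, 40):.1f} points")    # about 15

With error bars of a couple of points on the full sample, a one-point slip from 39 to 38 percent is well inside the noise; with error bars of 15 points on the small slice, even a double-digit swing can mean nothing at all.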

One popular takeaway from the 2016 election is that the victory of President Trump represented a catastrophic failure of political polling.

But it didn’t. Not really, anyway. Polls try to measure popular sentiment, and on election day the polling average on the political website FiveThirtyEight put Hillary Clinton roughly 3.5 points ahead of President Trump. In fact, she won the popular vote by about 2 points. Not bad.

President Donald Trump won the election, of course, because he won the majority of Electoral College votes (he had, in other words, fewer votes total, but his were in the right places). The polls were only monumentally wrong if you assess their performance by a metric that most weren’t measuring.

It’s an easy mistake to make. Take the most recent poll from the Public Policy Institute, which found that 49 percent of likely voters in the state’s 11 most competitive congressional districts plan to vote for a Republican candidate, compared to 44 percent who are leaning toward the Democrat. One would be tempted to conclude from that information that Democratic hopes of flipping red seats blue are doomed. But that would be wrong. The result isn’t a measure of any given race, but an average across nearly a dozen possibly very different ones.

That kind of aggregate measure “says absolutely nothing about what’s going to happen in Duncan Hunter’s district, Dana Rohrabacher’s district or Devin Nunes’ district,” said Jane Junn, a political science professor at the University of Southern California and an expert on polling methodology.
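A toy calculation shows why. The districts and margins below are invented for illustration (they are not PPIC’s numbers); they simply demonstrate that Republicans can lead “on average” while Democrats lead in several individual seats.

# Entirely made-up district margins: positive numbers mean the Republican leads,
# negative numbers mean the Democrat leads.
republican_lead_by_district = {
    "District A": +18,  # one safe Republican seat pads the average
    "District B": +12,
    "District C": -2,   # Democrat narrowly ahead
    "District D": -4,
    "District E": +1,
}

average_lead = sum(republican_lead_by_district.values()) / len(republican_lead_by_district)
democratic_pickups = [d for d, lead in republican_lead_by_district.items() if lead < 0]

print(f"Average Republican lead across districts: {average_lead:+.1f} points")  # +5.0
print(f"Districts where the Democrat leads: {democratic_pickups}")              # 2 of the 5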

Polling isn’t all about crunching numbers and interrupting strangers while they try to finish dinner. Being able to envision who is actually going to bother to vote this year and then sifting through your results until your data matches that vision? That takes imagination.

The problem, of course, is that nobody – not even stat geeks at polling outfits – can predict the future.

Coming up with a workable turnout model, said Junn, is “sort of like throwing spaghetti against the wall.” You mess around with mathematical weights until your sample of likely voters “comports with what you think it’s supposed to look like ... it’s more like an art than a science – and it’s a very ugly art,” she said.
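Here is a bare-bones sketch of the kind of reweighting Junn is describing. Every number in it is hypothetical; the point is only that the headline figure moves whenever the pollster’s guess about the electorate moves.

# Adjust each group's weight so the raw sample matches a guessed-at electorate,
# then report the weighted average. All of the groups, shares and responses
# below are invented for illustration.
sample_share = {"age_18_34": 0.15, "age_35_64": 0.50, "age_65_up": 0.35}      # who answered the phone
assumed_turnout = {"age_18_34": 0.25, "age_35_64": 0.45, "age_65_up": 0.30}   # who you think will vote
support_for_candidate = {"age_18_34": 0.60, "age_35_64": 0.48, "age_65_up": 0.40}

# Weight each group up or down until the sample "comports" with the assumed electorate...
weights = {group: assumed_turnout[group] / sample_share[group] for group in sample_share}

# ...and the likely-voter topline is just the reweighted average of the responses.
topline = sum(support_for_candidate[g] * sample_share[g] * weights[g] for g in weights)
print(f"Weighted support: {topline:.1%}")  # changes as soon as assumed_turnout changes

Swap in a different assumed_turnout and the very same interviews yield a different headline number – which is exactly the judgment call discussed below.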

But some pollsters make that art look pretty scientific.

The Public Policy Institute of California, for example, calls state residents at random and then filters their responses through a likely voter algorithm. That sorting process is based on how they answer a series of questions about past voting behavior, their intention to vote and other factors that have, historically, been pretty good predictors of electoral participation.

Similarly, the Institute of Governmental Studies determines its “likely voter” pool not by guessing at the demographic composition of the electorate beforehand, but by applying a formula that takes into account whether a person says they plan on voting, how often they’ve voted before and how interested they are in the upcoming election. Then they cross-check the results with commercially available databases of registered voter data (called voter files), so they can tell which respondents have already cast their ballots (making them the most likely voters of all) and which ones are lying.

But those two approaches – systematic and grounded in political science research though they are – still amount to a “judgment call on the part of the pollsters,” said Mark DiCamillo, director of the Berkeley poll. It still comes down to deciding who counts as a voter and who doesn’t, without knowing for sure.
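For a concrete, deliberately oversimplified picture of what such a screen can look like, here is a sketch of a point-based likely-voter filter with a voter-file check. The scoring rules and cutoff are invented; they are not PPIC’s or IGS’s actual formulas.

# A simplified, invented likely-voter screen in the spirit of the approaches
# described above (NOT the actual PPIC or IGS formula). It scores each
# respondent on stated intention, past participation and interest, and treats
# anyone the voter file shows as having already returned a ballot as a voter.
from dataclasses import dataclass

@dataclass
class Respondent:
    plans_to_vote: bool            # says they intend to vote this November
    past_elections_voted: int      # how many of the last four elections they voted in
    interest: int                  # 1 (not following the race) to 5 (following closely)
    ballot_already_returned: bool  # from a commercial voter file, if the respondent was matched

def is_likely_voter(r: Respondent) -> bool:
    if r.ballot_already_returned:      # already voted: the most likely voter of all
        return True
    score = 0
    score += 2 if r.plans_to_vote else 0
    score += r.past_elections_voted    # up to 4 points for past participation
    score += 1 if r.interest >= 4 else 0
    return score >= 4                  # an arbitrary cutoff; this is the judgment call

print(is_likely_voter(Respondent(True, 3, 5, False)))   # True: habitual, engaged voter
print(is_likely_voter(Respondent(True, 0, 2, False)))   # False: says they'll vote, history says otherwise
print(is_likely_voter(Respondent(False, 1, 1, True)))   # True: their ballot is already in

Move the cutoff by a point and some respondents flip from “likely” to “unlikely,” which is one reason two careful pollsters can survey the same state and publish different toplines.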

Another approach is to predict many possible outcomes at once. In a recent poll of California’s 25th congressional district, the New York Times published the polling results of seven different “turnout scenarios,” ranging from an electorate composed of “people who say they are almost certain to vote, and no one else” (in this version of reality, GOP Rep. Steve Knight of northern Los Angeles County leads Katie Hill, a Democrat, by 3 points) to “the types of people who voted in 2014” (in which case Knight is up by 9). And of course, other surveys using other turnout models show Hill leading him.
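Mechanically, publishing several scenarios just means re-filtering the same interviews under different assumptions about who shows up and reporting each result. The respondents and margins below are made up; they mimic the shape of the Times’ exercise, not its numbers.

# A dozen invented respondents: (preferred candidate, certainty of voting, voted in 2014?)
# Certainty: 3 = almost certain, 2 = probably, 1 = unlikely.
respondents = [
    ("Knight", 3, True), ("Knight", 3, True), ("Hill", 3, True), ("Knight", 3, True),
    ("Hill", 3, False), ("Hill", 2, False), ("Hill", 2, False), ("Hill", 2, False),
    ("Knight", 2, True), ("Hill", 1, False), ("Hill", 1, False), ("Hill", 1, True),
]

# Each turnout scenario is just a different rule for who counts as a voter.
scenarios = {
    "Almost certain to vote only": lambda r: r[1] == 3,
    "Probable voters included":    lambda r: r[1] >= 2,
    "2014-style electorate":       lambda r: r[2],
}

for name, counts_as_voter in scenarios.items():
    pool = [r for r in respondents if counts_as_voter(r)]
    knight = sum(r[0] == "Knight" for r in pool)
    hill = sum(r[0] == "Hill" for r in pool)
    lead = (knight - hill) / len(pool) * 100
    print(f"{name:28s} Knight {lead:+.1f} points (n={len(pool)})")

With only a dozen made-up interviews the swings are exaggerated, but the mechanism is the same one the Times applied to a real sample: same data, different electorate, different winner.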

Why offer so many results? Radical transparency might be one explanation. Pollsters operate in a sea of uncertainty, so why not express that to the reader?

Another explanation, from DiCamillo: “They’re hedging their bets … that’s how you could do it if you didn’t want to stick your neck out.”

In California, roughly 7-in-10 voters are registered to cast their ballot by mail. In some particularly lopsided races, that might mean the election is effectively over before November 6.

Paul Mitchell, vice president of Political Data Inc. and the man responsible for a compulsively addictive interactive absentee vote tracker, wrote this week about the perils of using early voting results as a prognostication tool. For one, he said, most pollsters already ask respondents if they’ve already voted. So if, for example, you see a surge of Democrats in the early vote in a district where most polls had the two major parties tied, that isn’t necessarily new information. Odds are those Democrats were already counted by the pollsters, and the Republicans are probably on their way.

Likewise, skyrocketing (or lackluster) turnout in the early days doesn’t necessarily say anything about overall turnout. It’s possible voters are just a little quicker or a little slower to the punch this year. Or maybe certain county registrars’ offices are speedier than others.

So enjoy watching the numbers coming in, as Mitchell wrote, but “viewers of this data should take it with a grain of salt and not fall into the trap of over-analyzing it.”

