Public-opinion polling is not, and never was, broken

Recent elections show that it still works, when performed correctly, write Anna Greenberg and Jeremy Rosner

The Washington Post Sunday, Outlook. Anna Greenberg and Jeremy Rosner are partners at Greenberg Quinlan Rosner, a global polling and campaign strategy firm.

Global financial markets and European Union leaders heaved a sigh of relief after the first round of the French presidential election last month. And so did professional pollsters. It turns out the polls were not broken in France, as they were said to have been in the American presidential election and the British Brexit vote.

But in truth, France wasn’t a departure. The polls in the United States and Britain generally worked well. As a new report this past week from the American Association for Public Opinion Research pointed out, national surveys in the U.S. campaign “were generally correct and accurate by historical standards.” Although polling faces real challenges, nobody has repealed the laws of statistics: When polling is done well, it continues to produce reliable results.

Here, nationwide polls accurately predicted Hillary Clinton’s margin of victory in the popular vote. She won by 2.1 percentage points, while the average polling margin on Election Day was Clinton by 3.2 points. In Britain, our poll for a think tank showed a narrow initial preference to “remain” in the European Union among likely voters but an advantage for “leave” — the winning result — after voters listened to arguments on both sides.
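That 1.1-point gap between the polling average and the result is the kind of miss ordinary sampling error can produce on its own. As a rough illustration (the sample size of 1,000 and the 48 percent support figure below are assumptions for the sketch, not numbers from the article), the standard 95 percent margin of error works out to about three points:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95 percent margin of error for a proportion p estimated from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical national poll of 1,000 respondents showing a candidate at 48%:
moe = margin_of_error(0.48, 1000)
print(round(100 * moe, 1))  # about 3.1 points on a single candidate's share
```

And since a candidate’s lead is the difference between two estimated shares, the uncertainty on the margin itself is larger still, which is why a one-point deviation from the polling average sits comfortably within historical norms.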

Yes, polling in recent years has had to grapple with major challenges, from low response rates to non-response bias, in which some groups choose not to participate (there is evidence of this to some extent among Donald Trump voters). But none of these problems means that the basic science behind survey research has failed or that we can no longer produce high-quality, accurate data. The problem is that too many people are misusing and abusing polls — in three ways in particular.

First, many people treat polls as predictions instead of snapshots in time based on a set of assumptions about who will turn out to vote. Ron Fournier, the publisher of Crain’s Detroit Business, for instance, has argued that Nate Silver got the election wrong because he awarded Trump only a 34 percent chance of winning. Pollsters make judgments about the composition of the electorate based on historical experience and levels of interest in the current election to pull a list of voters to interview. But if those assumptions are wrong, then the polls will be wrong on Election Day. The polls in the Midwest that predicted a Clinton victory generally did not anticipate that, in key industrial states, more rural and exurban white working-class voters would vote than in past presidential contests.
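The mechanics of that turnout assumption can be sketched in a few lines. The group shares and support levels below are invented for illustration, not drawn from any actual poll; the point is only that identical survey responses produce different toplines under different assumed electorates:

```python
def topline(share_rural, support_rural, support_other):
    """Weighted candidate support under an assumed electorate composition."""
    return share_rural * support_rural + (1 - share_rural) * support_other

# Identical survey responses, two different turnout models (numbers hypothetical):
assumed = topline(0.25, 0.30, 0.55)  # pollster's assumed share of rural/exurban voters
actual = topline(0.35, 0.30, 0.55)   # the electorate that actually showed up
print(round(100 * (assumed - actual), 1))  # topline overstates the candidate by 2.5 points
```

A ten-point shift in one bloc’s share of the electorate moves the topline by two and a half points here, with no change in anyone’s stated preference: the survey was right about opinions and wrong about who would show up.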

The tendency of elites to underestimate working-class anger is a real and global problem. The United States and most other major democracies are grappling with intense and historic levels of public grievance related to slow growth; income inequality; and resentments over trade, technology and immigration. That has made voter turnout among specific blocs less predictable worldwide. But that’s not a problem with survey research methodology. Rather, it puts a bigger premium on listening to voters and picking up on who is particularly angry or energized.

Second, the rising cost of collecting high-quality data — because of declining response rates and the increased use of cellphones — has led many researchers to cut corners. Rather than spend more to address such problems, some organizations skimp on practices such as call-backs (to people who didn’t answer) or cluster sampling (to make sure small geographic areas are represented proportionately). They may also use cheap and sometimes unreliable data-collection methods such as opt-in online panels or push-button polling (interactive voice response) that systematically exclude respondents who primarily use mobile devices.

Indeed, according to “Shattered,” the new book by Jonathan Allen and Amie Parnes, the Clinton campaign relied heavily on “analytics” surveys rather than “old school polling” to track the candidate’s standing because the former were cheaper. Analytics surveys are used to gather data for building voter targeting models. They tend to have large sample sizes but skimp on common practices that make traditional polls more accurate. The book quotes a Clinton pollster acknowledging as much on election night: “Our analytics models were just really off. Time to go back to traditional polling.”

Third, good polling requires good listening. Powerful new techniques in big data modeling make it possible to segment and target voters in ways that were undreamed-of a decade ago. Yet voting is an inherently human activity that defies being completely reduced to formulas. The best polling has always been accompanied by directly listening to people, face to face, in their own words.

Many campaigns and media organizations miss opportunities or succumb to polling errors because they do not invest in simply listening to voters. Focus groups are invaluable, as are other ways of listening, such as conducting in-depth interviews, reading online discussion boards or even systematically monitoring conversations on social media.

Open-ended listening can reveal the need to reword survey questions; for example, our recent focus groups suggest that “globalization” is all but meaningless to many voters. Open listening can cast doubt on things that may have become conventional wisdom in a campaign; for instance, we have worked on many races where the “front-runner” was actually quite weak, but that was more evident in focus groups than in standard survey measures of favorability or job performance. Direct listening can also show that not all polling numbers are created equal: While we did not poll for last year’s Clinton campaign, we conducted many focus groups across the country in which it was clear that voters were willing to overlook or tolerate concerns about Trump, while they could not do the same with Clinton (e.g., “I just don’t trust her”). Direct listening revealed that low favorability ratings meant different things for the two candidates. These are qualitative techniques that many media polls and campaigns skip or skimp on, partly because of the cost.

Polling had a good day in France two weeks ago, and with sound practices, it could have another good day this weekend. Whether it has more good days, or instead increasingly becomes a target of skepticism, will depend less on math and more on old-fashioned matters of hard listening, wise budgeting and human judgment.
