National Post (National Edition)

Exploring the ‘credibility chasm’

- STEPHEN GORDON

Think-tanks are an ever-present, yet somehow under-examined feature of the public policy landscape. They get a lot of press, at least partly because they are adept at issuing press releases advertising their work to the media, complete with pull quotes and readily available experts for radio and TV hits. Academic studies — the sort of work written by professors for professors — pass almost unnoticed, mainly because most of them are not relevant to current policy debates, and because peer-reviewed publications are not so readily accessible. But is visibility the same thing as credibility?

It would seem not. Carey Doberstein, a political scientist at the University of British Columbia, recently published a study in Canadian Public Policy on the credibility gap — he calls it a “credibility chasm” — between academic research and research published by think-tanks and advocacy organizations. Interestingly, his study was carried out not among the general population, but among policy analysts in the provincial governments of British Columbia, Saskatchewan, Ontario, and Newfoundland and Labrador.

Participants in the study were asked to read and evaluate the credibility of different studies in two areas of provincial competence — minimum wages and income splitting. The analysts were asked to evaluate a set of five or six studies produced by academics, think-tanks and advocacy groups. Doberstein very sensibly does not draw inferences about credibility from these evaluations: one study is hardly enough to judge the credibility of one group, or even of one researcher. He focuses instead on how the source of a study affects policy analysts’ perceptions of its credibility.

Instead of sending the studies out to the analysts under their proper affiliations, Doberstein randomly altered them. For example, a study on the effects of an increase in the minimum wage, written by researchers at the University of Toronto and published in a peer-reviewed journal, was sent out with the correct affiliation to one group of analysts, under the name of the Canadian Centre for Policy Alternatives (CCPA) to another group, and under the name of the Fraser Institute to yet another group. Similarly, in addition to being sent out under its own name to one group, a CCPA study would be sent out as a University of Toronto study to a different group, and represented as a Fraser Institute study to yet another set of analysts, and so on. Two advocacy groups, the Wellesley Institute and the Canadian Federation of Independent Business, rounded out the minimum wage exercise, and a similar mix of academic, think-tank and advocacy groups was used for the income splitting case.

This randomization strategy allows Doberstein to identify the reputation effects of the various sets of researchers: How is a study’s credibility affected by its affiliation? The answer is pretty much what you’d expect. Adding a university affiliation to a think-tank or advocacy group study increases analysts’ perceptions of its credibility, while adding a think-tank or advocacy group’s name to an academic study makes it less credible. Generally, credibility among policy analysts declines as you move from university affiliations to think-tanks to advocacy groups.

These results aren’t hard to explain. Policy analysts know full well that advocacy groups cannot be expected to publish anything that does not fit their stated agendas, so a study showing (once again!) that the data supports their previously held position is not a particularly strong signal. Doberstein finds a similar effect among think-tanks: those with a more stridently ideological focus (the CCPA, the Fraser Institute) are viewed as less credible than the relatively neutral C.D. Howe Institute.

Is this good news or bad? On the positive side, it shows that policy analysts are well aware of the incentives facing various sets of researchers, and know enough to put their work in context. On the downside, one might have hoped that analysts could set all that aside and evaluate the research on its own merits. Of course, that’s an ideal almost no one lives up to: this is why so many academic journals use double-blind peer review, in which neither authors nor reviewers know the other’s identity.

Perhaps the more interesting question is why advocacy groups and ideologically driven think-tanks even bother to produce reports that are discounted so heavily by policy analysts. One answer might simply be that their reports aren’t written for the benefit of analysts; they’re written for the benefit of their donors. People like to have their beliefs confirmed, and they’re willing to pay to have someone tell them that they were (once again!) right.

This discussion also provides some insight into the challenges facing the media, particularly as it concerns the markets for news and opinion. Asking people to pay someone to tell them what they want to hear is a viable business model, and many digital outlets — from The Rebel through Canadaland to Rabble — are in the process of filling out that landscape. (It also raises the question of why the CBC would want to cut into this action with its CBC Opinion site. There’s no obvious market failure here that needs a public-sector fix.)

News, on the other hand, has the elements of a pure public good: everyone benefits from knowing the basic facts of what is going on, and technology has made it almost impossible to control access to news once it’s been published. Profits from advertising revenues can no longer finance news gathering to the same extent that they used to, but academic researchers can still fall back on teaching to cross-subsidize their research work. If you really want to make an academic researcher sweat, ask her to imagine trying to make a living from her research alone.
