The REF should rank journals

THE (Times Higher Education) – News. Andrew Edwards is head of the School of Human and Life Sciences at Canterbury Christ Church University. Tomasina Oh is associate dean of research at Plymouth Marjon University. Florentina Hettinga is reader in the School of Sport, Rehabilitation and Exerc

In 2014, more than 150 UK higher education institutions submitted nearly 200,000 research outputs and 7,000 impact studies to the research excellence framework (REF), at an estimated total cost of nearly £250 million. Those overall figures are not expected to be reduced this time around, so what do we get for a quarter of a billion pounds? How effective is the REF at assessing quality?

The draft guidance on REF 2021 associates quality with originality, significance and rigour, but its grading criteria remain hazy and subject to variation between units of assessment (UoAs). What counts as “world-leading” originality, for instance? And how accurately can small panels of multidisciplinary reviewers determine how far an output would need to fall below the “highest standards of excellence” before it is rated 3* instead of 4*?

Then there is the question of sample size. In 2021, institutions must submit an average of 2.5 outputs per academic in a UoA over the seven-year qualification period. Compared with 2014’s requirement of four articles per academic selected for submission, this is an inclusive approach intended to engage a wider proportion of the academic community. However, it is only a selective snapshot of productivity for active researchers and may not fully differentiate between groups. Moreover, such selectivity seems unnecessary when modern electronic systems are able to cope with huge datasets.

Each of the 34 assessment subpanels consists of about 15 experts. Based on 2014 submission figures, each panellist will need to review more than 700 outputs over a few months, assuming each submission is assessed by two people. The impossibility of doing so with the appropriate level of critical insight is exacerbated by the diversity of topics within each UoA, rendering particularly perverse the instruction that panels must disregard journal hierarchies.

A decade ago, a study put the cost of journal peer reviewing at £1.9 billion a year. Although the efficacy of the system is debated, it is a fundamental principle of publication that assessment of papers is undertaken by reviewers selected for their specialist knowledge of the specific topic in question. That scrutiny is likely to be more rigorous than anything the REF panels’ generalists could manage. Surely it would be a much better use of taxpayers’ money to drop this duplication and free up the panellists to focus on higher-order evaluations, such as the coherence of work and its impact.

Australia’s REF equivalent, known as Excellence in Research for Australia, is a case in point. It recently closed its consultation period for compiling the discipline-specific journal rankings on which it largely relies to assess scientific subjects. These rankings do much more than apply a simple journal impact factor: they recognise the prestige of the publication with respect to each area of research, on the understanding that a journal that is highly prestigious in one field may be less so in a neighbouring one.

The rankings make the plausible assumption that if a discipline agrees that a particular journal carries a 4* ranking, then most articles published therein will be of that quality. Clearly there is no guarantee of that in all cases, but that doesn’t matter at the macro level, particularly if the assessment takes in all outputs published in the relevant period, rather than a REF-style sample.

Apart from being more transparent than the current REF methodology, a fuller desktop evaluation of outputs based on agreed subject-specific publication rankings could be carried out more frequently than every seven years. This would give a truer insight into each research group’s productivity relative to its quality, and provide a stronger basis for the distribution of research funds.
