Business Standard

Measurement in a strategic world

- TCA ANANT

The human resource development ministry declared in June that the Academic Performance Indicator (API)-based assessment system for college teachers is being scrapped. The background to this decision is that ever since the Merit Promotion Scheme for college teachers was introduced in the mid-1980s, concerns have been raised that people were being promoted without regard to "academic quality". To address these concerns, the UGC continuously tinkered with the requirements for promotion. Each innovation was introduced and then discarded on grounds of excessive "subjectivity". Thus, approximately a decade ago, a more objective points-based system was devised, which classified academic work into various "quantifiable categories", each with its own point scale. Research was to be measured by the nature and quality of the publication. The evaluation scheme faced criticism for a variety of reasons, which do not concern us here, and this was the reason for its recent abandonment. In this debate, what is less appreciated is that a parallel recent concern of the UGC, on predatory journals, has its roots in the same policy. The UGC, concerned with the rapid rise of predatory journals, has now started a system of listing approved journals. These lists have been criticised both for including the very journals they wish to exclude and for excluding some of the most eminent journals in the field!

This sequence of developments is a classic demonstration of the unintended but wholly predictable consequences of developing a poor indirect measure of an attribute in an interactive and strategic world. In such a world, the logical consequence of a measurement protocol is that agents will act to improve their standing. This is equally true of academics trying to publish more as of governments seeking to improve the ease of doing business, alleviate poverty, reduce inequality, empower women or pursue any other objective that becomes politically important.

Thus, the nature of the proposed measure and its relationship to the objective become crucial in understanding both how a policy will evolve and what the nature of the outcome will be. To illustrate the challenge, consider target 5.b from the Agenda 2030 of the United Nations, which seeks to "enhance the use of enabling technology, in particular information and communications technology, to promote the empowerment of women". The expert group constituted by the United Nations Statistical Commission to develop indicators for the Sustainable Development Goals (SDGs) proposed that progress on this target be measured by looking at the "proportion of individuals who own a mobile telephone, by sex". In other words, if more women own mobile phones then we have made greater progress on this target! Once this is in place, we can expect demands for special schemes to promote mobile ownership among women, or even for giving them free mobiles. The causal link between the proposed measure and the desired objective is not obvious. What is likely is that the indicator will merely alter the marketing practices of mobile companies and governments. The issue of empowerment, or of using technology for that purpose, will fall by the wayside.

The proponents of an imperfect measure will agree that the measure is imperfect but will argue that, at a point in time, the measure gives us an idea of the dimension of the problem. The ready availability of comparable data makes it a useful descriptive tool. In the initial report proposing the measure, it is entirely possible that all these ifs and buts will be duly footnoted. However, the logic of comparability and the 24x7 scrutiny of public policy on social media will reduce a 15,000-word report to a 140-character headline. The discussion thereafter will be defined by this headline. At this point, the pressure will naturally be to improve the score rather than to resolve the underlying problem. A policy to improve the score is often easier to implement than a change in the underlying cause of the problem. The example of mobile phones and women's empowerment may seem rather small and limited, but if we look around, examples of misdirected focus abound. This can be in areas as far apart as skill development, literacy and educational attainment of children, or admitting students to institutions of higher education. In all of them, the goal of improving the measurable attribute often takes us away from the true objectives.

These issues raise a peculiar paradox: if, recognising all this, we stop using an imperfect measure, then we remain in the dark about the problem. If we use the measure and publicise the problem, policymakers will seek to address it by seeking to improve the measure. This leads to a situation where our measure has improved but not the outcome, creating a second round of ad hoc measurement and equally ad hoc intervention. There are no easy solutions to be found, neither in the measurement space nor in the policy space. What is needed is a return to basics: a focus only on measures and policy instruments that are based on relevance, a clear and unambiguous link to the objective, and simplicity.

Returning to the problem we started with, we should accept that identifying and incentivising quality teachers is an art, not a science. Rather than attempting the impossible task of developing an objective measure of an inherently intangible quality, we should return to simpler systems of trust and reputation.

The writer is former Chief Statistician of India