Mail & Guardian

Matric marks adjusted only if necessary

Standardisation eliminates discrepancies that have nothing to do with pupils’ abilities

- Mafu Rakometsi is the chief executive of Umalusi, the Council for Quality Assurance of General and Further Education and Training in South Africa

The standardisation of national examination results generates public interest whenever Umalusi announces the approval of the results at its annual media briefing. Many educational commentators have recently weighed in on the issue and on the methodology of standardisation.

One of Umalusi’s responsibilities as a quality council in basic education is to ensure that the assessments and examinations it is responsible for are of an appropriate standard. One of the qualifications that Umalusi assures is the national senior certificate (NSC).

Need for standardisation

Standardisation is the moderation process used to mitigate the effects of exam-related factors, other than pupils’ subject knowledge, abilities and aptitude, that affect their performance.

The standardisation of examination results is necessary to take care of any variation in the standard of the question papers, which may occur despite careful moderation, as well as variations in the standard of marking that may occur from year to year. Other variables include undetected errors and pupils’ interpretation of questions.

During the standardisation process (which also involves statistical moderation), qualitative input from external moderators, reports by internal moderators, post-examination analysis reports and the principles of standardisation are considered.

Standardisation is necessary to achieve comparability and consistency of examination standards over the years and to mitigate the variables that affect pupil performance from one year to another, for example the cognitive demand and varying difficulty of questions, marking, curriculum changes and interventions.

Standardisation aims, in the main, to achieve an equivalent standard of examination across years, subjects and assessment bodies, and to deliver a relatively constant product to the market: universities, colleges and employers.

We can expect that, when the standards of examinations are equivalent, the corresponding statistical mark distributions should also be comparable.

This principle of correspondence forms the basis for comparing distributions with the norms, or historical averages, developed over four to five years. The comparison includes medians, means, pass, failure and distinction rates, and pairs analysis, which plays a valuable role in the absence of historical data.
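To make that kind of comparison concrete, here is a minimal Python sketch, purely illustrative, in which the marks, the pass and distinction thresholds and the norm values are all invented; it simply sets a current cohort's summary statistics against an assumed historical average.

```python
# Purely illustrative: compares a current cohort's mark distribution with an
# assumed historical norm. All marks, thresholds and norm values are invented.
from statistics import mean, median

def summarise(marks, pass_mark=30, distinction_mark=80):
    """Summary statistics of the kind compared during standardisation."""
    n = len(marks)
    return {
        "mean": mean(marks),
        "median": median(marks),
        "pass_rate": sum(m >= pass_mark for m in marks) / n,
        "distinction_rate": sum(m >= distinction_mark for m in marks) / n,
    }

# Hypothetical current cohort and a norm assumed to come from past years.
current = [23, 35, 41, 48, 52, 55, 61, 67, 74, 82]
historical_norm = {"mean": 55.0, "median": 54.0,
                   "pass_rate": 0.78, "distinction_rate": 0.08}

for key, stat in summarise(current).items():
    print(f"{key}: current={stat:.2f}, norm={historical_norm[key]:.2f}, "
          f"difference={stat - historical_norm[key]:+.2f}")
```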

The adjustments (decided by the assessment standards committee of Umalusi) consistently follow guiding principles. The committee comprises academics with extensive experience and expertise in statistical moderation, statistics, assessment, curriculum and education.

Although the final stage of the process, namely standardisation, may seem highly statistical, the adjustment is the culmination of a long process of receiving and reflecting on qualitative and quantitative inputs.

This starts with the setting of papers, then moderation, the writing of exams, the marking of exams, verification and only finally the adjustment of mark distributions.

Given the complex nature of these stages and processes, misinterpretations can arise, especially if one observes any of the stages in isolation, or only the final one. The whole process of standardisation is the basis for Umalusi to declare exams fair, valid and credible, thereby building public trust and confidence.

Standardisation is an international practice, and all large-scale assessment systems use some form of standardisation.

The method used by Cambridge International Examinations involves comparing the means and standard deviations of the current exams with those of previous years.

This data is then used to set the grade boundaries — for example, an A could be 80% and above in one year, and 75% the following year, depending on the data.
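As a rough illustration of that idea (not Cambridge's actual procedure, which also relies on examiner judgement), the hypothetical sketch below lets the A boundary move with the data so that roughly the same share of candidates earns an A whether the paper turned out easier or harder; the cohorts, the a_boundary function and the target proportion are all invented.

```python
# Illustrative only: a crude, percentile-based way of letting a grade
# boundary move with the data. The cohorts and target proportion are invented.
def a_boundary(marks, target_a_proportion=0.10):
    """Return the lowest mark that still keeps roughly the target share of A grades."""
    ordered = sorted(marks, reverse=True)
    cutoff_index = max(1, round(len(ordered) * target_a_proportion))
    return ordered[cutoff_index - 1]

easier_year = [45, 52, 58, 63, 66, 70, 74, 78, 81, 85]   # higher-scoring paper
harder_year = [38, 44, 50, 55, 59, 62, 66, 70, 73, 77]   # lower-scoring paper

print("A boundary, easier paper:", a_boundary(easier_year))  # boundary sits higher
print("A boundary, harder paper:", a_boundary(harder_year))  # boundary drops
```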

This system is also used by several African countries whose educational systems are still closely aligned with the Cambridge system.

The method used in South Africa is that of norm referencing.

Principles and assumptions

One of the main assumptions underlying standardisation is that, for sufficiently large populations (cohorts), the distribution of aptitude and intelligence does not change appreciably from year to year, so one can expect the same performance levels from cohorts of roughly the same size over time.

The standardisation process is based on the principle that, when the standards of examinations from one year to the next are equivalent, the statistical mark distributions that correspond with them should be the same, apart from unintended statistical deviations. Standardisation is a statistical moderation that compares the mark distributions of the current examination with the corresponding average distributions of a number of past years to determine the extent to which they correspond.

If there is good correspondence, it can be accepted that the examinations were of an equivalent standard. If there are significant differences, the reasons for those differences should be established.

On occasion, these differences may be because of factors such as a marked change in the composition of the group of candidates offering a particular subject, poor preparation for the exams because of some disruption in the school programme, or very good preparation because of special support from educators.

In the absence of valid reasons for the differences, it should be accepted that the differences are because of deviations in the standards of the examination or of the marking, and the marks should be adjusted to compensate for these deviations.

In view of the department of basic education’s policy regarding progressed pupils, a breakdown of the statistical mark distributions, including and excluding the progressed pupils, was provided to the assessment standards committee, but generally the difference between them was considered to be small.

Furthermore, because progressed pupils have in recent years been part of the cohort who wrote the NSC, but not identified as such, their marks would have been included in the historical average.

Achieving standardisation

Standardisation decisions are finalised at a meeting between the assessment body and Umalusi. The assessment body presents its results after completing an analysis of its examination results, with a view to identifying any unexpected results, idiosyncrasies and cases deserving special attention.

Subjects are moderated independently and the decision taken on one subject has no influence on those taken on other subjects.

The results are also examined in light of interventions that have been implemented in the teaching and learning process, shifts in pupil profiles, and so on. The assessment body makes sure that it has a thorough understanding of which adjustments would be appropriate and what it would like to propose in this regard at the standardisation meeting with Umalusi.

The standardisation process compares the statistical distribution of the raw marks of the current examination with the predetermined historical average distribution of the raw marks over the past five years, and considers the adjustments required to bring the distribution of raw marks in line with the expected distribution, taking into consideration the comparative subject analysis as well as the moderation and marking reports.

Umalusi will only consider adjustments where there is compelling evidence that it is necessary to do so, in which case the following may occur:

If the distribution of the raw marks is below the historical average, the marks may be adjusted upwards to the historical average, subject to the limitation that no adjustment should exceed half of the actual raw mark (half of what the candidate got) or 10% of the maximum marks for the subject.

If the distribution of the raw marks is above the historical average, the marks could be adjusted downwards to the historical average, subject to the limitation cited above.
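Reading those two limits as a joint cap, the hypothetical Python sketch below shows how such a rule might be applied to an individual candidate's mark; the adjust_mark function, the proposed adjustment and the mark totals are invented for illustration and simplify the per-mark-point adjustments actually agreed at the standardisation meeting.

```python
# Illustrative only: applies a standardisation adjustment to one candidate's
# raw mark, capped at half the raw mark and at 10% of the subject's maximum.
# The proposed adjustments and mark totals here are hypothetical.
def adjust_mark(raw, proposed_adjustment, max_marks=300):
    """Apply an upward or downward adjustment within the stated limits."""
    cap = min(0.5 * raw, 0.1 * max_marks)          # limit on the size of any change
    bounded = max(-cap, min(cap, proposed_adjustment))
    adjusted = raw + bounded
    return max(0, min(max_marks, round(adjusted)))  # keep within the mark range

# A subject assumed to be marked out of 300:
print(adjust_mark(raw=60, proposed_adjustment=40, max_marks=300))   # capped at +30
print(adjust_mark(raw=200, proposed_adjustment=-50, max_marks=300)) # capped at -30
```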

Standardisation offers at least some confidence of comparability between successive examination standards, thus giving candidates equal opportunity over the years, regardless of possible deviation in the standard of the question paper the candidates wrote.

It must also be noted that examination test items are not pretested and calibrated. It is hoped that, as assessment systems start to use pretested items, the need for standardisation at the back end of the examinations will be minimal.

Finally, it must be emphasised that mark adjustments do not compensate for the effects of poor teaching or learning. Their sole purpose is to ensure that equivalent standards are maintained over the years for the different assessment bodies.
