An algorithm isn't needed to figure out this farrago

The A-level scandal has brought issues of computer bias into the mainstream. We need an AI tsar to fix this socially toxic mess

The Daily Telegraph - Business Comment - Robin Pagnamenta

Roger Taylor scuffed his A-levels, but at least he admits it was his own fault. "I did rather badly for various reasons," Ofqual's embattled chairman admitted to an interviewer last year. "Lack of application. A sense of 'What's the point of it all?' And also a sort of indecision about what I wanted to do."

Luckily for Mr Taylor, those disappointing exam results weren't too much of a setback. He still managed to secure for himself a place at Oxford University to study PPE.

But for the hundreds of thousands of British schoolchildren whose A-level grades were downgraded by Ofqual's algorithms last week, that sense of good fortune is not one they are likely to share.

Instead, Taylor seems to have clumsily stumbled into the midst of Britain's biggest scandal yet over algorithmic bias – an issue that has exploded into the mainstream. Ministers may now have opted for an about-turn, but either way this was a debate that was urgently needed.

An obscure article of EU data protection law that covers decisions based on algorithms says individuals "shall have the right not to be subject to a decision based on automated processing, including profiling" if it has a legal effect on them. Up until now, there has been relatively little scrutiny of the algorithms that increasingly govern our lives.

From credit checks and insurance premiums to the algorithms used by fast food chains such as McDonald's to sell us more hamburgers, the quiet but steady creep of AI into every facet of our everyday existence has generated a new set of questions over ethics, bias and fairness.

But while some poor AI decision-making is unlikely to cause much upset, in other cases it is socially toxic.

So it is with the scandal over algorithmic A-level marking, which at a stroke has catapulted this formerly fringe, nerdy issue into millions of British living rooms.

Algorithms – the sets of rules computers follow to make decisions – may be fiendishly complex, but the problem with relying on them too heavily for decisions like this is startlingly simple.

The judgments they reach are only as good as the data they are based upon – which is often incomplete, incorrect or subject to the same human bias that pervades society at large.

In this case, relying on postcode data and schools' historic exam grades, rather than the judgments of real-life teachers who know individual pupils, seems an easy way to exacerbate and embed existing socio-economic bias: it favours weak candidates at better and smaller schools while discriminating against strong pupils from schools that have historically underperformed.
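To see how that happens, here is a deliberately simplified sketch in Python – a hypothetical toy model, not Ofqual's actual standardisation method, whose details were far more involved. It maps a teacher's rank order of this year's pupils onto the grades the same school achieved in previous years, so a school that has never produced a top grade cannot award one, however able this year's candidates are.

```python
# Hypothetical illustration only – a toy standardisation model, not Ofqual's
# actual algorithm. It shows how mapping this year's pupils onto a school's
# historical grade distribution can cap strong candidates at schools that
# have previously underperformed.

def standardise(teacher_ranking, historical_grades):
    """Assign grades by mapping the teachers' rank order for this year's
    cohort onto the grades the school achieved in previous years.

    teacher_ranking:   this year's pupils, ordered best-first by teachers.
    historical_grades: grades the school's pupils earned in past years.
    """
    # For letter grades A-E, alphabetical order happens to run best to worst.
    grades = sorted(historical_grades)
    n, m = len(teacher_ranking), len(grades)
    # Each pupil inherits the grade at the same relative position in the
    # school's past results, regardless of individual ability.
    return {
        pupil: grades[min(i * m // n, m - 1)]
        for i, pupil in enumerate(teacher_ranking)
    }

# A historically underperforming school: nobody here has scored above a B,
# so even an outstanding candidate this year cannot be awarded an A or A*.
history = ["B", "C", "C", "D", "D", "E"]
cohort = ["outstanding pupil", "average pupil", "weak pupil"]
print(standardise(cohort, history))
# -> {'outstanding pupil': 'B', 'average pupil': 'C', 'weak pupil': 'D'}
```

The point is not the arithmetic but the design choice: the individual's own ability never enters the calculation, only their position in a distribution inherited from other people's results.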

To be fair, Taylor and his team at Ofqual faced a tough task, forced to craft an effective exam-marking system rapidly during the pandemic after the cancellation of tests.

But given the huge importance of these results to the future prospects of millions of young people, how rigorous was the effort to ensure the system was fair? What sort of external oversight existed to monitor the process? And above all, who was regulating the algorithms that were being applied?

Perhaps this is the time to ask whether, instead of trying to regulate the big tech companies, it would be more useful to regulate the algorithms being applied to decision-making. In other words – and for want of a better phrase – do we need an "algorithm tsar" to help prevent this kind of thing happening again?

Such a person or agency would be responsible for ensuring algorithms were fair and free of bias, would lay down ground rules, and would be empowered to approve or reject poorly designed schemes before they were used.

After all, this is a highly specialist field of knowledge. Within the Government, only limited expertise exists, often forcing civil servants to rely on third-party providers who may have little vested interest in the consequences of their work.

It's no secret that there is a chronic shortage of skilled professionals in AI. Tencent, the Chinese tech giant, says there are only 300,000 skilled AI engineers globally when millions are needed.

That means salaries for skilled workers have surged in recent years, often placing them beyond the reach of government departments seeking to recruit them.

Taylor really should have known better than to preside over this fiasco.

As well as being the chairman of Ofqual, he is also the chairman of the Centre for Data Ethics and Innovation (CDEI), an independent advisory body to the UK government addressing the benefits and risks of AI. If any organisation was qualified to kick the tyres of Ofqual's algorithms, this was it. Yet so far, perhaps unsurprisingly, the CDEI has been silent on the exam shambles.

In fact, Taylor himself was strangely prescient about the threat posed by algorithmic decisions of this kind.

In a CDEI report last year he wrote: "Artificial intelligence and algorithmic systems can now operate vehicles, decide on loan applications and screen candidates for jobs. The technology has the potential to improve lives and benefit society, but it also brings ethical challenges which need to be carefully navigated if we are to make full use of it."

Quite so.

