Big data is blind faith


In 1952, the Boston Symphony Orchestra was worried about falling standards due to nepotism. They thought conductors were choosing their own students over the best musicians.

So they decided auditions would take place with a curtain between the conductor and applicants.

If the conductor couldn’t see who was playing, they could only judge on ability. But the results were disappointing. Pretty much the same young men were picked again. So the musicians were asked to repeat the audition, but take their shoes off first. When they did, the results were very different. This time, half the selected musicians were female – previously, there had been hardly any women chosen.

They thought they were being fair by not seeing the applicants but, subconsciously, they could still tell their sex by the sound of their shoes.

What they were listening to wasn’t the music but their own subconscious bias.

Since 1952, blind auditions have become common and half of the top 250 orchestras are now largely composed of female musicians.

Subconscious bias also plays a large part in our era of faith in big data and algorithms. Alongside another bias: quantification bias. This is the tendency to value the measurable over the immeasurable.

Cathy O’Neil is a mathematician and data scientist; she wrote Weapons of Math Destruction.

She says: “Algorithms don’t make things fair, they repeat past practices – they automate the status quo.”

She says the reason for this is: “Algorithms are simply opinions embedded in code.

“People think algorithms are objective, true and scientific – but this is a marketing trick.

“People trust and fear algorithms because they trust and fear mathematics.”

She summarises: “Algorithms are not objective – the people who build them impose their own agenda on the algorithms.”

Tricia Wang is an alumna of Harvard’s Berkman Klein Centre for Internet & Society.

She says: “Relying on big data alone increases the chance that we’ll miss something by giving us the illusion that we know everything.”

She addresses the question: why is big data not helping us make better decisions?

She says: “Big data suffers from a context loss because big data doesn’t answer the question ‘why?’”

Big data is a $122bn industry in the US, where Wang advises companies on the use of technology.

She says: “Algorithms need to be audited, because quantifying is addictive.

“People have become so fixated on numbers that they can’t see anything outside of it.”

That seems to be the problem with big data and algorithms in general.

As O’Neil says, “An algorithm is just data plus a definition of success.

“The data is gathered from the past, and whatever data is used is decided by the person building the algorithm.

“As is the definition of success.”

So, far from being an objective measure, an algorithm is subjectivity plus more subjectivity.

The data used isn’t decided by a machine; neither is the definition of success. Both are decided by flawed, biased human beings. There’s nothing wrong with being biased – we all are.

The only thing that’s wrong is not being aware of the bias, and not admitting it.

Because the results of the algorithms are cranked out by a machine, we think those hidden biases are facts.
