

Australian AI policy expert Ellen Broad argues that it’s not the programs that are flawed, but the humans who make them. She speaks to CONOR PURCELL.

ARTIFICIAL INTELLIGENCE (AI) has been lauded as the holy grail of human innovation, a potential solution for human problems ranging from the mundane to the absolutely astounding. But increasingly computer scientists are having to respond to demands from critics to strengthen the moral framework around the emerging technology, and to seek out a fairer future.

Questions such as “what if AI discriminates along social classes?” and “what if government-deployed machines are biased against certain groups?” are now being asked. One key problem is that these systems are developed by humans and based on human-generated data. Their functionality is inherently dependent on the data a machine or network is trained on, and therefore shaped by the societal biases manifest in their designers’ minds. There are always people behind the machine.

In Made by Humans, a new book by Ellen Broad, head of policy for Australian independent computing think tank the Open Data Institute, many of the emerging issues around AI are brought to the fore. The book takes a look at the societal issues around creating and implementing AI, delving into serious problems such as fairness and openness, and how Australia’s federal government is already rolling out controversial machine-learning strategies.

“For sure, some AI projects have been really astonishing in their accomplishments,” says Broad. “But others have been really brittle in their assemblage. People have no way of knowing how the implementation of this intelligence will affect their lives, and one of the major problems is that most people can’t establish which is which – separating the good from the bad.”

Broad’s key argument is that the AI revolution is fundamentally flawed because AI itself is a human construct – made by humans, for humans, and based on data collected about humans. She believes this has serious potential for supporting biases in society and helping to feed discrimination.

“AI is flawed in the same way that humans are flawed,” she says. “It can’t help but learn from data which has been generated by people. There’s no alternative data, and that’s a really crucial problem.”

Broad argues that AI could fail at many tasks because it is difficult to collect an accurate database about anything, especially human activity. For example, data collected about us reflects our online behaviour, and even though our time spent online is increasing, datasets can neither reflect those subtle offline moments which really define who we are, nor our true daily thoughts, which remain hidden even from those closest to us.

“A lot of the time algorithmic bias may not even be intentional,” she says. “It’s not like some Machiavellian force at work, but rather more likely because of inherent biases in the designers’ minds.”

“We have minorities and majorities in our societies, so there’s already a problem when designing AI. Who are we designing for?” she asks. “And even with data that is not about humans, like meteorology or chemistry, for example, the instruments used to generate the data have been created by humans.”

These inherent machine biases will lead to problems for underrepresented groups across societies, from the physically impaired and mentally ill, to minority nationals and women. AI can be misused or leveraged in any direction for whatever purposes its designer pleases.

According to Broad, the outputs of these systems can have a very real impact on people’s lives.

“At a policy level we’ll need to quantify the effects of the decisions that are being made,” she says. “The question is this: if we know that there is going to be a bias in an AI output, can we change the algorithms to encourage fairness, without having to change the underlying biases in society which inevitably exist?”

In Australia the federal government has begun to automate systems including debt recovery from welfare recipients, data-driven drug testing of welfare recipients, and tools to predict which detainees are most at risk of violence in detention centres. The government is also investing in a machine-learned national facial recognition database.

“Australia is experimenting with data in lots of ways,” says Broad. “National AI ranges from sophisticated statistical analysis to real machine learning where computers are trained on government data.

“One controversial idea which has been proposed is to apply machine-learning methods to wastewater data where trace amounts of methamphetamines are present, with the aim of targeting particular sites for drug users, and making decisions about social welfare.”

Throughout history governments have used data and technology to develop strategies against people in need, including minorities, the sick and the poor. In this way the most marginalised people are often the most negatively affected by technological change. So, as AI systems are increasingly rolled out across Australia and the rest of the world, more transparency will be required to remove prejudice and promote fairness and equality.

Broad believes that citizens need a way to be able to know how they are being assessed. “I think we should have transparency around the methodologies upon which AI systems are built, as well as the data they are trained on,” she says. “Citizens need a way to know about that.”

Regulations such as the recent General Data Protection Regulation (GDPR) introduced this year in Europe are a step towards online protection and privacy. But as for the regulation of machine learning and AI, legally binding mechanisms only exist peripherally. “We have human rights laws which restrict forms of discrimination,” Broad explains. “But what we don’t really have are purpose-built mechanisms for scrutinising AI systems. That’s partly because the technology has changed so fast.”

Time will tell whether we are able to build fair AI systems in the future, but serious challenges remain, some of which are out of the control of computer scientists. “There is no universal idea of fair since there are so many different perspectives across society: what’s fair for me may not be fair for you,” concludes Broad. “We need to work towards fairer societies.”

Made By Humans: The AI Condition, by Ellen Broad, is published by Melbourne University Press. RRP $29.99

CONOR PURCELL is a science journalist with a PhD in Earth science. He is the founding editor of www.wideorbits.com

IMAGES 01 Phonlamaiphoto / Getty Images 02 Melbourne University Press
