Rules to encourage AI

Recent developments in artificial intelligence have me wondering.

Cosmos

My spine still shivers when I remember the nuclear stand-off between the Soviet Union and the United States in 1962. As a nine-year-old I felt helpless in the face of two leaders poised to push the button.

It was MAD – mutually assured destruction – but sanity prevailed and by the end of the 1960s we had détente.

In the decades since, I have felt comfortable with the dazzling march of technology that has reduced global poverty, given us longer lives, delivered the information superhighway and created my zero-emissions Tesla.

Yes, there are disappointments – the internet, for example, has not raised the calibre of conversation but instead has created echo chambers of bigotry and forums for lies and harassment.

But now, for the first time since the 1960s, something is tickling my worry beads: artificial intelligence. I fear AI's capacity to undermine our human rights and civil liberties.

While AI has been in backroom development since the 1950s and increasingly implemented by businesses and government in the past few years, I believe 2018 will go down as the year the AI future arrived.

I am well aware of previous impressive developments, such as an AI named AlphaGo beating the world Go champion, but I don't play Go. I do, however, rely on my executive assistant. So this year, when Google publicly demonstrated a digital assistant named Duplex calling a hairdressing salon to make an appointment for its boss, speaking in a mellow female voice filled with human pauses and colloquialisms, I knew AI had arrived.

Shortly afterwards, IBM demonstrated Project Debater arguing an unscripted topic against a skilled human. Some in the audience judged Project Debater the winner.

The simplest definition of AI is computer technology that can do tasks that ordinarily require human intelligence. More formally, AI is the combination of machine-learning algorithms, big data and a training procedure. This mimics human intelligence: the combination of innate ability, access to knowledge and a teacher.

Also like humans, when it comes to AI there are the good, the bad and the ugly.

The good: digital assistants, medical AIs that diagnose cancer, satellite navigation that figures out the best way home, and systems that somehow know your credit card has been used fraudulently.

The bad: biases such as that discovered in the COMPAS risk-assessment software used to help judges in the US determine a sentence by forecasting the likelihood of a defendant reoffending. After two years of evaluation, COMPAS was found to have overestimated reoffence rates for black defendants and underestimated them for white defendants. Every human I know is biased, so why worry when an AI is biased? Because there is a good chance it will be replicated and sold by the millions, spreading the bias across the planet.

The ugly: think Orwell's 1984. Now look at the social credit score in China, where citizens are watched in the streets and monitored at home, losing points for littering or paying their bills late, and as a consequence being denied a bank loan or their right to travel.

So how can we utilise the good but avoid the bad and the ugly? We must actively manage the integration of AI into human society, as we have done with electricity, cars and medicines. Australia can lead the way, as we did for IVF by becoming the first country to collate and report on birth outcomes and the first to publish national ethics guidelines. Capturing the benefits and avoiding the pitfalls requires a public discussion. In July the Australian Human Rights Commission launched a project on human rights and digital technology. In my keynote speech I finished with the question: "What kind of society do we want to be?"

While the debate unfolds, here are a few starting suggestions.

First, adopt a voluntary, consumer-led certification standard for commercial AI, akin to the Fairtrade stamp for coffee. I call it the 'Turing Certificate', in honour of Alan Turing, the persecuted father of AI. It won't stop criminals and rogue states, but it will help with the smartphones and home assistants we choose to purchase.

Second, adopt the 'Golden Rule' proposed by the head of Australia's Department of Home Affairs, Michael Pezzullo: that no one should be deprived of their fundamental rights, privileges or entitlements by a computer rather than an accountable human.

Third, never forget that AI is not actually human. It is a technology. We made it. We are in charge. Hence I propose the 'Platinum Rule': every AI should have an off switch.