Ethics, Artificial Intelligence and Regulation

Ethics is a term derived from the Greek word ethos, which can mean custom, habit, character or disposition. It is an intrinsic requirement for human life and our means of deciding a course of action. At its simplest, it is a system of moral principles.

The Malta Independent on Sunday - DEBATE & ANALYSIS - Ian Gauci

Codes of ethics have always played an important role in many sciences. Such codes aim to provide a framework within which researchers can understand and anticipate the possible ethical issues that their research might raise, and to provide guidelines about what is, and is not, regarded as ethical behaviour. The late Professor Stephen Hawking opined that new emerging technologies, including Artificial Intelligence (AI), open a new frontier for ethics and risk assessment.

The Institute of Electrical and Electronics Engineers (IEEE) has been working on this front for some time, and to this end published a report entitled "Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Artificial Intelligence and Autonomous Systems."

The Asilomar AI Principles build on ethics and values, which in turn underpin principles capturing elements of safety, failure and judicial transparency, value alignment in design and compatibility with human values, and privacy.

The UK House of Lords Artificial Intelligence Committee published a report on 16 April 2018 entitled "AI in the UK: ready, willing and able?" (Report of Session 2017-19). This report highlights the importance of ethics and builds on five principles, which would marry with the Asilomar AI Principles and which read as follows:

(1) Artificial intelligence should be developed for the common good and benefit of humanity.

(2) Artificial intelligence should operate on principles of intelligibility and fairness.

(3) Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.

(4) All citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.

(5) The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

There is no novelty being proposed here, but the report does not stop with these principles. It seems to stress that AI should not be regulated at this juncture. Such a reading stems from paragraph 375, which quotes the Law Society stating that there is no obvious reason why AI would require further legislation or regulation, that AI is still relatively in its infancy, and that it would be advisable to wait for its growth and development to better understand its forms. In paragraph 373, eminent academics such as Prof. Robert Fisher et al. said: "Most AI is embedded in products and systems, which are already largely regulated and subject to liability legislation. It is therefore not obvious that widespread new legislation is needed."

The report, however, seems to imply in other chapters that a form of legislative intervention might be required. Let's park this for a second and briefly mention one piece of legislation, the General Data Protection Regulation (GDPR), which was also quoted in the report and which will be applicable in this ambit. In summary, the GDPR provides that when personal data is processed, it should be processed in a lawful, fair and transparent manner. It is collected for specific, expressly stated and justified purposes and not treated in a new way that is incompatible with those purposes. It is correct, updated, adequate, relevant and limited to what is necessary for fulfilling the purposes for which it is being processed, not stored in identifiable form for longer than is necessary for those purposes, and processed in a way that ensures adequate personal data protection. Any algorithm would need to be coded keeping all these criteria in mind, thus following the mandated data protection by design principles. Aside from this, the GDPR also provides for data protection impact assessments (DPIAs), intended as a tool to help organisations identify the most effective way to comply with their data protection obligations and meet individuals' expectations of privacy.
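To make "data protection by design" concrete, here is a minimal sketch of how two of the GDPR criteria above, purpose limitation and storage limitation, might be enforced before any processing step. The retention periods, field names and `may_process` helper are all invented for this illustration, not taken from any real compliance library:

```python
from datetime import date, timedelta

# Hypothetical retention policy: each justified purpose gets a storage limit.
RETENTION = {
    "fraud_screening": timedelta(days=365),
    "marketing": timedelta(days=90),
}

def may_process(record, purpose, today):
    """Allow processing only for the declared purpose and within retention."""
    if purpose != record["declared_purpose"]:
        return False                                  # purpose limitation
    limit = RETENTION.get(purpose)
    if limit is None:
        return False                                  # no justified purpose on file
    return today - record["collected_on"] <= limit    # storage limitation

rec = {"declared_purpose": "marketing", "collected_on": date(2018, 1, 1)}
print(may_process(rec, "marketing", date(2018, 2, 1)))        # within 90 days -> True
print(may_process(rec, "marketing", date(2018, 6, 1)))        # past retention -> False
print(may_process(rec, "fraud_screening", date(2018, 2, 1)))  # wrong purpose -> False
```

Gating every processing call through a check of this kind is one way such criteria end up "coded in" rather than bolted on afterwards.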

Now, back to the contents of the report. Even though the report shies away from recommending AI-specific regulation, some of its suggestions would in principle boil down to regulations or laws which could mandate more than observance of ethical standards, as they would promote a higher level of algorithmic transparency, accountability where required, as well as powers for regulators to impose them. These are topics also touched on by Reuben Binns in his "Algorithmic Accountability and Public Reason", where, on accountability, he also cites in particular Articles 13 to 15 of the GDPR. I also believe that a close look at the Asilomar Principles, particularly principles 6, 7, 8 and 22 reproduced hereunder, would hint at the inclusion of such measures and narrative:

6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

7) Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.

8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

22) Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.

Certain software (which is composed of algorithmic code) already allows companies to "draw predictions and inferences about personal lives". A clear case in point is the recent Cambridge Analytica debacle. For example, a machine learning algorithm could successfully identify a data subject's sexual orientation, political creed and social groups, and use this information to build profiles and services as well as to categorise data subjects. As the code learns patterns in the data, it also absorbs the biases in it, perpetuating them. In one of the most striking examples, an algorithm called COMPAS, used across multiple US states to assess a defendant's risk of reoffending, was found to falsely flag black individuals almost twice as often as whites.
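The disparity reported in the COMPAS case was, in essence, a gap in false positive rates between groups: among people who did not reoffend, how often each group was wrongly flagged as high risk. A minimal sketch of that check, with toy records invented purely for illustration:

```python
def false_positive_rate(records):
    """Fraction of non-reoffenders who were wrongly flagged as high risk."""
    negatives = [r for r in records if not r["reoffended"]]
    if not negatives:
        return 0.0
    flagged = sum(1 for r in negatives if r["flagged_high_risk"])
    return flagged / len(negatives)

# Toy data, not real COMPAS records: group, risk flag, actual outcome.
records = [
    {"group": "A", "flagged_high_risk": True,  "reoffended": False},
    {"group": "A", "flagged_high_risk": True,  "reoffended": False},
    {"group": "A", "flagged_high_risk": False, "reoffended": False},
    {"group": "A", "flagged_high_risk": True,  "reoffended": True},
    {"group": "B", "flagged_high_risk": True,  "reoffended": False},
    {"group": "B", "flagged_high_risk": False, "reoffended": False},
    {"group": "B", "flagged_high_risk": False, "reoffended": False},
    {"group": "B", "flagged_high_risk": False, "reoffended": True},
]

for group in ("A", "B"):
    subset = [r for r in records if r["group"] == group]
    print(group, round(false_positive_rate(subset), 3))
```

In this toy data, group A's non-reoffenders are flagged twice as often as group B's: precisely the kind of disparity an audit of an opaque scoring system would need to surface.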

This is AI's so-called black box problem: our inability to see inside an algorithm and understand how it arrives at a decision. Maybe the fine underlying message of this report by the UK's House of Lords is that if this is left unchecked, particularly in an era where code can be law, and where many authors have already sounded the alarm on algorithmic governance, it can have devastating effects on our societies.

As Professors Nick Bostrom and Eliezer Yudkowsky stated in "The Ethics of Artificial Intelligence": "If machines are to be placed in a position of being stronger, faster, more trusted, or smarter than humans, then the discipline of machine ethics must commit itself to seeking human-superior (not just human-equivalent) niceness".
