Do Not Blame Artificial Intelligence

ARTIFICIAL INTELLIGENCE

Arabnet - The Quarterly - Industry Stories - By Daniel Merege

Daniel Merege, CEO & Founder of Citytech, shares his views on the pros and cons of Artificial Intelligence and its benefits.

Although not very new, Artificial Intelligence (AI) has gained prominence in recent years, both positive and negative. One example is the idea that as machines learn to "think" like humans, they may, in the near future, take over the work positions that today employ millions of people around the world. But is AI really to blame?

The concept and techniques of AI are not so new, at least by the standards of the computer age. During the 1950s, there was great enthusiasm for scientific research that combined mathematics with computational techniques, aiming to teach machines to make decisions and to infer things from what they learned. In those days, however, there was neither the computer processing power to perform heavy calculations nor a large amount of available data that could justify the adoption of these techniques by industries and companies.

Within the last five years, this scenario changed completely. We have achieved tremendous computing power to enable the heavy calculations required for machines to "learn" patterns. We also now produce, and have available, a large amount of data, which serves as raw material for machines to learn from. These facts spurred technology companies, such as Google and Facebook, to develop techniques and products that made AI accessible and feasible. That is why we talk so much about it nowadays.

The truth is that AI is extremely useful for making computers our allies in analysis and in the prediction of problems and solutions, from the simpler to the more complex. With these techniques, it is possible for a computer, for example, to identify cancer by analyzing images. To perform this activity, a group of human doctors classifies thousands of images, indicating which show cancer and which do not. The computer then builds a mathematical model that can analyze a new image and indicate whether it includes cancer, in a matter of seconds. Other examples can be found in different sectors, such as urban services management, spam filters, and chatbots. The potential applications of AI are endless and can greatly improve the productivity and effectiveness of processes, products and services.

The issue is the ethical boundaries of our relationship with machines. Take the development of chemistry: with the same basic knowledge, we can produce medicines that save lives, but we can also produce chemical weapons that destroy lives. Or take the computer itself, which brings us both the marvels of the digital world and cyber threats and the loss of privacy. Likewise, the main concern with AI lies in its application, not in the development of its techniques per se.
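The cancer-screening example above describes supervised learning: humans label many examples, and the machine fits a model that generalizes to new cases. A minimal sketch in plain Python, using a simple nearest-centroid classifier and entirely synthetic feature vectors (the feature values and labels here are illustrative assumptions, not medical data):

```python
# Toy supervised classifier: it learns from examples labeled by humans,
# then predicts labels for new, unseen examples.
# All data below is synthetic and purely illustrative.

def train_centroids(examples):
    """Compute one mean feature vector (centroid) per label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest (Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(centroids, key=lambda label: dist(centroids[label], features))

# Hypothetical image-derived feature vectors, labeled by human experts.
labeled = [
    ([0.9, 0.8], "malignant"),
    ([0.8, 0.9], "malignant"),
    ([0.1, 0.2], "benign"),
    ([0.2, 0.1], "benign"),
]
model = train_centroids(labeled)
print(predict(model, [0.85, 0.85]))  # prints "malignant"
```

Real medical-imaging systems use far richer models (typically deep neural networks) and vastly more data, but the workflow is the same: labeled examples in, a predictive model out.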
Therefore, computational ethics is an essential factor here. We must invest in the development of the computer industry, which can bring us many advances in terms of production and quality of life, but we must set limits on the application of this knowledge. We want machines that help us detect urban problems quickly and proactively, but we do not want machines that identify a person's sexual orientation from facial images, or that help malicious people act destructively and against human rights. We do not need that. The point is that we need to define, as a society, the values we want to extract from all these techniques. It is very important to establish global-level legislation that defines ethical rules for AI applications, as we see for medical practices, for example.

Regardless of the track AI takes from here, we cannot blame it. Decisions, after all, are always human, and it is with them that we should be concerned. Even if an application might, for example, lead to the loss of human jobs, we should focus on what action we take now to prepare the affected workers for the new jobs that will emerge. Everything is a matter of evolution and improvement, and we can certainly live in a world where AI brings us comfort, well-being and quality of life. We should not blame AI for the decisions made by people interested in its bad applications. Those decisions are exclusively human.
