From binary logic to Fuzzy Logic

Investir en Europe - SPECIAL DOSSIER -

The Post-Science specialist in fuzzy logic, Hugh Ching, wrote an article(14) about a method(15) to offer quantitative solutions to problems that will be reached in 2100-2500 ("the solution of value will replace morality and religion"); the current scientific method is empirical verification. In around two or three millennia, the method, and so AI, will be able to do everything the creator did: "the Robot will become the human, and software, DNA". He explains that fuzzy logic improves methods because "reality is fuzzy" ("precision is sacrificed during the process of expanding a creation's range of tolerance"). "The range of tolerance of the living system must be wide enough to cover all of the possibilities of a permanently uncertain future".
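The core idea behind fuzzy logic can be sketched in a few lines of code: instead of a proposition being strictly true (1) or false (0), it carries a degree of truth anywhere between 0 and 1. The `warm_membership` function and its temperature thresholds below are purely illustrative assumptions, not taken from Ching's article.

```python
def warm_membership(temp_celsius: float) -> float:
    """Degree to which a temperature counts as 'warm', from 0.0 to 1.0.

    The 15-25 degree boundaries are arbitrary, chosen only to
    illustrate a fuzzy membership function.
    """
    if temp_celsius <= 15:
        return 0.0  # clearly not warm: binary logic would also say "false"
    if temp_celsius >= 25:
        return 1.0  # clearly warm: binary logic would also say "true"
    # Linear ramp between the two crisp boundaries: the fuzzy region
    # that binary logic cannot express.
    return (temp_celsius - 15) / 10

print(warm_membership(10))  # 0.0
print(warm_membership(20))  # 0.5 -> "somewhat warm", inexpressible in binary logic
print(warm_membership(30))  # 1.0
```

The intermediate values are what give a system the wide "range of tolerance" Ching describes: the transition between categories is gradual rather than a single brittle threshold.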

What are the challenges of AI?

The Dutch Policy Adviser in Public Affairs Bernard said: "One challenge will be to use robots and AI to supply the 9 billion people in 2050 with water, food, housing, schools and training, and so on." And the astrophysicist Stephen Hawking believed that even this will not be sufficient: by 2100 we will be obliged to send people to Mars or elsewhere to survive.

An American computer science student, Jack Bandy, wrote an article(16) about the consequences of using new technology, and he was worried about "framing automation as an economic dilemma rather than an ethical dilemma". He gives concrete examples of the substitution of machines for people, no longer only in manual sectors but now also in creative and interesting jobs (architects, airplane pilots, ...).

The AI & Security Threats report(17) explains that "AI systems can exceed human capabilities" in many ways (chess being the given example), but "they can also fail in ways humans never would" (for example, data-poisoning attacks). Other risks include disinformation (political campaigns, ...) and criminal cyber-offenses (terrorist attacks, ...). The authors explain that "attackers are likely to leverage the growing capabilities of reinforcement learning, including deep reinforcement learning", and that while some "laws and norms proscribe certain actions in cyberspace", "this legal enforcement is difficult across national boundaries". Their conclusion is that AI security must be made a priority and analyzed from this angle at all levels.

So, it is urgent to set up a new economic and social model that integrates all of these concerns...

(14) Hugh Ching, "Fuzzy Logic, The Genius of Lotfi Zadeh 'Father of Fuzzy Logic'", IEEE Industrial Electronics Magazine, December 2017.

(15) Classical logic holds a proposition to be either true (1) or false (0). Fuzzy logic allows a degree of truth between 0 and 1.

(16) Jack Bandy, "Automation Moderation: Finding Symbiosis with Anti-Human Technology", University of Kentucky, AI Matters, vol. 3, Winter 2018.

(17) AI & Security Threats report, "The Malicious Use of Artificial Intelligence: Forecasting, Prevention and Mitigation", 26 authors from the Future of Humanity Institute (University of Oxford), the Centre for the Study of Existential Risk (University of Cambridge), the Center for a New American Security, the Electronic Frontier Foundation, and OpenAI, February 2018.
