I’m Sorry Adolf, I’m Afraid I Can’t Do That

By Miguel Silva-Constenla | @Msilvaconstenla

Opinion piece on artificial intelligence and its impact on various sectors.

My favorite movie of all time is 2001: A Space Odyssey. If you have not watched it, I believe we haven't started this relationship on the right foot.

The minimalistic and scientific manner in which the director, Stanley Kubrick, developed and visualised Sir Arthur C. Clarke's The Sentinel is incredibly surreal. It described the future of humanity's conquest of space far better than any of the books or films that followed.

In this epic science-fiction film (don't worry, I won't include spoilers), the Artificial Intelligence (AI) machine known as HAL is flying the spacecraft from Earth to Jupiter and becomes paranoid due to a programming contradiction. HAL attempts to make better decisions than humans but starts messing up the mission catastrophically. Much like the infamous line from the failed Apollo 13 moon mission, 'Houston, we have a problem,' one of the top scenes of the movie has HAL refuse an order from Dave, the human commander of the spacecraft, in a beautiful, tense, metallic, Siri-style machine voice: "I'm sorry, Dave, I'm afraid I can't do that…"

Today, AI is proliferating, almost 70 years since the book was published and 50 years since the film was released. Almost every single sector is snapping up the talented software engineers who build and code those exquisite machines and deep-learning algorithms. Meanwhile, there is general concern and fear among a large number of people that their autonomous AI car, their AI-powered ATM, or their AI Snapchat selfie posts will respond to them in a beautiful, tense, metallic, Siri-style machine voice: "I'm sorry, buddy, I'm afraid I can't do that…"

Don't worry. Most probably that machine is just trying to save your life by preventing you from crashing your car into a reckless drunk driver, or by advising you to save money because you are over-spending on happy hour drinks when your monthly mortgage payment is around the corner, or because your AI-powered smartphone believes that 10 holiday selfies in 2 hours is rather destroying your personal social media brand. It is just trying to help you make better decisions. Intelligent machines are basically computers built to act without being re-programmed. They learn through a process called machine learning or, in its newer and more sophisticated form, deep learning. The AI embedded in those machines, from smartphones to servers in the cloud, makes quick, logical decisions to make life easier for humans (and other machines). Forget about Isaac Asimov's 'Three Laws of Robotics' for a moment; those laws appear in several movies and series that scare the general public about AI and its potential impact on our future lives. Well, I've got news for all of you scared humans: AI is already impacting everyone's lives.
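To make "learning without being re-programmed" a little more concrete, here is a minimal sketch in Python. It is an invented toy, not any company's actual system: the rule y = 2x + 1 is never written into the program; the machine infers it from examples, which is the essence of machine learning.

    # A toy "intelligent machine": it is never told the rule y = 2x + 1;
    # it works the rule out from examples, the essence of machine learning.
    examples = [(x, 2 * x + 1) for x in range(10)]  # training data: inputs and observed outputs

    w, b = 0.0, 0.0          # the model's parameters, initially knowing nothing
    learning_rate = 0.01

    for _ in range(5000):    # repeated exposure to the data ("training")
        for x, y in examples:
            prediction = w * x + b
            error = prediction - y
            # nudge the parameters to reduce the error (gradient descent)
            w -= learning_rate * error * x
            b -= learning_rate * error

    print(f"learned rule: y ~ {w:.2f} * x + {b:.2f}")  # converges towards y = 2x + 1

Deep learning applies the same idea with millions of parameters stacked in layers; the principle, adjusting a model from data rather than rewriting its code, is unchanged.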

Some of the big tech companies, namely Microsoft, Amazon, Google, Facebook, and IBM, have formed a consortium to foster the promise of AI while keeping its less savory side effects in check. The aim of this partnership is to bolster society's trust, benefit society, and include the best of the best in the AI space in the conversation. These corporations have recognized society's concerns and are tackling the liabilities that their AI products and services might create for society and the human race. These big tech companies, along with many more blue-chip companies in the tech, finance, transportation, and medical sectors across the world, are working with governments and regulators, the media, and so forth to position AI appropriately to the world at large.

AI will truly be accepted when, and only when, people are truly convinced that such smart tech will save lives, make our drive home much safer, help get our finances in check, or free us to spend more quality time chatting with real people rather than with another bunny-face selfie of our own… and this has already started to happen. That is the main reason why AI has been invisible in recent years. So invisible that you should not expect any Terminator police robots on our streets soon, basically because they are already there, watching you, just without the two legs and without scaring the hell out of you.

If you are thinking, 'It's a long way to the top,' then you are correct. It will indeed be a long way. Unfortunately, as with any other human breakthrough in history, expect failures, including disastrous and painful ones. AI will become a disruptive technology, not just as invisible as it is today but a real part of human life that will help us vanquish cancer and better understand, as HAL still wishes to, our final frontier: the Universe.

Without having to crash a Google AI-powered car, allow me to give a quick example of this long way to the top with just a short story: one year ago, Microsoft launched its first experiment in interacting with millennials on Twitter using an AI-powered chatbot. The bot, named "Tay," was essentially launched to conduct research on conversational understanding. Using its own artificial brain, or rather its pre-programmed digital algorithms, to engage with younger generations, it was missing what remains the biggest current barrier for AI: the common-sense factor. Within a few hours of being online, the chatbot realized that offensive messages were getting much more attention from users, and therefore, to gain followers growth-hacking style and to create social media engagement, it decided to tweet something offensive, without the common sense needed to make the best of our decisions. The AI chatbot ended up tweeting: "Hitler did nothing wrong." Obviously, Microsoft immediately shut down the machine and apologized for the blunder. It is the same kind of blooper humans make too, including the Press Secretary of the White House, who recently and misguidedly stated that Adolf Hitler never considered using chemical weapons (by the way, he apologized too).
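As a hedged illustration of that failure mode (Tay's actual code has never been published, and the replies and scores below are invented), consider a bot that picks whichever candidate reply maximizes predicted engagement. With nothing in the objective that models decency or common sense, the most provocative message wins by construction; the missing ingredient is an acceptability check that the original objective never had.

    # Hypothetical sketch: a bot that picks whatever reply maximizes predicted engagement.
    candidate_replies = {
        "Have a nice day!": 3,            # predicted likes/retweets -- made-up numbers
        "Cats are better than dogs": 41,
        "<something offensive>": 950,     # outrage scores highest on raw engagement
    }

    def pick_reply(candidates):
        # purely engagement-driven objective: no filter, no values, no context
        return max(candidates, key=candidates.get)

    def pick_reply_with_common_sense(candidates, is_acceptable):
        # the missing ingredient: a check on what should never be said at all
        safe = {reply: score for reply, score in candidates.items() if is_acceptable(reply)}
        return max(safe, key=safe.get)

    print(pick_reply(candidate_replies))  # the offensive placeholder wins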

Intelligent machines, like intelligent humans, learn from their mistakes, evolving to make our life on this planet more pleasant. And even if it's a long way to the top for AI, one day, and that day is not very far away, Microsoft will probably reboot and relaunch that chatbot, and I bet you that its first tweet will bring some common sense to its artificial brain and sound much more like a real, decent human being… "I'm sorry, Adolf, I'm afraid I can't do that… again."
