Careful of Machines, Soul

The Economic Times - The Edit Page - Debkumar Mitra

In 2016, a driverless Tesla car crashed, killing the test driver. It was not the first vehicle to be involved in a fatal crash, but it was the first of its kind, and the tragedy opened a can of ethical dilemmas. With autonomous systems such as driverless vehicles, there are two main grey areas: responsibility and ethics. Widely discussed at various forums is a ‘dilemma’ in which a driverless car must choose between killing pedestrians or passengers. Here, both responsibility and ethics are at play. The cold logic of numbers that defines the mind of such systems can sway it either way, and the ‘fear’ is that the passengers sitting inside the car have no control.

Any new technology brings a new set of challenges. But it appears that creating artificial intelligence-driven technology products is almost like unleashing Frankenstein’s monster. Artificial Intelligence (AI) is currently at the cutting edge of science and technology. Advances in technology, including aggregate technologies like deep learning and artificial neural networks, are behind many new developments, such as the machine that beat the world champion at Go.

However, though there is great positive potential for AI, many are afraid of what AI could do, and rightfully so. There is still the fear of a technological singularity, a circumstance in which AI machines would surpass the intelligence of humans and take over the world.

Researchers in genetic engineering also face a similar question. This dark side of technology, however, should not be used to decree the closure of all AI or genetics research. We need to create a balance between human needs and technological aspirations.

Much before the current commotion over ethical AI technology, celebrated science-fiction author Isaac Asimov came up with his laws of robotics. Exactly 75 years ago, in the 1942 short story ‘Runaround’, Asimov unveiled an early version of his laws. The current forms of the laws are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Given the pace at which AI systems are developing, there is an urgent need to put in place some checks and balances so that things do not go out of hand. Many organisations are now looking at the legal, technical, ethical and moral aspects of a society driven by AI technology. The Institute of Electrical and Electronics Engineers (IEEE) already has ‘Ethically Aligned Design’, an AI framework addressing these issues, in place. AI researchers are drawing up a laundry list similar to Asimov’s laws to help people engage in a more fearless way with this beast of a technology.

In January 2017, the Future of Life Institute (FLI), a charity and outreach organisation, hosted its second Beneficial AI Conference. There, AI experts developed the ‘Asilomar AI Principles’, which aim to ensure that AI remains beneficial, and not harmful, to the future of humankind.

The key questions that came out of the conference are: “How can we make future AI systems robust, so that they do what we want without malfunctioning or getting hacked? How can we grow our prosperity through automation while maintaining people’s resources and purpose? How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI? What set of values should AI be aligned with, and what legal and ethical status should it have?”

Ever since they unshackled the power of the atom, scientists and technologists have been at the forefront of the movement emphasising ‘science for the betterment of man’. This duty was forced upon them when the first atom bomb was manufactured in the US. Little did they realise that a search for the atomic structure could give rise to such a nasty subplot. With AI, we are in the same situation, or maybe worse. No wonder, at the IEEE meeting that gave birth to the ethical AI framework, the dominant thought was that humans and all living beings must remain at the centre of all AI discussions. People must be informed at every level, right from the design stage to the development of AI-driven products for everyday use.

While it is a laudable effort to develop ethically aligned technologies, it begs another question, one that has been raised at various AI conferences: are humans ethical?

The promise of devices that not only meet our household needs but anticipate them as well has been around for decades. To date, that promise remains largely unfulfilled…A tipping point may be at hand. Increased computing power, advanced big data analytics and the emergence of artificial intelligence (AI) are starting to change the way we go about our busy lives. The vision we present in this article may seem “out there”, but it simply represents the confluence of those technological developments and the realisation of existing trends. Those trends, along with what’s just on the horizon, according to our research, suggest to us that within a decade, many of us will live in “smart homes” that will feature an intelligent and coordinated ecosystem of software and devices, or “homebots”, which will manage and perform household tasks and even establish emotional connections with us.

A smart home will be akin to a human central nervous system. A central platform, or “brain”, will be at the core. Individual homebots of different computing power will radiate out from this platform and perform a wide variety of tasks, including supervising other bots. Homebots can be as diverse as their roles: big, small, invisible (such as the software that runs systems or products), shared and personal. Some homebots will be companions or assistants, others wealth planners and accountants. We will have homebots as coaches, window washers and household managers throughout our home.

From: A Smart Home is Where the Bot is

