What happens when the technology that humans have created begins to manipulate them? We are on the brink of finding out.

Business Today - Illustration by Raj Verma

UNTIL NOW, IT HAS been the stuff of science fiction, movies and nightmares. With the startling advances in artificial intelligence (AI) over the past two to three years, could we soon reach a stage where we are deftly manipulated by the very technology we created? The answer is, almost certainly, yes.

After outsmarting humans on so many fronts and beginning to master domains that humans believed were theirs alone (such as creativity), smart machines are likely to dominate in the near future. More frightening still is the prospect of humans falling prey to scheming robots.

Whether we like it or not, AI is already being used to manipulate people, mostly to get them shopping. Have you ever been about to buy an item online, only to leave it in the cart because you were having second thoughts? The next thing you knew, a bunch of notifications landed, saying things like "Come on, you know you want it!" or "(Your first name), you know you need it!" In fact, much of our time online is spent on platforms dominated by this sort of manipulation.

Now imagine that a chatbot is used to make the process more engaging and human-like. By weaving in bits and pieces of information about you, the chatbot could hold a lively conversation and steer you towards making the purchase.

To take this up a few notches, imagine a robot engaging with a person in an attempt to persuade him or her. This was the subject of an early study in which volunteers were asked to press a button to switch off a computer cat; the cat begged not to be killed and persuaded many of them to leave it on. A recent study conducted at the University of Duisburg-Essen in Germany, published in PLOS ONE, had 89 volunteers interacting with a robot called Nao. The subjects were 'helping' the robot and seemed to develop an emotional attachment to it in no time. When the researchers asked them to turn off the robot, quite a few were unable to do so, especially when the robot used body language to strengthen its verbal pleas. Clearly, humans quickly anthropomorphise robots – as they have long done with virtual toys – and, once emotionally manipulated, are reluctant to harm them.

Begging not to be turned off is probably not as disturbing as the possibility of a robot like Sophia, the world's first robot citizen, trying the same trick. Such a scenario played out on the TV show The Good Place, where a humanoid called Janet asked two people to press a switch and destroy her. But every time they approached, the robot screamed and pleaded with them to let her go. When they backed off, she switched to explaining that she was not human and would feel no pain. This continued until Janet was finally 'murdered' and the manipulated humans had to flee the 'crime' scene.

