The tech world is in the grip of a raging debate on AI, with Tesla and SpaceX founder Elon Musk and Facebook founder Mark Zuckerberg engaged in a public verbal duel.

Business Today - COVER STORY A.I. - @rajeevdubey

“I think people who are naysayers and try to drum up these doomsday scenarios — I don’t understand it. It’s really negative, and in some ways I think it’s pretty irresponsible.” MARK ZUCKERBERG, CEO, Facebook

“AI could spell the end of the human race.” STEPHEN HAWKING, Physicist

“I am in the camp that is concerned about super intelligence… First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern.” BILL GATES, Co-founder, Microsoft

While the McKinsey Global Institute reports that high tech, telecom and financial services are the earliest adopters of machine learning and AI worldwide, in India the earliest use cases are in healthcare, HR and e-commerce. Yet, in each case, it is truly disruptive. Some examples:


The fidget spinner goes on and on… and on, only to slow down when he pauses to make a point. Then, again. This time ever so vigorously, as Akhil Gupta narrates angrily, and somewhat mischievously, a murderous attack on the staff and office of NoBroker, the company Gupta co-founded with three others to eliminate brokers from property renting and buying. Nearly 60 local brokers stormed into the company’s office in a Bangalore suburb, smashed furniture and computers, and thrashed Gupta, co-founder Amit and other staff. They were venting their frustration as much at the company as at its ability to bar them from the website, even when they posed as genuine customers.

Little did they know, says Gupta mischievously, that NoBroker deployed multiple AI tools to identify — and shut out — brokers. Gupta won’t say how he did it; that’s the trade secret. He drops broad hints though: brokers have a peculiar search pattern and leave a digital trail on the Internet. Google’s machine learning software TensorFlow, Google Analytics, speech recognition and optical character recognition, used in tandem, were able to identify them.

Just as IBM offers Watson’s deep learning, Google has TensorFlow and Microsoft has Azure Cloud. Facebook has open-sourced Caffe2, its own deep learning framework, and Amazon backs MXNet, an open source deep learning library.

Bangalore-based Tredence was founded as recently as 2013. Today, it has a $12-million business making sense of unstructured data. For instance, it built a model for one of the world’s biggest FMCG firms to work out where ice cream trikes should be located in a city for the best return on investment.

Based on the client’s data, the model scrapes the Internet for public information — competitors’ trikes, locations of schools, hospitals, shopping areas and historical sites, even traffic and demographic data — to suggest where the ice cream trikes should be placed. Going by results so far, it could grow the global ice cream business by 8-10 per cent. It has since been launched in Durban, Bangkok, Madrid and a few cities in Pakistan and India. A pan-European launch is now being planned.
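At its simplest, trike placement comes down to scoring candidate locations on scraped signals and picking the highest scorer. A minimal sketch follows; all feature names, weights and scores are hypothetical, not Tredence’s actual model:

```python
# Score candidate trike locations by a weighted sum of scraped signals.
# Feature names, weights and values here are purely illustrative.

def score_location(features, weights):
    """Weighted sum of location signals; higher means a better trike spot."""
    return sum(weights[name] * value for name, value in features.items())

weights = {
    "schools_nearby": 2.0,      # children drive ice cream demand
    "footfall_index": 1.5,      # shopping areas, historical sites
    "competitor_trikes": -3.0,  # nearby competition lowers returns
}

candidates = {
    "park_gate": {"schools_nearby": 3, "footfall_index": 4, "competitor_trikes": 1},
    "mall_exit": {"schools_nearby": 1, "footfall_index": 5, "competitor_trikes": 2},
}

# Pick the location with the highest score.
best = max(candidates, key=lambda c: score_location(candidates[c], weights))
print(best)  # park_gate (score 9.0 vs 3.5)
```

A production model would learn such weights from sales data rather than hand-tuning them, but the ranking idea is the same.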

At Devanakonda village in Kurnool district of Andhra Pradesh, Microsoft’s cloud agriculture project deployed artificial intelligence and machine learning, big data and analytics to improve crop yields. For rain-fed crops, the timing of sowing is the biggest differentiator between a good crop and a failed one. Microsoft used the Azure Cloud platform to combine short-term weather predictions, soil quality data and previous crop history, and sent regular updates to local farmers on their phones in their native language, including telling them when not to sow. When the model computed that soil moisture was sufficient for seed germination and the weather forecast predicted more rainfall, it pinged farmers to sow. Those who followed the model’s predictions reaped a 30 per cent higher yield.

“If I were to guess... what our biggest existential threat is, it’s probably that (AI). So we need to be very careful with AI. Increasingly scientists think there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. With AI we are summoning the demon... Zuckerberg’s understanding of the subject is limited.” ELON MUSK, Co-founder, Tesla and SpaceX

What began with sowing is now widening to soil nutrition, in collaboration with the international crops research institute ICRISAT, and to recommendations on when to apply fertiliser and what kind of fertiliser or weedicide to use. The government of Telangana has now signed an MoU with Microsoft to deploy the concept in the state.

Diabetic retinopathy patients need to be screened at least once a year to prevent vision loss. A specialised camera takes a shot of the retina, which is then graded by doctors on a five-point scale. Grading is complex and specialised work, as doctors need to look for very small lesions, and at times the smaller lesions get missed. In many parts of the world, a shortage of eyecare professionals means the delay causes loss of vision before diagnosis. This is entirely preventable.

Just about a year ago, a chance encounter brought together a Google employee and doctors at Aravind Eye Hospitals and Sankara Nethralaya, who had already begun screening their patients for diabetic retinopathy. “This effort was occurring in parallel without any of us realising. We came up with this project in collaboration,” says Lily Peng, Product Manager, Google Brain AI Research Group.

The data was fed to the machine learning framework TensorFlow. “We’re studying the impact on efficiency. It will increase the reach of the screening programmes into rural areas. We hope this will democratise healthcare,” says Peng, who is now taking the programme to US hospitals. Google says the technology can be deployed in similar applications such as cancer biopsies (one in 12 cancer biopsies is misdiagnosed).

In December 2016, Microsoft and Hyderabad’s LV Prasad Eye Institute announced a global programme, Microsoft Intelligent Network For Eyecare (MINE), to use AI to prevent avoidable blindness and to provide eyecare services at scale around the world. Of the 285 million visually impaired people worldwide, 55 million are in India. Microsoft will build AI models for eyecare, leveraging the Cortana Intelligence Suite.

If you are a seller on Amazon, your application to the e-commerce giant for a loan would likely be approved or rejected by a machine. Amazon uses machine learning not just to help identify new products for sellers to grow their business but also to assess the risk associated with sellers before it lends to them through the Amazon lending business.

“We use machine learning to identify fraudulent sellers. We have so much past data: the number of times customers complained; how many times he didn’t ship the product or shipped a broken product. Based on the data, we can predict,” says Rajeev Rastogi, Amazon’s Director, Machine Learning.

Machine learning and AI power multiple features at ride hailing firm Ola, such as ride sharing and Ola Play, its connected car platform where customers can listen to music and radio and watch TV shows. AI has helped rural e-commerce company StoreKing tell its retailers what products to purchase and stock, depending on parameters such as what other retailers in that geography are buying. By suggesting products that are more likely to move, it frees up their working capital.

In the manufacturing world, ABB is working on connected and collaborative robots, where humans interact and work together with robots side by side, not behind a fence. “That’s sensoring, real time analytics of what the human is doing and what the robot is doing, to ensure that they can work together,” says Wilhelm Wiese, head of ABB’s IDC centre in Bangalore.

ABB is already into fleet management of ships. It is combining its predictive maintenance technologies with geo-tagging and weather reports for real time feedback on a ship’s performance. “The most expensive thing that can happen in shipping is if it stalls on the high seas. We’re telling the captain: you have a problem with this machine; if you slow down your speed by 30 per cent, you will be able to reach the next harbour,” says ABB’s Wiese.


Mercurial tech investor Mark Cuban believes AI will likely create the world’s first dollar trillionaire. Could that be an Indian? Far from it. According to a McKinsey Global Institute paper on AI, in 2016 alone companies globally spent up to $39 billion on developing AI (US companies accounted for 66 per cent of that, followed by Chinese companies at 17 per cent). Consulting firm PwC believes AI could grow global GDP by some $30 trillion by 2030, almost half of that in China. So where does that leave India?

AI is where India can create a natural advantage, just as it did in IT and ITeS. It has an English-speaking population and millions of tech professionals. Most importantly, it generates DATA — the great fuel behind AI. Those are just the building blocks to emerge as the premier global hub for AI-based products, services and apps. “Oil refiners make more money than drillers. People who refine the data will ultimately make much more money than people who create data,” says IBM’s Mehrotra.

But it requires a holistic approach from the government. India must leverage its natural alignment with the US. The world’s foremost Tier I players — Google, Amazon Web Services, IBM, Facebook and Microsoft — are all US-based but have a restricted or limited presence in China. A bustling AI economy has the ability to generate millions of high-end jobs. The MIT Sloan Management Review says that each innovation job creates at least five other jobs — just what India needs right now.


A potential customer of a Polish bank uploads a photo of his ID and appears for a video verification. If the video matches the ID, it must then be ascertained whether it is a genuine national ID or a forgery. The online verification is approved or rejected by Bangalore-based Signzy, even though officers of the Polish bank eventually accept the applicant as their customer.

“Things that a human would have done by looking at them are being done by APIs and algorithms,” says Ankit Ratan, co-founder, Signzy. It has shrunk the manual customer verification process from three weeks to less than three days, even as verification and matching happen in real time. Signzy deployed IBM Watson’s machine learning capabilities for identifying images, converting speech to text and transforming documents into digital form. Besides global banks and financial institutions, Signzy provides auto verification services to SBI, ICICI, MSwipe, PayU and LinkedIn.

Until a year ago, Amazon was already a truly global e-commerce company — yet it was not. For only products listed in English could be sold around the world. A listing in Italian, for instance, couldn’t even be discovered within the EU.

Amazon used machine translation to feature products listed in Italian across its eight European markets in different languages, including German, French, Spanish and English, and vice versa. The technology is now being deployed in other parts of the world. “At some point it would make sense for us to do pages between Indian languages,” says Amazon’s Rastogi.

Chennai-based Textient has created a cognitive analytics platform to sift through conversations on the Internet, such as product reviews and social media chatter, to provide insights to companies on their products, services or brands.

“We understand human thinking and behavioural aspects. Decoding this is complex because what lies underneath is a psychological aspect. We take more than 50 parameters of a human being,” says Sankar Nagarajan, Founder, Textient.

An insurance firm in the western world installs cameras on car dashboards to analyse whether a bump is an accident (in which case it needs to send alerts to the right people). It analyses the vehicle through motion, speed and other signals to assess the state of the vehicle and, potentially, the driver’s behaviour. Noida-based The Smart Cube works with the firm on such AI-driven video analytics.

In another offering, The Smart Cube scours the Internet to alert its clients to inherent risks in the supply chain. The company came up with the product for its pharma clients, who outsource nearly all of their manufacturing and hence need early alerts in case of an ‘event’. It tracks in real time what is being published about a client’s suppliers on websites, in the media and on social media. The objective is to figure out whether that information is ‘risky’ in terms of financial, strategic, materials shortage or reputational risks, even the risk of a CEO’s exit. “Global supply chains have become very complex and very tight. If there’s a risk to one supplier, there’s a risk to you as a manufacturer,” says The Smart Cube founder Sameer Walia. The entire engine is AI-based and uses machine learning and natural language processing to assess whether a piece of information is risky, its category and the level of risk. It sends alerts to the category managers and owners so that preventive or proactive action can be taken.
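The core task in such an alerting engine is mapping text about a supplier to risk categories. A toy illustration of the idea (the keywords and categories are invented for the example; The Smart Cube’s actual engine uses machine learning and NLP, not keyword lists):

```python
# Tag a news snippet about a supplier with risk categories by keyword match.
# Categories and keywords are illustrative placeholders only.

RISK_KEYWORDS = {
    "financial": ["bankruptcy", "default", "loss"],
    "materials shortage": ["shortage", "recall", "halt"],
    "reputational": ["fraud", "lawsuit", "scandal"],
}

def categorise(text):
    """Return the list of risk categories whose keywords appear in the text."""
    text = text.lower()
    return [cat for cat, words in RISK_KEYWORDS.items()
            if any(w in text for w in words)]

print(categorise("Supplier X halts production amid raw material shortage"))
# ['materials shortage']
```

A learned classifier would generalise beyond exact keywords, but the output — a category and an alert — is the same shape.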

Used vehicle marketplace Droom uses AI for discounting and promotions. The AI engine takes into account 100-plus factors to determine the discount, including buyer and seller behaviour, buyer and seller history, and vehicle details. “In the past, we had just one discount. Now, we have two lakh combinations of discounting — humanly, we could only move from one to five,” says Founder and CEO Sandeep Agarwal.


A Jet Airways customer on Twitter (sarcastically): “Thanks @jetairways for dropping me in Kolkata and my bags in Hyderabad.” Jet Airways’ reply: “Thanks. Glad you enjoyed our services.”

Sarcasm is surely not one of the virtues of chatbots just yet, even though they have revolutionised customer care by replacing humans as the first point of contact. Truly, bots still don’t understand the more extreme human emotions, including frustration, anger, taunting or delight. That’s why they sit right at the bottom of the AI evolution curve and will mostly fail the Turing Test. That’s as much a problem as an opportunity.

Just as the Jet Airways chatbot fell prey to sarcasm, most brand or corporate chatbots are either that silly or churn out standard, sanitised and mostly boring responses, because they are trained to work within the boundaries of pre-rehearsed FAQs (frequently asked questions). When Microsoft tried an AI-powered chatbot, ‘Tay’, on social media platforms, it exposed real dangers. In under 24 hours its tweets became racist, including “Hitler was right”. Microsoft took the bot down for ‘adjustment’; Tay’s handle has remained silent since.

This July, Facebook shut down bots Bob and Alice (being trained to negotiate with each other) when it realised the two had diverged to develop their own language. In one of the exchanges, Bob began by saying “I can i i everything else”. To that, Alice said, “Balls have zero to me to me to me…” They had created an ‘efficient’ language using variations of these two sentences. It may seem gibberish to a casual observer, but the Facebook AI Research Lab was alarmed.

Yatra agrees its chatbots may not be 100 per cent accurate, but in most cases they are able to resolve customer queries on FAQs: “How do I cancel?”, “What is the refund process?”, “I want to reschedule my flight”, among others. What would it take for machines to understand human emotions? “More data and more examples,” says Amazon’s Rastogi. Deep learning, where information is processed in stacks of layers, changed the game. These are complex models that are computationally intensive to train and require lots of data.
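Why rehearsed FAQs work while sarcasm fails can be seen in a minimal matcher: the bot answers only when a query is sufficiently similar to a known question, and hands everything else to a human. The questions, answers and similarity cutoff below are illustrative, not any portal’s actual bot:

```python
# FAQ-bounded chatbot sketch: answer only queries close to a rehearsed question.
import difflib

FAQ = {
    "how do i cancel": "Visit My Bookings and select Cancel.",
    "what is the refund process": "Refunds are processed in 5-7 working days.",
    "i want to reschedule my flight": "Use the Change Flight option in My Bookings.",
}

def answer(query, cutoff=0.6):
    """Return the canned answer for the closest FAQ, or fall back to a human."""
    match = difflib.get_close_matches(query.lower(), FAQ, n=1, cutoff=cutoff)
    return FAQ[match[0]] if match else "Sorry, let me connect you to an agent."

print(answer("How do I cancel?"))
print(answer("Thanks for dropping my bags in another city!"))  # off-script: falls back
```

Anything outside the rehearsed set, sarcasm included, scores below the cutoff and never gets a confident reply — which is exactly the boundary the article describes.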


Around 50 per cent of all Internet content is in English, but only 20 per cent of the world reads English. That is the greatest translation challenge for humanity. A decade ago, Google took up the challenge with Google Translate, which now has one billion monthly active users, covering 99 per cent of the online population, with almost 10 billion words translated daily.

It may be better than other translators, yet it flattered to deceive, especially in Asian languages, where most of Google’s potential new users lie. In September 2016, Google Translate moved to AI-based translations, beginning with Chinese to English. By November 2016, it had expanded to 16 language pairs across eight languages — Korean, Japanese, Chinese, Turkish, Portuguese, Spanish, German and French. Then came eight Indian languages. Google measured a 50-60 per cent improvement. In English-Korean, for instance, usage shot up 75 per cent within five months of the relaunch.

While the new system translated to and from English, it still could not translate between other languages directly. “We have 103 languages; we would need 103 squared models to translate. That’s a lot and can’t be done, even by Google,” says Google’s Senior Staff Research Scientist Mike Schuster.

Google scientists’ solution: use English as an intermediate language to translate between non-English languages. The system was trained by putting language pairs into a single model, with an indication of the target language. “We find some of the languages are directly translated, although this system has never seen examples of Japanese to Korean or Korean to Japanese,” says Google’s Schuster. It helps Google Pixel earbuds translate 40 languages in real time. But problems remain: names, numbers and dates are far from accurate. Google registered an 1,800 per cent growth in Indian language translations on mobile, and Indians are among the most active in feedback and corrections. “We receive over 10 million contributions from more than 5,00,000 Indians to improve translation,” says Schuster.
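Schuster’s arithmetic is worth making concrete. Translating every language directly into every other needs one model per ordered pair, which grows quadratically with the number of languages; routing through English (or folding the pairs into one shared model) grows only linearly:

```python
# Back-of-envelope model counts behind the pivot-through-English design.

def direct_models(n):
    """One model per ordered language pair: n * (n - 1)."""
    return n * (n - 1)

def pivot_models(n):
    """Each language translated to and from English only: 2 * (n - 1)."""
    return 2 * (n - 1)

# For Google Translate's 103 languages:
print(direct_models(103))  # 10506 separate pairwise models
print(pivot_models(103))   # 204 models when English is the pivot
```

That gap — roughly 10,500 models versus about 200 — is why a single multilingual model with a target-language indicator was the practical route.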

Amazon has taken up the challenge of enabling real time translation across languages. Another area is voice recognition: training Amazon’s voice assistant Alexa in languages other than English. Voice recognition is done using AI — taking a human voice, translating it into text, figuring out the (music) request, then converting text back to speech and playing the song. “At some point Alexa will even have conversations,” says Amazon’s Rastogi. Last year, Amazon launched a voice recognition capability, ‘Lex’, a text-to-speech service, ‘Polly’, and a third service, ‘Rekognition’, for visual similarities in images.

Personal digital assistants such as Apple’s Siri, Microsoft’s Cortana, Amazon’s Alexa and Google’s Assistant are all in the early stages of AI. “We’re on a path where devices will become more and more transparent, more and more natural, in their interactions,” says Qualcomm’s Gehlhaar.

For travel portals, the call centre at times accounts for up to 50 per cent of overall costs. If they could be even partially more efficient, it would be a direct contribution to the bottomline. Bangalore-based Tredence is doing just that for a global travel and ticketing portal.

Its algorithm takes voice samples of a conversation every second, processes them in real time for sentiment analysis via a natural language interface, and decides whether the discussion is satisfying or deteriorating. Accordingly, the system decides whether, and to whom, to escalate the call. “Real time estimation is still years away, but within 30 minutes, or at least within the day, a resolution is definite. It does that by escalating to the right authorities and not waiting for the call centre employees’ judgment,” says Shashank Dubey, co-founder, Tredence.
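The escalation logic can be sketched simply: score each second of conversation, keep a rolling average, and flag the call when sentiment deteriorates past a threshold. The keyword scorer, window and threshold below are stand-ins, not Tredence’s NLP:

```python
# Rolling-sentiment escalation sketch. The keyword scorer is a placeholder
# for a real sentiment model; window and threshold are arbitrary choices.
from collections import deque

NEGATIVE = {"refund", "angry", "useless", "cancel", "complaint"}

def sentiment(snippet):
    """Stand-in scorer: -1 per negative keyword hit, else +1."""
    words = snippet.lower().split()
    hits = sum(w.strip("!?.,") in NEGATIVE for w in words)
    return -hits if hits else 1

def should_escalate(snippets, window=3, threshold=-0.5):
    """Flag the call once the rolling average sentiment drops below threshold."""
    recent = deque(maxlen=window)
    for s in snippets:
        recent.append(sentiment(s))
        if len(recent) == window and sum(recent) / window <= threshold:
            return True
    return False

calm = ["hello", "booking please", "thank you"]
angry = ["this is useless", "i want a refund", "angry and done"]
print(should_escalate(calm), should_escalate(angry))  # False True
```

In production the flag would route the call to a supervisor rather than just return a boolean, but the deteriorating-trend trigger is the essence of the approach described above.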

Tredence and the travel portal are now using the same methodology to embed intelligence in voice recognition and are working on an intelligent chatbot. The chatbot and IVR combined can then divert traffic away from the call centre into an automated resolution system without affecting customer satisfaction.

The world of AI, however, is expanding at a rapid pace, and efforts are aimed at problems ranging from the fundamental to the cosmetic. Google’s AI division, for instance, uses a technique called WaveNet to make the virtual assistant’s voice sound more human-like. WaveNet uses actual speech to train the neural network, then uses statistics to generate more natural waveforms.


Machines need to be trained with enormous amounts of data to churn out the kind of results expected of them. Amazon, for instance, has been using machine learning algorithms to make product recommendations since the ’90s. Both Google and Amazon use machine learning extensively to understand customer preferences but, more importantly, for advertising — specifically, deciding which advertisements to show to which customer.

When Google switched Google Translate to neural networks, English to French alone required 2 billion sentence pairs to train. “The training time is 2-3 weeks for one language pair for one model,” says Google’s Mike Schuster.

Manipal was lucky. Watson’s oncology capabilities came pre-trained at one of the world’s best known cancer hospitals — the New York-based Memorial Sloan Kettering Cancer Center.

But when Google began working with Aravind Eye Hospitals and Sankara Nethralaya, it gathered as many as 1.3 lakh images graded by 54 ophthalmologists, who had rendered 880 diagnoses for those images. Google fed that into its neural network.

“We retrained it to identify diabetic retinopathy. It does the five-point grading and it also gives a reading on how good the image is — is it gradable?” says Google’s Lily Peng.

In the US, AI is being trained to identify mob thieves who enter retail stores, steal items and leave as a mob. For new product introductions, cameras are beginning to read human emotional reactions to a new product, to assess whether it will do well or not.


Future computing workloads will arise from biometrics, object tracking for IP cameras and AR/VR applications, besides IoT devices that will talk to each other all the time. One of the greatest challenges for all forms of AI is that today it can mostly be processed only on servers. If computing threatens to be the bottleneck to mass adoption of AI, why not distribute that computing between cloud servers and devices? Widespread adoption requires at least a part of the capability, if not all of it, to reside at the terminal end — phones, laptops, tablets and future devices such as wearables. That may be some time away.

For Google Translate, for instance, all the translation currently happens on the server, not on the phone. But in smartphones, natural language processing, fingerprint scanning and image recognition are already handled on the device.

Yet, that is not going to be possible where high-performance computing is required. An autonomous car, for instance, needs to be connected to the cloud, but for it to be fully autonomous, everything it needs to operate should be fully on board. And more is required. Engineering giant ABB is developing collaborative dashboards for CEOs, or a company’s opex managers, that aggregate key performance indicators (KPIs) of all their plants at the corporate office in real time. “They would like to see the KPIs of all the plants in their corporate office and then take decisions, do the fleet management and optimise the assets. They don’t have to keep a lot of inventory of spares,” says ABB India’s Chief Technology Officer Akilur Rahman.

Qualcomm is working on compression techniques, while other tech firms are working on technologies to provide high-performance networks. But Qualcomm also sees an opportunity in devices working in tandem to create higher processing capability. For instance, interconnected devices in a home could all work together to create a joint model.

“All of our big partners, such as Facebook and Google, are interested in moving the workload to the device. In markets like India, where connectivity is not so good, they want to provide the best experience on the device,” says Gehlhaar.

You might imagine AI as a completely self-aware machine, but we are nowhere near producing that kind of device just yet. Still, tech firms are working on, for instance, fully on-device machine-learning-powered security and biometrics, fully on-device natural language interfaces, and fully on-device photo enhancement. “These things are very achievable. But they’re a long stretch from sci-fi AI,” says Gehlhaar.

Given the enormity of the challenge, at least some of the competition is giving way to co-opetition to avoid reinventing the wheel. Last month, Amazon Web Services and Microsoft combined their deep learning abilities into a library called Gluon, which will let developers build machine learning models and apps on the platform.

Many of the things that matter most are actually quite simple to understand. For example, AI needs far better and more precise data; some of the data today is complete junk. “Let’s be clear. We’ve not solved AI. We’ve not created the intelligence of humans. We’re not there,” says Amazon’s Rastogi.

That is just why the human brain remains the greatest inspiration for AI. But science still has little clue about how the brain functions and, more importantly, how it learns. If it did, that would be the easiest ‘tech’ to adapt for AI machines — provided humans, barring a rogue Frankenstein, would ever cede control over machines. Until then, the quest goes on.


Aravind Eye Hospitals and Sankara Nethralaya use Google’s TensorFlow tools to diagnose diabetic retinopathy early
