Business Today

THE GREAT DEBATE

The tech world is in the grip of a raging debate on AI, with Tesla and SpaceX founder Elon Musk and Facebook founder Mark Zuckerberg engaged in a public verbal duel

- @rajeevdubey

“I think people who are naysayers and try to drum up these doomsday scenarios, I don’t understand it. It’s really negative, and in some ways I think it’s pretty irresponsible.” MARK ZUCKERBERG, CEO, Facebook

“AI could spell the end of the human race.” STEPHEN HAWKING, Physicist

“I am in the camp that is concerned about super intelligence… First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that, though, the intelligence is strong enough to be a concern.” BILL GATES, Co-Founder, Microsoft

While McKinsey Global Institute reports high tech, telecom, and financial services to be the earliest adopters of machine learning and AI in the world, in India the earliest use cases are in healthcare, HR and e-commerce. Yet, in each case, it’s truly disruptive. Some examples:

DEEP LEARNING: SIMULATING THE BRAIN

The fidget spinner goes on and on…and on, only to slow down when he pauses to make a point. Then, again. This time ever so vigorously, as Akhil Gupta narrates angrily, and somewhat mischievously, a murderous attack on the staff and office of nobroker.com, the company Gupta co-founded with three others to eliminate brokers from property renting and buying. Nearly 60 local brokers stormed into the company’s office in a Bangalore suburb, smashed furniture and computers, and thrashed Gupta, co-founder Amit and other staff. They were venting their frustration, as much at the company as at its ability to bar them from the website, even when they posed as genuine customers.

Little did they know, says Gupta mischievously, that nobroker deployed multiple AI tools to identify, and shut out, brokers. Gupta won’t say how he did it; that’s the trade secret. He drops broad hints, though: brokers have a peculiar search pattern and leave a digital trail on the Internet. Google’s machine learning software TensorFlow, Google Analytics, speech recognition and optical character recognition, used in tandem, were able to identify them.

Just as IBM offers Watson’s deep learning, Google has TensorFlow and Microsoft has Azure Cloud. Facebook has open-sourced Caffe, its own deep learning module, and Amazon backs MXNet, an open-source deep learning framework.

Bangalore-based Tredence was founded as recently as 2013. Today, it has a $12-million business in making sense of unstructured data. For instance, it built a model for one of the world’s biggest FMCG firms to work out where ice cream trikes should be located in a city for the best return on investment.

Based on the client’s data, the model scrapes the Internet for public information such as competitors’ trikes and the location of schools, hospitals, shopping areas and historical sites, even traffic and demographic data, to suggest where the ice cream trikes should be placed. Going by results so far, it could grow the global ice cream business by 8-10 per cent. It has since been launched in Durban, Bangkok, Madrid and a few cities in Pakistan and India, and a pan-European launch is now planned.
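A location-scoring model of this kind can be sketched in a few lines: score each candidate spot by the points of interest near it, then pick the best. The categories, weights and radius below are invented for illustration; Tredence’s actual features and model are not public.

```python
from math import hypot

# Hypothetical weights per point-of-interest category; a competitor's
# trike nearby counts against a location.
POI_WEIGHTS = {"school": 3.0, "shopping": 2.5, "hospital": 1.0, "competitor": -2.0}

def score_location(candidate, pois, radius=1.0):
    """Sum the weights of all points of interest within `radius` km of a
    candidate trike location (coordinates in km on a flat grid)."""
    score = 0.0
    for kind, x, y in pois:
        if hypot(candidate[0] - x, candidate[1] - y) <= radius:
            score += POI_WEIGHTS.get(kind, 0.0)
    return score

def best_location(candidates, pois):
    """Pick the candidate with the highest nearby-POI score."""
    return max(candidates, key=lambda c: score_location(c, pois))
```

In practice, traffic and demographic data would enter as further weighted terms, and the weights themselves would be fitted from historical sales rather than chosen by hand.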

“If I were to guess ... what our biggest existential threat is, it’s probably that (AI). So we need to be very careful with AI. Increasingly scientists think there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. With AI we are summoning the demon ... Zuckerberg’s understanding of the subject is limited” ELON MUSK, Co-Founder, Tesla and SpaceX

At Devanakonda village in Kurnool district of Andhra Pradesh, Microsoft’s cloud agriculture project deployed artificial intelligence and machine learning, big data and analytics to improve crop yields. For rain-fed crops, the timing of sowing is the biggest differentiator between a good crop and a failed one. Microsoft used the Azure Cloud platform to compute short-term weather predictions, soil quality data and previous crop history, and to send regular updates to local farmers on their phones in their native language, including telling them when not to sow. When the model computed that soil moisture was sufficient for seed germination and the weather forecast predicted more rainfall, it pinged farmers to sow. Those who followed the model’s prediction reaped a 30 per cent higher yield.
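The core advisory rule described here, sufficient soil moisture plus a wet forecast triggers a sow alert, can be sketched as a toy decision function. The thresholds are invented; Microsoft’s actual model draws on far richer weather and soil data.

```python
def sowing_advice(soil_moisture, rain_forecast_mm,
                  moisture_threshold=0.3, rain_threshold=10):
    """Toy rule: recommend sowing only when soil moisture is sufficient
    for germination AND the short-term forecast predicts more rain;
    otherwise advise waiting."""
    if soil_moisture >= moisture_threshold and rain_forecast_mm >= rain_threshold:
        return "SOW"
    return "WAIT"
```

The real system's value lies in computing these inputs well (forecasts, crop history, soil quality) and in delivering the result as a phone alert in the farmer's own language.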

What began with sowing is now widening to soil nutrition, in collaboration with the research institute ICRISAT, and to recommendations on when to apply fertiliser and what kind of fertiliser or weedicide to use. The government of Telangana has now signed an MoU with Microsoft to deploy the concept across the state.

Diabetic retinopathy patients need to be screened at least once a year to prevent vision loss. A specialised camera takes a shot of the retina, which is then graded by doctors on a five-point scale. Grading is complex and specialised work, as doctors need to look for very small lesions, and at times the smaller lesions get missed. In many parts of the world, a shortage of eye-care professionals means diagnosis is delayed until vision is already lost. This is entirely preventable.

Just about a year ago, a chance encounter brought together a Google employee and doctors at Aravind Eye Hospitals and Sankara Nethralaya, who had already begun screening their patients for diabetic retinopathy. “This effort was occurring in parallel without any of us realising. We came up with this project in collaboration,” says Lily Peng, Product Manager, Google Brain AI Research Group.

The data was fed to the machine learning framework TensorFlow. “We’re studying the impact on efficiency. It will increase the reach of the screening programmes into rural areas. We hope this will democratise healthcare,” says Peng, who is now taking the programme to US hospitals. Google says the technology can be deployed in analogous applications such as cancer biopsies, where one in 12 is misdiagnosed.

In December 2016, Microsoft and Hyderabad’s LV Prasad Eye Institute announced a global programme, Microsoft Intelligent Network For Eyecare (MINE), to use AI to prevent avoidable blindness and to provide eye care services at scale around the world. Of the 285 million visually impaired people worldwide, 55 million are in India. Microsoft will build AI models for eye care, leveraging the Cortana Intelligence Suite.

If you are a seller on Amazon, your application to the e-commerce giant for a loan would likely be approved or rejected by a machine. Amazon uses machine learning not just to help sellers identify new products to grow their business but also to assess the risk associated with sellers before it lends to them through the Amazon Lending business.

“We use machine learning to identify fraudulent sellers. We have so much past data: the number of times customers complained; how many times he didn’t ship the product or shipped a broken product. Based on the data, we can predict,” says Rajeev Rastogi, Amazon’s Director, Machine Learning.
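The kind of prediction Rastogi describes, turning a seller’s past record into a risk estimate, is classically done with a model such as logistic regression over behavioural rates. A toy sketch follows; the features, weights and bias are invented for illustration and are not Amazon’s actual model.

```python
from math import exp

def fraud_risk(complaints, not_shipped, broken, total_orders,
               weights=(4.0, 6.0, 3.0), bias=-4.0):
    """Toy logistic risk score in [0, 1] computed from a seller's history.
    In a real system the weights and bias would be learned from labelled
    past data rather than hand-picked."""
    if total_orders == 0:
        return 0.5  # no history at all: maximally uncertain
    rates = (complaints / total_orders,
             not_shipped / total_orders,
             broken / total_orders)
    z = bias + sum(w * r for w, r in zip(weights, rates))
    return 1 / (1 + exp(-z))  # logistic squashing into a probability
```

A lending decision would then compare this score against a threshold, alongside the growth signals the article mentions.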

Machine learning and AI power multiple features at ride-hailing firm Ola, such as its ride-sharing feature and Ola Play, the connected car platform where customers can listen to music and radio and watch TV shows. AI has helped rural e-commerce company Storeking tell its retailers what products to purchase and stock, depending on parameters such as what other retailers are buying in that geography. By suggesting products that are more likely to move, it frees up their working capital.

In the manufacturing world, ABB is working on connected and collaborative robots, where humans interact and work with robots side by side, not behind a fence. “That’s sensoring, real time analytics of what the human is doing and what the robot is doing, to ensure that they can work together,” says Wilhelm Wiese, head of ABB’s IDC centre in Bangalore.

ABB is already into fleet management of ships. It is combining its predictive maintenance technologies with geo-tagging and weather reports for real-time feedback on a ship’s performance. “The most expensive thing that can happen in shipping is if it stalls on the high seas. We’re telling the captain: you have a problem with this machine; if you slow down your speed by 30 per cent, then you will be able to reach the next harbour,” says ABB’s Wiese.

INDIA’S BIG OPPORTUNITY

Mercurial tech investor Mark Cuban believes AI will likely create the world’s first dollar trillionaire. Could that be an Indian? Far from it. According to a McKinsey Global Institute paper on AI, in 2016 alone companies globally spent up to $39 billion on developing AI; US companies accounted for 66 per cent of that, followed by Chinese companies at 17 per cent. Consulting firm PwC believes AI could grow global GDP by some $30 trillion by 2030, almost half of that in China. So where does that leave India?

AI is where India can create a natural advantage, just as it did in IT and ITeS. It has an English-speaking population and millions of tech professionals. Most importantly, it generates DATA, the great fuel behind AI. Those are the building blocks to emerge as the premier global hub for AI-based products, services and apps. “Oil refiners make more money than drillers. People who will refine the data will ultimately make much more money than people who create data,” says IBM’s Mehrotra.

But it requires a holistic approach from the government. India must leverage its natural alignment with the US. The world’s foremost Tier I players—Google, Amazon Web Services, IBM, Facebook and Microsoft—are all US-based but have restricted or limited presence in China. A bustling AI economy has the ability to generate millions of high-end jobs. The MIT Sloan Management Review says that each innovation job creates at least 5 other jobs—just what India needs right now.

THE EYES: OPTICAL RECOGNITION

A potential customer of a Polish bank uploads a photo of his ID and appears for a video verification. If there is a match between the video and the ID, it then has to be ascertained whether the ID is a genuine national ID or a forgery. The online verification is approved or rejected by Bangalore-based Signzy, even though officers of the Polish bank eventually accept the applicant as their customer.

“Things that a human would have done by looking at them are being done by APIs and algorithms,” says Ankit Ratan, co-founder, Signzy. It has shrunk the manual customer verification process from three weeks to less than three days, with verification and matching happening in real time. Signzy deployed IBM Watson’s machine learning capabilities for identifying images, converting speech to text and transforming documents into digital form. Besides global banks and financial institutions, Signzy provides auto-verification services to SBI, ICICI, MSwipe, PayU and LinkedIn.

Until a year ago, Amazon.com was a truly global e-commerce company, and yet it was not: only products listed in English could be sold around the world. A listing in Italian, for instance, couldn’t even be discovered within the EU.

Amazon used machine translation to surface products listed in Italian across its eight European markets in other languages, including German, French, Spanish and English, and vice versa. The technology is now being deployed in other parts of the world. “At some point it would make sense for us to do pages between Indian languages,” says Amazon’s Rastogi.

Chennai-based Textient has created a cognitive analytics platform to sift through conversations on the Internet, such as product reviews and social media chatter, to provide insights to companies on their products, services or brands.

“WE USE MACHINE LEARNING TO IDENTIFY FRAUDULENT SELLERS. WE HAVE SO MUCH PAST DATA; BASED ON THAT WE CAN PREDICT” RAJEEV RASTOGI, Director, Machine Learning, Amazon

“We understand human thinking and behavioural aspects. Decoding this is complex because what lies underneath is a psychological aspect. We take more than 50 parameters of a human being,” says Sankar Nagarajan, Founder, Textient.

An insurance firm in the West installs cameras on car dashboards to analyse whether a bump is an accident (in which case alerts need to go to the right people). The system analyses motion, speed and other signals to assess the state of the vehicle and, potentially, the driver’s behaviour. Noida-based The Smart Cube partners with the firm on such AI-driven video analytics.

In another offering, The Smart Cube scours the Internet to alert clients to inherent risks in their supply chains. The company built the product for its pharma clients, who outsource nearly all of their manufacturing and hence need early alerts in case of an ‘event’. It tracks in real time what is being published about a client’s suppliers on websites, in the media and on social media. The objective is to figure out whether that information is ‘risky’ in terms of financial, strategic, materials-shortage and reputational risks, even the risk of a CEO’s exit. “Global supply chains have become very complex and very tight. If there’s a risk to one supplier, there’s a risk to you as a manufacturer,” says The Smart Cube founder Sameer Walia. The entire engine is AI-based and uses machine learning and natural language processing to assess whether something is risky, the category of risk and its level. It sends alerts to category managers and owners so that preventive or proactive action can be taken.

Used-vehicle marketplace Droom uses AI for discounting and promotions. The AI engine takes into account more than 100 factors to determine the discount, including buyer and seller behaviour, buyer and seller history, as well as vehicle details. “In the past, we had just one discount. Now, we have two lakh combinations of discounting; humanly, we could only move from one to five,” says Founder and CEO Sandeep Agarwal.
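The jump from one discount to two lakh combinations comes from crossing factor levels: every added factor multiplies the number of pricing cells, so 100-plus factors quickly exceed what humans can manage by hand. A sketch with three invented factors shows the combinatorics; Droom’s actual factors are not public.

```python
from itertools import product

# Hypothetical discount factors with three levels each; Droom's real
# engine reportedly uses more than 100 factors.
FACTORS = {
    "buyer_history": ["new", "repeat", "power"],
    "seller_rating": ["low", "medium", "high"],
    "vehicle_age":   ["0-3", "3-7", "7+"],
}

def all_combinations(factors):
    """Enumerate every combination of factor levels; each combination
    could carry its own discount. Three 3-level factors already give
    3 * 3 * 3 = 27 distinct cells."""
    keys = sorted(factors)
    return [dict(zip(keys, values))
            for values in product(*(factors[k] for k in keys))]
```

In a live engine, each cell would map to a discount learned from conversion data rather than being set manually.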

TRUSTING THE BOTS

Jet Airways customer on Twitter (sarcastically): Thanks @jetairways for dropping me in Kolkata and my bags in Hyderabad. Jet Airways’ reply: Thanks. Glad you enjoyed our services.

Sarcasm is surely not one of the virtues of chatbots just yet, even though they have revolutionised customer care by replacing humans as the first point of contact. Bots still don’t understand the more extreme human emotions, including frustration, anger, taunting or delight. That’s why they’re right at the bottom of the AI evolution curve and will mostly fail the Turing Test. That’s as much a problem as an opportunity.

Just as the Jet Airways chatbot fell prey to sarcasm, most brand or corporate chatbots are either that silly or churn out standard, sanitised and mostly boring responses, because they are trained to work within the boundaries of pre-rehearsed FAQs (frequently asked questions). When Microsoft tried an AI-powered chatbot, ‘Tay’, on social media platforms, it exposed real dangers. In under 24 hours, its tweets became racist, including “Hitler was right”. Microsoft took down the bot for ‘adjustment’; Tay’s handle has remained silent since.

This July, Facebook shut down bots Bob and Alice (being trained to negotiate with each other) when it realised the two had diverged to develop their own language. In one of the exchanges, Bob began by saying “I can i i everything else”. To that, Alice said, “Balls have zero to me to me to me…” The two had created an ‘efficient’ language using variations of these sentences. It may seem gibberish to a casual observer, but the Facebook AI Research lab was alarmed.

Yatra.com agrees its chatbots may not be 100 per cent accurate, but in most cases they are able to resolve customer queries on FAQs: “How do I cancel?”, “What is the refund process?”, “I want to reschedule my flight”, among others. What would it take for machines to understand human emotions? “More data and more examples,” says Amazon’s Rastogi. Deep learning, where information is processed in stacked layers, changed the game, but these are complex models that are computationally intensive to train and require lots of data.

THE EARS: NATURAL LANGUAGE PROCESSING

Around 50 per cent of all Internet content is in English, but only 20 per cent of the world reads English. That’s the greatest translation challenge for humanity. A decade ago, Google took up the challenge with Google Translate, which now has one billion monthly active users, covering 99 per cent of the online population, with almost 10 billion words translated daily.

It may be better than other translators, yet it flattered to deceive, especially in Asian languages, where most of Google’s potential new users lie. In September 2016, Google Translate moved to AI-based translations, beginning with Chinese to English. By November 2016, it had expanded to 16 language pairs across eight languages: Korean, Japanese, Chinese, Turkish, Portuguese, Spanish, German and French. Then came eight Indian languages. Google measured a 50-60 per cent improvement. In English-Korean, for instance, usage shot up 75 per cent within five months of the relaunch.

While the new system translated to and from English, it still could not translate directly between other languages. “We have 103 languages; we need 103-squared models to translate. That’s a lot and can’t be done, even by Google,” says Google’s Senior Staff Research Scientist Mike Schuster.

Google’s scientists’ solution: use English as an intermediate language to translate between non-English languages. They trained the system by putting language pairs into a single model, indicating the target language. “We find some of the languages are directly translated, although this system has never seen examples of Japanese to Korean or Korean to Japanese,” says Schuster. This helps Google’s Pixel earbuds translate 40 languages in real time. But problems remain: names, numbers and dates are far from accurate. Google registered an 1,800 per cent growth in Indian-language translations on mobile, and Indians are among the most active in giving feedback and corrections. “We receive over 10 million contributions from more than 5,00,000 Indians to improve translation,” says Schuster.
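The pivot idea is easy to sketch: with N languages, direct pairs would need on the order of N-squared models, while routing through English needs only models to and from English. The toy dictionaries below stand in for trained translation models; the point is the pivoting structure, not the vocabulary.

```python
# Toy word-level "models" standing in for trained translation systems.
JA_TO_EN = {"猫": "cat", "犬": "dog"}
EN_TO_KO = {"cat": "고양이", "dog": "개"}

def translate(word, model):
    """Look up a word in a toy model, falling back to the input if unknown
    (real systems degrade more gracefully than this)."""
    return model.get(word, word)

def pivot_translate(word, src_to_en, en_to_tgt):
    """Translate between two non-English languages via English, avoiding
    a dedicated model for every language pair."""
    return translate(translate(word, src_to_en), en_to_tgt)
```

Google’s production system goes further: a single multilingual model learns to translate pairs it has never seen directly, the “zero-shot” behaviour Schuster describes.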

Amazon has taken up the challenge of enabling real-time translation between different languages. Another area is voice recognition: training Amazon’s voice assistant Alexa in languages other than English. Voice recognition uses AI to take human speech, convert it to text, figure out the request (a song, say), then respond and play it. “At some point Alexa will even have conversations,” says Amazon’s Rastogi. Last year, Amazon launched a voice recognition capability, ‘Lex’, a text-to-speech service, ‘Polly’, and a third service, ‘Rekognition’, for image analysis.

Personal digital assistants such as Apple’s Siri, Microsoft’s Cortana, Amazon’s Alexa and Google’s Assistant are all in the early stages of AI. “We’re on a path where devices will become more and more transparent, more and more natural, in their interactions,” says Qualcomm’s Gehlhaar.

For travel portals, the call centre can account for up to 50 per cent of overall costs. If it could be made even partially more efficient, that’s a direct contribution to the bottomline. Bangalore-based Tredence is doing just that for a global travel and ticketing portal.

“WE TAKE MORE THAN 50 PARAMETERS OF A HUMAN BEING (ON OUR AI PLATFORM)” SANKAR NAGARAJAN, Founder, Textient

Its algorithm samples the voice of a conversation every second, processes it in real time for sentiment analysis via a natural language interface, and decides whether the discussion is satisfying or deteriorating. Accordingly, the system decides whether to escalate the call, and to whom. “Real-time estimation is still years away, but within 30 minutes, or at least within the day, a resolution is definite. It does that by escalating to the right authorities and not waiting for the call centre employees’ judgement,” says Shashank Dubey, co-founder, Tredence.

Tredence and the travel portal are now using the same methodology to embed intelligence in voice recognition and are working on an intelligent chatbot. The chatbot and IVR combined can then divert as much traffic as possible away from the call centre into an automated resolution system without hurting customer satisfaction.
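The escalation logic described, continuously sampling sentiment and flagging a deteriorating call, can be sketched as a rolling-average monitor. The window size and threshold below are invented for illustration; Tredence’s actual system is not public.

```python
from collections import deque

def monitor_call(sentiments, window=5, threshold=-0.2):
    """Watch a stream of per-second sentiment scores (-1 bad .. +1 good)
    and return the first second at which the rolling average turns
    negative enough to warrant escalation; None if it never does."""
    recent = deque(maxlen=window)  # sliding window of the latest scores
    for second, score in enumerate(sentiments):
        recent.append(score)
        if len(recent) == window and sum(recent) / window < threshold:
            return second
    return None
```

Averaging over a window, rather than reacting to any single sour second, keeps one angry word from escalating an otherwise healthy call.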

The world of AI, however, is expanding at a rapid pace, and efforts are aimed at problems from the fundamental to the cosmetic. Google’s AI division, for instance, uses a technique called WaveNet to make the virtual assistant’s voice sound more human-like. WaveNet uses actual speech to train the neural network, then uses statistics to generate more natural waveforms.

TRAINING THE MACHINE

Machines need to be trained with enormous amounts of data to churn out the kind of results expected of them. Amazon, for instance, has been using machine learning algorithms to make product recommendations since the 1990s. Both Google and Amazon use machine learning extensively to understand customer preferences but, more importantly, for advertising: specifically, which advertisements to show to which customer.

When Google switched Google Translate to neural networks, for just English to French it required 2 billion sentence pairs to train. “The training time is 2-3 weeks for one language pair for one model,” says Google’s Mike Schuster.

Manipal was lucky. Watson’s oncology capabilities came pre-trained at one of the world’s best-known cancer hospitals, the New York-based Memorial Sloan Kettering Cancer Center.

But when Google began working with Aravind Eye Hospitals and Sankara Nethralaya, it gathered as many as 1.3 lakh images graded by 54 ophthalmologists, who had rendered 880 diagnoses for those images. Google fed all that into its neural network.

“We retrained it to identify diabetic retinopathy. It does the five-point grading and also gives a reading on how good the image is: is it gradable?” says Google’s Lily Peng.

In the US, AI is being trained to identify mob thieves who enter retail stores, steal items and leave as a mob. For new product introductions, cameras are beginning to read human emotional reactions to a product to assess whether it will do well.

SERVER OR TERMINAL

Future computing workloads will arise from biometrics, object tracking for IP cameras and AR/VR applications, besides IoT devices that will talk to each other all the time. One of the greatest constraints on all forms of AI is that, today, it can mostly be processed only on servers. If computing threatens to be the bottleneck to mass AI adoption, why not distribute that computing between cloud servers and devices? Widespread adoption requires at least a part of the capability, if not all of it, to reside at the terminal end: phones, laptops, tablets and future devices such as wearables. That may be some time away.

For Google Translate, for instance, all the translation currently happens on the server, not on the phone. But in smartphones, natural language processing, fingerprint scanning and image recognition are already being handled on the device.

Yet that’s not going to be possible where high-performance computing is required. An autonomous car, for instance, needs to be connected to the cloud, but for it to be fully autonomous, everything it needs to operate should be fully on board. And more is required. Engineering giant ABB is developing collaborative dashboards for CEOs, or a company’s opex managers, that aggregate key performance indicators of all its plants at the corporate office in real time. “They would like to see the KPIs (key performance indicators) of all the plants in their corporate office and then take decisions, do the fleet management and optimise the assets. They don’t have to keep a lot of inventory of spares,” says ABB India’s Chief Technology Officer Akilur Rahman.

Qualcomm is working on compression techniques, while other tech firms are working on technologies to provide high-performance networks. But Qualcomm also sees an opportunity in devices working in tandem to create higher processing capability. For instance, devices interconnected in a home could all work together to create a joint model.

“All of our big partners such as Facebook, Google are interested in moving the workload to the device. In markets like India where connectivity is not so good, they want to provide the best experience on the device,” says Gehlhaar.

You might imagine AI as a completely self-aware machine, but we’re nowhere near producing those kinds of devices just yet. Still, tech firms are working on, for instance, fully on-device machine-learning-powered security and biometrics; fully on-device natural language interfaces; fully on-device photo enhancement. “These things are very achievable. But they’re a long stretch from sci-fi AI,” says Gehlhaar.

Given the enormity of the challenge, at least some of the competitio­n is giving way to co-opetition to avoid reinventin­g the wheel. Last month, Amazon Web Services and Microsoft combined their deep learning abilities into a library called Gluon which will let developers build machine learning models and apps on the platform.

Many of the things that matter most are actually quite simple to understand. For example, AI needs far, far better and more precise data; some of today’s data is complete junk. “Let’s be clear, we’ve not solved AI. We’ve not created the intelligence of humans. We’re not there,” says Amazon’s Rastogi.

That is why the human brain remains the greatest inspiration for AI. But science still has little clue about how the brain functions and, more importantly, how it learns. If it did, that would be the easiest ‘tech’ to adapt in AI machines. That is, provided humans (barring a rogue Frankenstein) would ever cede control over machines! Until then, the quest goes on.

“THINGS THAT A HUMAN BEING WOULD HAVE DONE BY LOOKING AT THEM ARE NOW BEING DONE BY APIS AND ALGORITHMS” ANKIT RATAN, Co-Founder, Signzy

Aravind Eye Hospitals and Sankara Nethralaya use Google’s TensorFlow tools to diagnose diabetic retinopathy early
