into the neural depths

Philip M Parker, INSEAD, breaks down the nuances of the concept of deep learning.

PHILIP M PARKER IS INSEAD PROFESSOR OF MARKETING AND THE SCHOOL’S CHAIRED PROFESSOR OF MANAGEMENT SCIENCE.

When you ask Siri to find the nearest restaurant, do you ever wonder how it actually works? How can it recognize your speech and fulfill the task? The answer lies in ‘deep learning,’ which multinational companies such as Google and Microsoft are investing in heavily. In an exclusive interview with The Smart Manager, Philip M Parker explores the nuances of this technology and how it is increasingly disrupting businesses.

how is deep learning technology going to affect businesses worldwide?

‘Deep learning’ is a layer that sits on top of other big data applications. If there is a big data warehouse containing petabytes of data comprising images, text, and voice files, among other things, deep learning is a layer on top of it that allows you to extract knowledge or create new knowledge; that is the point of deep learning. This technology is not new: it has been in practice since the early 1970s, with the advent of neural networks.

Neural networks are algorithms that use backpropagation, which was developed in the mid-1970s to make inferences from data and recognize patterns. Their earliest applications were in particle physics, where people had to process and examine images of particles colliding with each other. Neural networks were developed to look at these images and decide whether real physical events had occurred. Rather than asking doctoral students to look at these images, researchers at the Stanford Linear Accelerator Center used algorithms for image pattern recognition.
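
For readers who want to see the mechanics, here is a minimal sketch of the idea, not the accelerator system itself: a tiny network trained by backpropagation to separate two invented classes of points. The toy data, layer sizes, and learning rate are all made up for illustration; real image-recognition networks are vastly larger, but the training loop is the same.

```python
# Minimal illustration of a neural network trained by backpropagation.
# The data, layer sizes, and learning rate are invented for illustration;
# real image-recognition networks are far larger but follow the same loop.
import numpy as np

rng = np.random.default_rng(0)

# Toy "images": 200 points in 2D, labelled 1 if they fall inside a circle.
X = rng.uniform(-1, 1, size=(200, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5).astype(float).reshape(-1, 1)

# One hidden layer of 8 units with random initial weights.
W1, b1 = rng.normal(scale=0.5, size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: compute the network's prediction.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the prediction error to each weight.
    d_out = (p - y) / len(X)                 # gradient at the output
    d_hidden = (d_out @ W2.T) * h * (1 - h)  # gradient at the hidden layer

    # Gradient-descent update.
    W2 -= 1.0 * (h.T @ d_out)
    b2 -= 1.0 * d_out.sum(axis=0)
    W1 -= 1.0 * (X.T @ d_hidden)
    b1 -= 1.0 * d_hidden.sum(axis=0)

accuracy = ((p > 0.5) == y).mean()
print(f"training accuracy after backpropagation: {accuracy:.2f}")
```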

Most of the early work in this area was in science, computer programming, and physics; its application in business is a relatively new phenomenon. We are at the beginning of the life cycle of this concept, and only a few companies are actively engaged in it. Many are engaged in big data, but they do not have the expertise to use Artificial Intelligence (AI) as a layer on top of it, and deep learning as another layer.

can you elaborate on the three different layers?

There are three layers: a layer of big data, which comprises massive data warehouses; a layer of deep learning; and, on top of that, an authoring layer, wherein after learning something, something original is authored. I am working particularly on the authoring layer. This concept was probably pioneered here at INSEAD, and also at MIT by Professor John Little, who noticed that many companies were getting scanner data from optical scanners but simply did not have the time to analyze it. Algorithms were used to detect events in the data, and a memo was then sent to marketing managers letting them know of any ‘news’ in it. It literally wrote news articles for managers, and right now we are witnessing the application of the authoring layer in a number of domains. We cannot look at deep learning in isolation: without data we do not have deep learning, and without the authoring layer you do not get much value out of deep learning.
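
As a rough illustration of what such an authoring layer does (the product names, figures, and threshold below are invented, not Professor Little’s actual system), a program can scan weekly sales figures for unusual movements and turn each one into a sentence a manager could read:

```python
# Illustrative sketch of an "authoring layer": detect events in sales data
# and write a plain-language memo about them. Product names, figures, and
# the 25% threshold are invented for illustration.
weekly_sales = {
    "Brand A cola": [120, 118, 122, 180],   # last week jumps sharply
    "Brand B chips": [300, 295, 305, 298],  # nothing unusual
    "Brand C soap": [80, 82, 79, 55],       # last week drops sharply
}

def detect_event(history, threshold=0.25):
    """Flag a product if its latest week moved more than `threshold`
    relative to the average of the preceding weeks."""
    baseline = sum(history[:-1]) / len(history[:-1])
    change = (history[-1] - baseline) / baseline
    return change if abs(change) > threshold else None

def write_memo(sales):
    lines = []
    for product, history in sales.items():
        change = detect_event(history)
        if change is not None:
            direction = "rose" if change > 0 else "fell"
            lines.append(f"{product} {direction} {abs(change):.0%} versus its "
                         f"recent average and may need attention.")
    return "Weekly memo:\n" + ("\n".join(lines) if lines else "No unusual movements.")

print(write_memo(weekly_sales))
```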

It was in the 1980s and early 1990s that we witnessed the first application of all three layers in the field of business. Deep learning is a collection of different algorithms that people use to build data representations, and to build them automatically without involving labor. Earlier, people had to physically tag photographs, but with deep learning today we have computer programs that mimic human behavior and learn as they recognize more and more patterns.

The problems that businesses face today are too vast to be solved through labor. The most effective way is to use big data with deep learning and automated authoring.

Automated authoring is undoubtedly growing rapidly. Right now, weather forecasts are available in 120 languages because of the application of the authoring layer. Most of the world’s languages did not have weather forecasts at the rural level or in the local dialect until we created them. It has many application areas: whatever you think a human being could potentially author, so could a computer algorithm trained with a deep learning algorithm. My website (totopoetry.com) uses algorithms to write poetry as well as to edit and fine-tune it. It first produced 4.5 million poems, which were edited down to 1.4 million by another computer program, based on quality. This algorithmic authoring has applications in many domains such as medical care (wherein a patient can type in symptoms and get a diagnosis), crop care, and livestock care. I received a patent for this technology from the US Patent Office back in 2000. It is, however, new and not yet prevalent across organizations worldwide; I would say less than one-tenth of one per cent of companies are using this. However, the firm that I started, the Icon Group, has published over one million books written fully by computer programs, and these are distributed on Amazon.com.

This reliance on big data and related technologies is a natural progression for companies that possess the skills to exploit data and/or the requirement to do so. So it is natural for Amazon to recommend the next movie or for an ecommerce player to recommend more products; this is what salespeople have been doing in stores for a hundred years. They notice what you buy and then make a recommendation. Recommendation engines often rely on a layer of deep learning. The next layer on top of that will be the authoring layer, which would actually have an artificial 3D salesperson helping and guiding users in their shopping experience.
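
A highly simplified sketch of the recommendation idea follows, using plain co-occurrence counts in place of a learned deep learning model; the purchase histories are invented for illustration.

```python
# Simplified recommendation sketch: suggest the items most often bought
# together with what the customer already has. Real engines replace the
# co-occurrence counts with learned representations; the purchase
# histories below are invented for illustration.
from collections import Counter
from itertools import combinations

past_baskets = [
    {"camera", "memory card", "tripod"},
    {"camera", "memory card"},
    {"laptop", "mouse"},
    {"camera", "tripod"},
    {"laptop", "mouse", "keyboard"},
]

# Count how often each pair of items appears in the same basket.
pair_counts = Counter()
for basket in past_baskets:
    for a, b in combinations(sorted(basket), 2):
        pair_counts[(a, b)] += 1

def recommend(owned, top_n=2):
    scores = Counter()
    for (a, b), count in pair_counts.items():
        if a in owned and b not in owned:
            scores[b] += count
        elif b in owned and a not in owned:
            scores[a] += count
    return [item for item, _ in scores.most_common(top_n)]

print(recommend({"camera"}))  # e.g. ['memory card', 'tripod']
```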

Deep learning is an application area of AI within which there are other applications people are working on, such as image, voice, and facial recognition; recommendation engines; tracking criminals and criminology; and writing new books or authoring original content.

how is deep learning different from AI?

We need to see AI as a collection of various programs. It is anything that can mimic an intelligent being. A pocket calculator that solves basic mathematical problems is also a form of AI: it is faster than a human being, might be more accurate too, and is programmed to give answers to questions. Deep learning, however, does not belong to this category of AI. It is an application area of AI within which there are other applications people are working on, such as image, voice, and facial recognition; recommendation engines; tracking criminals and criminology; and writing new books or authoring original content. Most of the languages in the world do not have recommendations on how a farmer should grow crops. I am working on a project where we used an authoring engine on top of the deep learning engine to come up with [information on] optimal crops, weather forecasts, and so on. These are the application areas within deep learning.

Deep learning is about representing the existing data, finding a pattern, and then extrapolating from that pattern. Finding a pattern in data is a simple thing to do. Natural language parsing is a part of deep learning. Take a sentence like ‘George Washington was the first President of the United States’; the word ‘was’ is a magic parsing word in deep learning. That one statement can answer two questions: ‘who was George Washington?’ and ‘who is the first President of the United States?’ Deep learning algorithms run through the billions of sentences that have ever been published in science and technology and extract knowledge, and are therefore able to answer questions.
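
A deliberately simplified sketch of that parsing idea follows; a real system parses billions of sentences far more robustly, but the principle of splitting a statement around ‘was’ and answering questions in both directions is the same.

```python
# Highly simplified sketch of the parsing idea described above: split a
# statement around the word "was" and use both halves to answer questions.
# Real deep-learning systems handle billions of sentences far more robustly.
facts = {}

def learn(sentence):
    subject, _, description = sentence.rstrip(".").partition(" was ")
    if description:
        # The same statement answers a question in either direction.
        facts[f"who was {subject.lower()}?"] = description
        facts[f"who was {description.lower()}?"] = subject

def answer(question):
    return facts.get(question.lower(), "I do not know.")

learn("George Washington was the first President of the United States.")
print(answer("Who was George Washington?"))
# -> the first President of the United States
print(answer("Who was the first President of the United States?"))
# -> George Washington
```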

The interesting part is whether it can author something new, since it knows everything. For example, some day there could be a cure for malaria, and there will be a sentence: the cure for malaria is ‘something’. An authoring layer on top of deep learning allows it to actually write, ‘the most likely cure for malaria will be...’. So that is the next frontier. Companies such as IBM and Microsoft are working in this area, where we not only recognize photos but also say, “now that I know what the photographs are, I can tell where a plant disease is likely to spread the fastest.” It is about coming up with original insights and conclusions that human beings simply would not have the time to analyze. In addition, deep learning algorithms have been used for simple things such as playing a game of chess. It learns the opponent’s weaknesses and then comes up with optimal moves, similar to rules-based algorithms, which get better when trained more.

which are the industries where deep learning will have a significant impact, and how?

Deep learning technology will have the most impact in image and voice recognition. It will also be used in areas such as criminology, and in the pharmaceutical industry for discovering new compounds and drugs. It will also be instrumental in developing next-generation search engines. Today, search engines use the legacy technology of spidering the whole internet and discovering the best information on whatever you type. However, with data growing so fast, there will not be enough hard disk space to store it, so deep learning will have to replace the current search engine technology.

I am working on a project (totoGEO), funded by the Bill and Melinda Gates Foundation, that is utilizing deep learning techniques to bridge the content divide by authoring new content and educational materials in underserved languages. It involves natural language processing that goes through billions of texts, learns from them, and then reproduces the content in all of the world’s languages. Right now, there is a major content divide between the people who generate content and knowledge and those who might actually need it the most. A bookstore in the US has a big self-help section, whereas a French bookstore has a smaller self-help section because fewer people speak French, and hence it is not economical for the publisher to publish those books in French. The further down the list of languages you go, such as the 19 languages of India, for example, the less is published within those languages. If you type a word like ‘molecule’ or ‘macromolecule’ in Hindi, there will not be any results on the internet.

Deep learning can also help in translation. In businesses, it is being used for micro-segmentation and micro-target marketing, to learn more about the micro-needs of customers and their behavior and transfer that knowledge to decision-makers so that they can shape better business strategies instead of making gut-based decisions. A retail chain, for example, has thousands of products, but no one can be an expert on all of them; they can, however, leave the decision-making to the algorithms and machines. I believe that eventually marketing managers will be replaced by computer algorithms. Even journalists will be. A US-based company, Narrative Science, writes more news articles than all the journalists in the country. Many professions will be affected in the next ten years.
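
As a rough sketch of micro-segmentation (the customer figures and the choice of two segments are invented, and real systems learn segments from far richer behavioral data), a simple clustering step can group customers by their behavior:

```python
# Minimal micro-segmentation sketch: cluster customers on two behavioural
# measures (visits per month, average basket value). The figures and the
# choice of two segments are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
customers = np.vstack([
    rng.normal([2, 15], [0.5, 3], size=(20, 2)),   # occasional, small baskets
    rng.normal([12, 60], [2.0, 8], size=(20, 2)),  # frequent, large baskets
])

# Plain k-means with k = 2.
centers = customers[rng.choice(len(customers), 2, replace=False)]
for _ in range(20):
    distances = np.linalg.norm(customers[:, None, :] - centers[None, :, :], axis=2)
    labels = distances.argmin(axis=1)
    centers = np.array([customers[labels == k].mean(axis=0)
                        if np.any(labels == k) else centers[k]
                        for k in range(2)])

for k, center in enumerate(centers):
    print(f"segment {k}: ~{center[0]:.0f} visits/month, "
          f"~{center[1]:.0f} average basket value, "
          f"{(labels == k).sum()} customers")
```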

The interesting part is that deep learning has already made its impact on most people, but they have not realized that it is impacting them.

can you discuss the challenges one might face while adopting deep learning technologies?

Almost 98% of senior managers have never heard about deep learning. A few IT and technology managers have read about it in magazines or on Wikipedia, but they are not actively engaged in it, because right now they are getting the first layer, the big data layer, settled. In the second layer, they might be doing some simplistic report writing, but nothing beyond that. The third layer has only a handful of companies involved in it. One of the barriers is lack of knowledge; another is that even if they have heard of it before, the IT teams might not be well trained for it. This work entails a combination of mathematics, computer programming, and, in some cases, special skills based on the needs of the project. For a project involving sound, we need people with knowledge of three disciplines (mathematics, computer science, and acoustics engineering), and this makes it difficult to find people qualified for it. People feel its impact but they do not know the origin of the impact. They will hear the weather report but will not know the technology used to produce it. They might see a video being recommended on a website but are unaware of the fact that deep learning was used to come up with the recommendation. The interesting part is that deep learning has already made its impact on most people, but they have not realized that it is impacting them. ■
