From Intuition to Algorithm: How to Leverage Machine Intelligence

Rotman Management Magazine | By Howard Yu

In our march towards the age of machine automation, self-taught algorithms will play an increasing role in organizing our economic activities.

IN FEBRUARY 2011, IBM made a deep impression on the American public when its supercomputer Watson beat human contestants in the popular game show Jeopardy! About 15 million viewers watched live as Watson triumphed over former champions Ken Jennings and Brad Rutter. It was an episode that made clear in the public mind that machine learning could go beyond the single-minded focus of number crunching.

At the end of the two-day Jeopardy! tournament, Watson had amassed $77,147 in prize money, more than three times what either of its human opponents had accumulated. Jennings, who had previously won more than 50 straight matches, came in second, just ahead of Rutter. “Just as factory jobs were eliminated in the 20th century by new assembly-line robots, Brad and I were the first knowledge-industry workers put out of work by the new generation of ‘thinking machines’,” said Jennings at the time.

Watson represented a machine that no longer blindly followed instructions. The machine could digest unstructured data in the form of human language and then make judgments on its own, which in turn has profoundly changed the way businesses value managerial expertise. One financial services executive put it succinctly:

“Consider a human who can read essentially an unlimited number of [financial] documents and understand those documents and completely retain all the information. Now imagine you can ask that person a question: ‘Which company is most likely to get acquired in the next three months?’ That’s essentially what [Watson] gives you.”

Wise Counsel in the Making

Every day, medical journals publish new treatments and discoveries. On average, the torrent of medical information doubles every five years. However, given the work pressure in most hospitals, physicians rarely have enough time to read. It would take a primary care doctor dozens of hours each week to read up on everything and stay informed. Eighty-one per cent of physicians have reported that they could spend no more than five hours per month poring over journals. Not surprisingly, only about 20 per cent of the knowledge that clinicians use is evidence-based. The sheer amount of new knowledge has overwhelmed the very limits of the human brain and, thus, rendered expert intuition, once powerful machinery, powerless.

David Kerr, director of corporate strategy at IBM, recalled how Patricia Skarulis, the chief information officer at Memorial Sloan Kettering Cancer Center (MSK), reached out to him. “Shortly after she watched the Watson computer defeat two past grand champions on Jeopardy!, she called to tell me that MSK had collected more than a decade’s worth of digitized information about cancer, including treatments and outcomes,” Kerr said in an interview. “She thought maybe Watson could help.”

As the world’s largest and oldest dedicated cancer hospital, MSK had maintained a proprietary database that included 1.2 million in-patient and out-patient diagnoses and clinical treatment records from the previous 20-plus years. The vast database also contained the full molecular and genomic analyses of all lung cancer patients. But unlike lab researchers, hospital doctors routinely make life-or-death decisions based on hunches. A doctor has no time to go home and think over the results from all the medical tests given to a patient; treatment needs to be decided on the spot. Unless there is an intelligent system to mine for insights and make them instantaneously available to doctors, the deluge of information won’t improve their ability to make the right call.

In March 2012, MSK and IBM Watson started working together with the intention of creating an application that would provide recommendations to oncologists who simply described a patient’s symptoms in plainspoken English. When an oncologist entered information, such as ‘my patient has blood in his phlegm’, Watson would come back within half a minute with a drug regimen to suit that individual. “Watson is a tool that processes information, fills the gap of human thoughts. [It] doesn’t make the decision for you, that is the realm of the clinician, but it brings you the information that you would want anyway,” said Dr. Martin Kohn, chief medical scientist at IBM Research.

For Patricia Skarulis at MSK, the real aim was to build an intelligence engine that would provide specific diagnostic test and treatment recommendations. More than a search engine on steroids, it would transfer the wisdom of experienced doctors to those with less experience. A physician at a remote medical centre in China or India, for instance, could have instant access to everything that the best cancer doctors had already taught Watson. And if MSK’s ultimate mission as a non-profit is to spread its influence and deliver cutting-edge healthcare around the world, an expert system like IBM Watson is the essential carrier.

In early 2017, a 327-bed hospital in Jupiter, Florida, signed up for Watson Health with the precise intention of taking advantage of the supercomputer’s ability to match cancer patients with the treatments most likely to help them. Since a machine never gets tired of reading, understanding and summarizing, doctors can take advantage of all the knowledge that’s out there. WellPoint has claimed that, according to tests, Watson’s successful diagnosis rate for lung cancer is 90 per cent, compared to 50 per cent for human doctors.

For most executives, these technologies still feel foreign. How can an existing business, especially one in a non-IT sector, begin to leverage the shift towards knowledge automation? Among business school academics, the ‘network effect’ is a common refrain that explains the rise of Uber, Airbnb and Alibaba. In each case, the company took on the role of a ‘two-sided marketplace’, facilitating selling on the supply side and buying on the demand side to enable the exchange of goods or services. The value of such a platform depends, in large part, on the number of users on either side of the exchange. That is, the more people that use the same platform, the more inherently attractive the platform becomes, leading even more people to use it.

Consider for a moment any dating site or app (from OkCupid to Tinder to Match.com). Men are drawn towards them because they promise a huge supply of women and the high likelihood of a good match, and vice versa. Because of this network effect, users are willing to pay more for access to a bigger network, and so a company’s profits improve when its user base grows. Scale begets scale. But beyond that, product differentiation remains elusive. Think Uber versus Lyft, or iMessage versus WhatsApp. Platforms often look alike, and competition is reduced to a game of ‘grow fast or die’.
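The arithmetic behind ‘scale begets scale’ can be made concrete with a small illustration. The article gives no formula, so the sketch below rests on a common stylized assumption: that a two-sided platform’s value tracks the number of possible matches between its two sides. The function name and the user counts are invented purely for illustration.

```python
# Illustrative only: a stylized proxy for two-sided platform value,
# assuming value tracks the number of possible buyer-seller matches.

def potential_matches(buyers: int, sellers: int) -> int:
    """Every buyer could, in principle, be matched with every seller."""
    return buyers * sellers

for buyers, sellers in [(100, 100), (200, 200), (400, 400)]:
    print(f"{buyers} buyers x {sellers} sellers -> "
          f"{potential_matches(buyers, sellers):,} potential matches")
```

Under this assumption, doubling both sides quadruples the number of potential matches, which is why each new user makes the platform more attractive to the next one.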

This is why Facebook is so obsessed with growth. It is also why, when Snapchat went public in March 2017, the number of daily active users became the single most important metric for potential investors. The more people that hang out on Facebook or Snapchat, reading news and playing games, the more willing big brands such as Coca-Cola, Procter & Gamble and Nike are to buy ads there. Only when a platform reaches a certain size does its dominance become hard to unseat.


The Second Machine Age

In my executive classes, managers often express grave concern about how fast artificial intelligence is unfolding: so fast that they become afraid of committing to any one supplier or standard, since there might be a better solution tomorrow. But precisely because we are living in a world of accelerated change, as far as machine intelligence is concerned, it is critical to stay in the know.

One radical improvement in recent years is how machines learn. Back when Watson was trained to serve as a bionic oncologist, it was necessary to ingest some 600,000 pieces of medical evidence and two million pages of text from 42 medical journals, 25,000 test-case scenarios and 1,500 real-life cases, so that Watson would know how to extract and interpret physicians’ notes, lab results and clinical research. Conducting this case-based training for a brainy machine can be thoroughly exhausting and time-consuming.

At MSK, a dedicated team spent more than a year developing training materials for Watson, and a large part of this so-called training came down to a daily, laborious grind of data cleaning, program fine-tuning and result validation: tasks that are sometimes excruciating, often boring and altogether mundane.

“If you’re teaching a self-driving car, anyone can label a tree or a sign so the system can learn to recognize it,” explained Thomas Fuchs, a computational pathologist at MSK. “But in a specialized domain within medicine, you need experts trained for decades to properly label the information you feed to the computer.” Wouldn’t it be nice if machines could teach themselves? Could machine learning become an unsupervised activity?

Google’s AlphaGo demonstrates that an unsupervised process is indeed possible. Before AlphaGo played the board game Go against humans, Google researchers had been developing it to play video games: Space Invaders, Breakout, Pong and others. Without the need for any specific programming, the general-purpose algorithm was able to master each game by trial and error, pressing different buttons randomly at first and then adjusting to maximize rewards. Game after game, the software proved to be cunningly versatile in figuring out an appropriate strategy and then applying it without making any mistakes. AlphaGo thus represents not just a machine that can think, as Watson does, but also one that learns and strategizes, all without direct supervision from any human.

This general-purpose programming is made possible thanks to a ‘deep neural network’: a network of hardware and software that mimics the web of neurons in the human brain. ‘Reinforcement learning’ in humans occurs when positive feedback triggers the production of the neurotransmitter dopamine as a reward signal for the brain, resulting in feelings of gratification and pleasure. Computers can be programmed to work similarly. The positive rewards come in the form of scores when the algorithm achieves a desired outcome. Under this general framework, AlphaGo writes its own instructions randomly through many generations of trial and error, replacing lower-scoring strategies with higher-scoring ones. That’s how an algorithm teaches itself to do anything, not just play Go.
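To make that trial-and-error loop concrete, here is a minimal sketch. It assumes a toy game in which a strategy is simply a fixed sequence of moves and the reward is how many moves the game accepts; it is closer to simple hill climbing than to AlphaGo’s deep neural networks, and every name in it is invented for illustration. What it does show is the core mechanism the paragraph describes: score a randomly tweaked strategy and replace the lower-scoring one.

```python
import random

# Toy game (not AlphaGo): the "winning line" stands in for the environment,
# and a strategy earns one reward point per move the game accepts.
ACTIONS = ["up", "down", "left", "right"]
HIDDEN_WINNING_LINE = [random.choice(ACTIONS) for _ in range(10)]

def reward(strategy):
    """Score a strategy against the hidden winning sequence."""
    return sum(move == target for move, target in zip(strategy, HIDDEN_WINNING_LINE))

def mutate(strategy):
    """Produce a tweaked copy: change one randomly chosen move."""
    child = strategy[:]
    child[random.randrange(len(child))] = random.choice(ACTIONS)
    return child

# Start from random button presses, then keep whichever variant scores higher.
best = [random.choice(ACTIONS) for _ in range(10)]
for generation in range(500):
    challenger = mutate(best)
    if reward(challenger) >= reward(best):
        best = challenger  # the lower-scoring strategy is replaced

print(reward(best), "out of", len(HIDDEN_WINNING_LINE))
```

AlphaGo applies the same replace-what-scores-lower principle, but with deep neural networks evaluating positions and millions of self-played games supplying the scores.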

This conceptual design is not new; computer scientists have discussed reinforcement learning for more than 20 years. But only with rapid advances in, and an abundance of, computing power has deep learning become practical. By forgoing software coding with direct rules and commands, reinforcement learning has made autonomous machines a reality.

Most remarkable about AlphaGo is that the algorithm continually improves its performance by playing millions of games against a tweaked version of itself. A human creator is no longer needed, nor is one able to tell how the algorithm chooses to achieve a stated goal: We can see the data go in and the actions come out, but we can’t grasp what happens in between. Simply put, a human programmer can’t explain a machine’s behaviour by reading the software code any more than a neuroscientist can explain your hot dog craving by staring at an MRI scan of your brain. What we have created is a black box, all-knowing but impenetrable.

Elon Musk, founder of Tesla, once posted a stirring comment on social media, saying that AI could be “potentially more dangerous than nukes” and likening it to “summoning the demon.” Musk’s conviction has prompted him to donate millions to the AI research non-profit OpenAI, and he is urging other billionaire techies, including Facebook’s Mark Zuckerberg and Google’s Larry Page, to proceed with caution in their myriad machine-learning experiments. Apple co-founder Steve Wozniak has expressed equally grave concerns: “The future is scary and very bad for people,” he argued. “Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on?”

While such disconsolate forecasts may be exaggerated, few can deny that, in our ceaseless march towards the age of machine automation, self-taught algorithms will play a far bigger role in organizing our economic activities. What will happen when the ubiquitous connectivity of sensors and mobile devices converges with such AI as AlphaGo or IBM Watson? Could a bevy of general-purpose, self-taught algorithms govern the world’s economic transactions?

“The incredible thing that’s going to happen next is the ability for artificial intelligence to write artificial intelligence by itself,” said Jensen Huang, co-founder and CEO of Nvidia, whose graphics processing units (GPUs) crunch the complex calculations necessary for deep learning. It has been this speedy number crunching that has enabled computers to see, hear, understand and learn. “In the future, companies will have an AI that is watching every single transaction and business process, all day long,” Huang asserted. “As a result of this observation, the artificial intelligence software will write an artificial intelligence software to automate that business process. We won’t be able to do it; it’s too complicated.”

That future isn’t far off. For years, GE has been working on analytics to improve the productivity of its jet engines, wind turbines and locomotives, leveraging the continuous stream of data it collects in the field. Elsewhere, Cisco has set out the ambition of transferring data of all kinds into the cloud in what it calls the Internet of Everything; and tech giants including Microsoft, Google, IBM and Amazon are making their internally developed machine learning technologies available to client companies via application programming interfaces (APIs). These machine intelligences, which previously cost millions if not tens of millions to develop, have now essentially become reusable by third parties at negligible cost, which will only spur industry adoption at a wider scale.
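In practice, consuming such a service usually amounts to a single authenticated web request. The sketch below is a hypothetical example only: the endpoint URL, the API key header and the response fields are placeholders, not the actual interface of Microsoft, Google, IBM or Amazon.

```python
import requests  # third-party HTTP client: pip install requests

# Hypothetical example of consuming machine intelligence as a service.
# The URL, credentials and response shape are placeholders for illustration.
API_URL = "https://api.example-ml-vendor.com/v1/classify"
API_KEY = "replace-with-your-key"

def classify_document(text: str) -> dict:
    """Send raw text to a hosted model and return its predicted labels."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"document": text},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()  # e.g. {"labels": [...], "confidence": [...]}

if __name__ == "__main__":
    print(classify_document("Quarterly revenue rose 12% on strong cloud demand."))
```

The heavy lifting of training and serving the model stays on the provider’s side; the client company simply pays per call.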

In Closing

With unsupervised algorithms quietly performing the instantaneous adjustment, automatic optimization and continuous improvement of ever more complex systems, transaction costs between organizations are poised to drop dramatically, if not disappear entirely. For this reason, redundancy in production facilities should be radically reduced, and the enormous waste that is so prevalent in the global supply chain today should vanish.

Once the coordination of business transactions within and outside an organization speeds up, from sales to engineering, from logistics to business operations, from finance to customer service, friction between companies will drop and, consequently, broader market collaboration can be realized. In an economy where transaction costs approach zero, traditional propositions such as ‘one-stop shop’ or ‘supply chain optimization’ will no longer be differentiating. These propositions will become commonplace, achievable by even the smallest players or new entrants in all industries.

This is akin to the cheap and powerful cloud computing upon which Netflix, Airbnb and Yelp depend. Until very recently, any Internet business needed to own and build expensive servers and resource-intensive data centres. But with Amazon Web Services (AWS) or Microsoft Azure, a start-up can store all of its online infrastructure in the cloud; it can also rent features and tools that live in the cloud, essentially outsourcing all of its computing chores to others. There is no need to forecast demand or plan capacity: a company simply buys additional services as requirements grow. The engineering team of a start-up is therefore freed up to focus on solving problems that are unique to its core business.

Similarly, when fewer resources are required for organizational coordination, being big can only slow things down. No longer will it be credible for big companies to claim conventional advantages by virtue of their being ‘vertically integrated’ (an arrangement in which the companies own and control their supply chains). Instead, they will be under tremendous pressure to match smaller players that are able to specialize in best-in-class services and deliver customized solutions in real time as orders are made. In other words, in the second machine age, big companies need to act small.

Howard Yu is the LEGO Professor of Management and Innovation at IMD business school in Switzerland and the author of LEAP: How to Thrive in a World Where Everything Can Be Copied (PublicAffairs, 2018). He appeared on the Thinkers50 Radar list of 30 management thinkers ‘most likely to shape the future of how organizations are managed and led.’

This article has been excerpted from Leap: How to Thrive in a World Where Everything Can Be Copied by Howard Yu. Copyright © 2018. Available from PublicAffairs, an imprint of Perseus Books, LLC, a subsidiary of Hachette Book Group, Inc.

