INNOVATION

How far are we from the launch of a functioning strategy machine, asks Suvi Nenonen.

Meet your next strategist: Artificial Intelligence. By Suvi Nenonen.

LEARNING MACHINES HAVE gained noteworthy victories in the last two years. In 2015, IBM's Watson corrected an erroneous cancer diagnosis of a Japanese patient, cross-referencing the world's entire stock of oncology knowledge against the patient's genetic data – in less than 10 minutes.

In 2016, Google's AlphaGo artificial intelligence beat the reigning human grandmaster in the ancient game of Go – often referred to as the most strategic and difficult game ever invented – using moves that nobody had taught the machine.

Processing vast quantities of unstructured data to solve difficult problems? Anticipating opponents' reactions and coming up with creative solutions? Sounds a lot like strategy work, which raises the question: how far are we from a functioning strategy machine?

Current artificial intelligence (AI) applications are examples of so-called ‘weak AI': solving a narrow, predefined problem by analysing a predefined set of data. Even under these limitations, weak AI can outperform humans by doing what computers do best: making a vast number of calculations in the blink of an eye, free from cognitive biases and without ever getting tired or jaded.

In a strategy context, there are plenty of tasks fit for current AI applications: recognising patterns in customer or competitor behaviour, predicting future raw material prices, calculating the probabilities of various future scenarios, and developing bias-free implementation plans – to name a few.

TRADITIONALLY “HUMAN” ASPECTS OF STRATEGY CAN BE AUTOMATED

Not surprisingly, researchers and consultants are recommending that such lower-order strategic tasks should be delegated to learning machines as soon as they become more widely available.

However, the same experts convey a reassuring message for all strategists worried about their livelihoods: higher-order strategic tasks, such as defining organisational objectives, coaching people, or reframing problems in a creative manner, remain firmly in human hands – now and in the future.

However, this proposed division of labour between human strategists and their computer counterparts is likely to become obsolete as soon as someone develops so-called ‘strong AI'. Such advanced AI can think creatively and in abstract terms, and thus it can define questions worth answering – and select the most appropriate information sources to go with each question.

This kind of super-intelligence would make the domain of human strategists very small indeed – but that would not necessarily be a bad thing. After all, human strategists have some widely acknowledged weaknesses: M&A deals can destroy shareholder value, new product launches can fail, and most employees cannot even remember their organisations' strategies.

PROBLEMATIC BUSINESS MODEL

However, human strategists may be needed for longer than technology optimists suggest – and this is also likely to apply to the lower-order tasks that should be easy to automate.

What is lacking from the current “strategy machine” discourse is the realisation that creating an AI to solve commercial problems is somewhat different from harnessing learning computers to improve the medical treatment of patients.

Healing patients as fast as possible is in the vested interest of all stakeholders, and one oncology AI could be enough for the entire world. However, the situation is markedly different when thinking about strategy machines. Would you trust the advice given by an AI if you knew that your competitor was also using the same machine?

This need for unique strategies and competitive advantage is likely to discourage IBM from teaching Watson the basics of strategy – there just wouldn't be enough customers interested in potentially “me-too” strategies.

Large consulting companies are already investing heavily in their own software capabilities, so most likely we will see them bringing forth the first strategy AIs.

However, developing several competing strategy algorithms instead of one will inevitably spread the development resources more thinly – and thus slow progress down.

So, no need to worry about your strategy job in 2018. But it might make sense to keep a close eye on the AI development front, regardless of how “non-tech” your sector or background is.

Associate Professor Suvi Nenonen works at the University of Auckland Business School's Graduate School of Management and teaches in the MBA programmes. Her research focuses on business model innovation and market innovation.
