Rotman Management Magazine

From Intuition to Algorithm: How to Leverage Machine Intelligence

By Howard Yu

In our march towards the age of machine automation, self-taught algorithms will play an increasing role in organizing our economic activities.

IN FEBRUARY 2011, IBM made a deep impression on the American public when its supercomputer Watson beat human contestants in the popular game show Jeopardy! About 15 million viewers watched live as Watson triumphed over former champions Ken Jennings and Brad Rutter. It was an episode that made clear in the public mind that machine learning could go beyond the single-minded focus of number crunching.

At the end of the two-day Jeopardy! tournament, Watson had amassed $77,147 in prize money — more than three times the amount its human opponents had accumulated. Jennings, who had previously won more than 50 straight matches, came in second, just ahead of Rutter. “Just as factory jobs were eliminated in the 20th century by new assembly-line robots, Brad and I were the first knowledge-industry workers put out of work by the new generation of ‘thinking machines’,” said Jennings at the time.

Watson represented a machine that no longer blindly followed instructions. It could digest unstructured data in the form of human language and then make judgments on its own, a capability that has profoundly changed the way businesses value managerial expertise. One financial services executive put it succinctly:

“Consider a human who can read essentially an unlimited number of [financial] documents and understand those documents and completely retain all the information. Now imagine you can ask that person a question: ‘Which company is most likely to get acquired in the next three months?’ That’s essentially what [Watson] gives you.”

Wise Counsel in the Making

Every day, medical journals publish new treatments and discoveries. On average, the torrent of medical information doubles every five years. However, given the work pressure in most hospitals, physicians rarely have enough time to read: it would take a primary care doctor dozens of hours each week to read up on everything and stay informed. Eighty-one per cent of physicians report that they can spend no more than five hours per month poring over journals. Not surprisingly, only about 20 per cent of the knowledge that clinicians use is evidence-based. The sheer amount of new knowledge has overwhelmed the very limits of the human brain and, thus, rendered expert intuition — once powerful machinery — powerless.

David Kerr, director of corporate strategy at IBM, recalled how Patricia Skarulis, the chief information officer at Memorial Sloan Kettering Cancer Center (MSK), reached out to him. “Shortly after she watched the Watson computer defeat two past grand champions on Jeopardy!, she called to tell me that MSK had collected more than a decade’s worth of digitized information about cancer, including treatments and outcomes,” Kerr said in an interview. “She thought maybe Watson could help.”

As the world’s largest and oldest dedicated cancer hospital, MSK had maintained a proprietary database of 1.2 million in-patient and out-patient diagnoses and clinical treatment records from the previous 20-plus years. The vast database also contained the full molecular and genomic analyses of all lung cancer patients. But unlike lab researchers, hospital doctors routinely make life-or-death decisions based on hunches. A doctor has no time to go home and mull over the results of all the medical tests given to a patient; treatment must be decided on the spot. Unless an intelligent system can mine that data for insights and make them instantaneously available to doctors, the deluge of information won’t improve their ability to make the right call.

In March 2012, MSK and IBM Watson began working together to create an application that would provide recommendations to oncologists who simply described a patient’s symptoms in plainspoken English. When an oncologist entered information such as ‘my patient has blood in his phlegm’, Watson would come back within half a minute with a drug regimen to suit that individual. “Watson is a tool that processes information, fills the gap of human thoughts. [It] doesn’t make the decision for you, that is the realm of the clinician, but it brings you the information that you would want anyway,” said Dr. Martin Kohn, chief medical scientist at IBM Research.

For Skarulis at MSK, the real aim was to build an intelligence engine that provides specific diagnostic-test and treatment recommendations. More than a search engine on steroids, it would transfer the wisdom of experienced doctors to those with less experience. A physician at a remote medical centre in China or India, for instance, could have instant access to everything the best cancer doctors had already taught Watson. And if MSK’s ultimate mission as a non-profit is to spread its influence and deliver cutting-edge healthcare around the world, an expert system like IBM Watson is the essential carrier.

In early 2017, a 327-bed hospital in Jupiter, Florida, signed up for Watson Health with the precise intention of using the supercomputer’s ability to match cancer patients with the treatments most likely to help them. Since a machine never tires of reading, understanding and summarizing, doctors can draw on all the knowledge that’s out there. The health insurer WellPoint has claimed that, in its tests, Watson’s successful diagnosis rate for lung cancer was 90 per cent, compared to 50 per cent for human doctors.

For most executives, these technologies still feel foreign. How can an existing business, especially one in a non-IT sector, begin to leverage the shift towards knowledge automation? Among business school academics, the ‘network effect’ is a common refrain to explain the rise of Uber, Airbnb and Alibaba. In each case, the company took on the role of a ‘two-sided marketplace’, facilitating selling on the supply side and buying on the demand side to enable the exchange of goods or services. The value of such a platform depends, in large part, on the number of users on either side of the exchange. That is, the more people who use the same platform, the more inherently attractive the platform becomes — leading even more people to use it.
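A stylized way to capture this intuition (a common convention in the economics of platforms, not a formula from this article) is to let a platform’s value grow faster than its user count:

```latex
% One-sided network (Metcalfe-style): value scales with the square of users,
% because n users can form on the order of n^2 connections.
V_{\text{one-sided}} \propto n^2

% Two-sided marketplace: value scales with the product of the two sides,
% since each added seller makes the platform better for every buyer, and vice versa.
V_{\text{two-sided}} \propto n_{\text{supply}} \times n_{\text{demand}}
```

Either way, each new user raises the platform’s value for everyone already on it, which is why scale begets scale.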

Consider for a moment any dating site or app (from OkCupid to Tinder to Match.com). Men are drawn to them because they promise a huge supply of women and a high likelihood of a good match, and vice versa. Because of this network effect, users are willing to pay more for access to a bigger network, and so a company’s profits improve as its user base grows. Scale begets scale. But beyond that, product differentiation remains elusive. Think Uber versus Lyft, or iMessage versus WhatsApp. Platforms often look alike, and competition is reduced to a game of ‘grow fast or die’.

This is why Facebook is so obsessed with growth. It is also why, when Snapchat went public in March 2017, the number of daily active users became the single most important metric for potential investors. The more people who hang out on Facebook or Snapchat — reading news and playing games — the more willing big brands, such as Coca-Cola, Procter & Gamble and Nike, are to buy ads there. Only when a platform reaches a certain size does its dominance become hard to unseat.


The Second Machine Age

In my executive classes, managers often express grave concern about how fast artificial intelligence is unfolding — so fast that they are afraid to commit to any one supplier or standard, since a better solution might emerge tomorrow. But precisely because we live in a world of accelerated change, staying in the know about machine intelligence is critical.

One radical improvement in recent years is how machines learn. Back when Watson was trained to serve as a bionic oncologist, it had to ingest some 600,000 pieces of medical evidence, two million pages of text from 42 medical journals, 25,000 test-case scenarios and 1,500 real-life cases so that it would know how to extract and interpret physicians’ notes, lab results and clinical research. Conducting this case-based training for a brainy machine can be thoroughly exhausting and time-consuming.

At MSK, a dedicated team spent more than a year developing training materials for Watson, and a large part of this so-called training came down to a daily, laborious grind: data cleaning, program fine-tuning and result validation — tasks that are sometimes excruciating, often boring and altogether mundane.

“If you’re teaching a self-driving car, anyone can label a tree or a sign so the system can learn to recognize it,” explained Thomas Fuchs, a computational pathologist at MSK. “But in a specialized domain within medicine, you need experts trained for decades to properly label the information you feed to the computer.” Wouldn’t it be nice if machines could teach themselves? Could machine learning become an unsupervised activity?

Google’s AlphaGo demonstrates that an unsupervised process is indeed possible. Before AlphaGo played the board game Go against humans, researchers at Google’s DeepMind had been developing its underlying approach on video games — Space Invaders, Breakout, Pong and others. Without any game-specific programming, the general-purpose algorithm mastered each game by trial and error — pressing different buttons randomly at first and then adjusting to maximize rewards. Game after game, the software proved cunningly versatile in figuring out an appropriate strategy and then applying it without making any mistakes. AlphaGo thus represents not just a machine that can think — as Watson does — but also one that learns and strategizes, all without direct supervision from any human.

This general-purpose programming is made possible by a ‘deep neural network’ — a network of hardware and software that mimics the web of neurons in the human brain. In humans, ‘reinforcement learning’ occurs when positive feedback triggers the production of the neurotransmitter dopamine as a reward signal for the brain, resulting in feelings of gratification and pleasure. Computers can be programmed to work similarly: the positive rewards come in the form of scores when the algorithm achieves a desired outcome. Under this general framework, AlphaGo writes its own instructions randomly through many generations of trial and error, replacing lower-scoring strategies with higher-scoring ones. That is how an algorithm teaches itself to do anything, not just play Go.
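To make that loop concrete, here is a minimal sketch in Python (illustrative only, and in no way DeepMind’s actual system) of a strategy that starts out pressing random buttons, scores itself against a reward, and keeps whichever generation scores higher. The toy ‘game’ and its reward function are invented for the example.

```python
import random

# Toy stand-in for a video game: reward the player for pressing the "right"
# button at each of 10 time steps. The solution is hidden from the learner.
ACTIONS = 3                                        # three possible buttons
TARGET = [random.randrange(ACTIONS) for _ in range(10)]

def score(policy):
    """Reward = number of steps where the policy presses the right button."""
    return sum(1 for a, t in zip(policy, TARGET) if a == t)

def mutate(policy):
    """Produce the next 'generation' by tweaking one action at random."""
    child = policy[:]
    child[random.randrange(len(child))] = random.randrange(ACTIONS)
    return child

# Start with a purely random strategy, as described above.
best = [random.randrange(ACTIONS) for _ in range(10)]
best_score = score(best)

for generation in range(1000):
    candidate = mutate(best)
    candidate_score = score(candidate)
    if candidate_score > best_score:   # replace lower-scoring strategies
        best, best_score = candidate, candidate_score

print(best_score)  # almost always reaches the maximum of 10
```

Real systems replace this simple hill-climbing search with deep neural networks and far subtler reward signals, but the principle of scoring trial behaviour and keeping whatever scores higher is the same.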

This conceptual design is not new; computer scientists have discussed reinforcement learning for more than 20 years. But only with the rapid advance and abundance of computing power has deep learning become practical. By forgoing software coded with explicit rules and commands, reinforcement learning has made autonomous machines a reality.

Most remarkable about AlphaGo is that the algorithm continually improves its performance by playing millions of games against a tweaked version of itself. A human creator is no longer needed, nor is one able to tell how the algorithm chooses to achieve a stated goal: we can see the data go in and the actions come out, but we can’t grasp what happens in between. Simply put, a human programmer can no more explain a machine’s behaviour by reading its software code than a neuroscientist can explain your hot dog craving by staring at an MRI scan of your brain. What we have created is a black box, all-knowing but impenetrable.

Elon Musk, founder of Tesla, once posted a stirring comment on social media, saying that AI could be “potentially more dangerous than nukes” and likening it to “summoning the demon.” Musk’s conviction has prompted him to donate millions to the AI research lab OpenAI, and to urge other billionaire techies, including Facebook’s Mark Zuckerberg and Google’s Larry Page, to proceed with caution in their myriad machine-learning experiments. Apple co-founder Steve Wozniak has expressed equally grave concerns: “The future is scary and very bad for people,” he argued. “Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on?”

While such disconsolate forecasts may be exaggerated, few can deny that, in our ceaseless march towards the age of machine automation, self-taught algorithms will play a far bigger role in organizing our economic activities. What will happen when the ubiquitous connectivity of sensors and mobile devices converges with such AI as AlphaGo or IBM Watson? Could a bevy of general-purpose, self-taught algorithms govern the world’s economic transactions?

“The incredible thing that’s going to happen next is the ability for artificial intelligence to write artificial intelligence by itself,” said Jensen Huang, co-founder and CEO of Nvidia, whose graphics processing units (GPUs) crunch the complex calculations necessary for deep learning. It is this speedy number crunching that has enabled computers to see, hear, understand and learn. “In the future, companies will have an AI that is watching every single transaction and business process, all day long,” Huang asserted. “As a result of this observation, the artificial intelligence software will write an artificial intelligence software to automate that business process. We won’t be able to do it — it’s too complicated.”

That future isn’t far off. For years, GE has been working on analytics to improve the productivity of its jet engines, wind turbines and locomotives, leveraging the continuous stream of data it collects in the field. Elsewhere, Cisco has set out the ambition of transferring data of all kinds into the cloud in what it calls the Internet of Everything, and tech giants including Microsoft, Google, IBM and Amazon are making their internally developed machine-learning technologies available to client companies via application programming interfaces (APIs). These machine intelligences — which previously cost millions, if not tens of millions, to develop — have essentially become reusable by third parties at negligible cost, which will only spur wider industry adoption.
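In practice, renting such intelligence can be as simple as a single web request. The sketch below is hypothetical: the endpoint, the JSON fields and the sentiment-analysis task are invented for illustration, since each vendor defines its own API, but the integration pattern (send raw data, get a model’s judgment back) is the common thread.

```python
import requests

# Hypothetical endpoint and key: each vendor (Google, Microsoft, IBM, Amazon)
# defines its own URL, authentication scheme and request format.
API_URL = "https://ml.example-vendor.com/v1/sentiment"
API_KEY = "your-api-key"

def classify_sentiment(text):
    """Send raw text to a rented machine-learning model; return its judgment."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": text},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()  # e.g. {"label": "positive", "confidence": 0.97}

print(classify_sentiment("The new engine exceeded every performance target."))
```

A client company pays per call instead of hiring its own machine-learning team, which is why intelligence that once cost millions to build can now be rented at negligible cost.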

In Closing

With unsupervised algorithms quietly performing the instantaneous adjustment, automatic optimization and continuous improvement of ever-more-complex systems, transaction costs between organizations are poised to drop dramatically, if not disappear entirely. For this reason, redundancy in production facilities should be radically reduced, and the enormous waste so prevalent in today’s global supply chain should vanish.

Once the coordination of business transactions within and outside an organization speeds up, from sales to engineering, from logistics to business operations, from finance to customer service, friction between companies will drop and, consequently, broader market collaboration can be realized. In an economy where transaction costs approach zero, traditional propositions such as ‘one-stop shop’ or ‘supply chain optimization’ will no longer be differentiating. They will become commonplace, achievable by even the smallest players and new entrants in all industries.

This is akin to the cheap, powerful cloud computing on which Netflix, Airbnb and Yelp depend. Until very recently, any Internet business needed to own and build expensive servers and resource-intensive data centres. But with Amazon Web Services (AWS) or Microsoft Azure, a start-up can store all of its online infrastructure in the cloud; it can also rent features and tools that live in the cloud, essentially outsourcing all of its computing chores to others. There is no need to forecast demand or plan capacity: the company simply buys additional services as requirements grow. A start-up’s engineering team is therefore freed to focus on solving problems that are unique to its core business.

Similarly, when fewer resources are required for organizational coordination, being big can only slow things down. No longer will it be credible for big companies to claim conventional advantages by virtue of being ‘vertically integrated’ (an arrangement in which a company owns and controls its supply chain). Instead, they will be under tremendous pressure to match smaller players able to specialize in best-in-class services and deliver customized solutions in real time as orders are made. In other words, in the second machine age, big companies need to act small.

Howard Yu is the LEGO Professor of Management and Innovation at IMD business school in Switzerland and the author of LEAP: How to Thrive in a World Where Everything Can Be Copied (PublicAffairs, 2018). He appeared on the Thinkers50 Radar list of 30 management thinkers ‘most likely to shape the future of how organizations are managed and led.’

This article has been excerpted from Leap: How to Thrive in a World Where Everything Can Be Copied by Howard Yu. Copyright © 2018. Available from PublicAffairs, an imprint of Perseus Books, LLC, a subsidiary of Hachette Book Group, Inc.


