From Intuition to Algorithm: How to Leverage Machine Intelligence
In our march towards the age of machine automation, self-taught algorithms will play an increasing role in organizing our economic activities.
IN FEBRUARY 2011, IBM made a deep impression on the American public when its supercomputer Watson beat human contestants in the popular game show Jeopardy! About 15 million viewers watched live as Watson triumphed over former champions Ken Jennings and Brad Rutter. It was an episode that made clear in the public mind that machine learning could go beyond the single-minded focus of number crunching.
At the end of the two-day Jeopardy! tournament, Watson had amassed $77,147 in prize money — more than three times the amount its human opponents had accumulated. Jennings, who had won more than 50 straight matches previously, came in second, just ahead of Rutter. “Just as factory jobs were eliminated in the 20th century by new assembly-line robots, Brad and I were the first knowledge-industry workers put out of work by the new generation of ‘thinking machines’,” said Jennings at the time.
Watson represented a machine that no longer blindly followed instructions. The machine could digest unstructured data in the form of human language and then make judgments on its own, which in turn has profoundly changed the way businesses value managerial expertise. One financial service executive put it succinctly:
“Consider a human who can read essentially an unlimited number of [financial] documents and understand those documents and completely retain all the information. Now imagine you can ask that person a question: ‘Which company is most likely to get acquired in the next three months?’ That’s essentially what [Watson] gives you.”
Wise Counsel in the Making
Every day, medical journals publish new treatments and discoveries. On average, the torrent of medical information doubles every five years. Yet given the work pressure in most hospitals, physicians rarely have enough time to read: staying fully informed would take a primary care doctor dozens of hours each week. Eighty-one per cent of physicians report that they can spend no more than five hours per month poring over journals. Not surprisingly, only about 20 per cent of the knowledge that clinicians use is evidence-based. The sheer
amount of new knowledge has overwhelmed the very limits of the human brain and, thus, rendered expert intuition — once powerful machinery — powerless.
David Kerr, director of corporate strategy at IBM, recalled how Patricia Skarulis, the chief information officer at Memorial Sloan Kettering Cancer Centre (MSK), reached out to him. “Shortly after she watched the Watson computer defeat two past grand-champions on Jeopardy!, she called to tell me that MSK had collected more than a decade’s worth of digitized information about cancer, including treatments and outcomes,” Kerr said in an interview. “She thought maybe Watson could help.”
As the world’s largest and oldest dedicated cancer hospital, MSK had maintained a proprietary database of 1.2 million in-patient and out-patient diagnoses and clinical treatment records from the previous 20-plus years. The vast database also contained the full molecular and genomic analyses of all lung cancer patients. But unlike lab researchers, hospital doctors routinely make life-or-death decisions based on hunches. A doctor has no time to go home and think over the results of all the medical tests given to a patient; treatment needs to be decided on the spot. Unless there is an intelligent system to mine for insights and make them instantaneously available to doctors, the deluge of information won’t improve their ability to make the right call.
In March 2012, MSK and IBM Watson started working together with the intention of creating an application that would provide recommendations to oncologists who simply described a patient’s symptoms in plainspoken English. When an oncologist entered information, such as ‘my patient has blood in his phlegm’, Watson would come back within half a minute with a drug regimen to suit that individual. “Watson is a tool that processes information, fills the gap of human thoughts. [It] doesn’t make the decision for you, that is the realm of the clinician, but it brings you the information that you would want anyway,” said Dr. Martin Kohn, chief medical scientist at IBM Research.
For Patricia at MSK, the real aim was to build an intelligence engine to provide specific diagnostic test and treatment recommendations. More than a search engine on steroids, it would transfer the wisdom of experienced doctors to those with less experience. A physician at a remote medical centre in China or India, for instance, could have instant access to everything that the best cancer doctors had already taught Watson. And if MSK’s ultimate mission as a non-profit is to spread its influence to deliver cutting-edge healthcare around the world, an expert system like IBM Watson is the essential carrier.
In early 2017, a 327-bed hospital in Jupiter, Florida, signed up for Watson Health with the precise intention of taking advantage of the supercomputer’s ability to match cancer patients with the treatments most likely to help them. Since a machine never gets tired of reading, understanding and summarizing, doctors can take advantage of all the knowledge that’s out there. WellPoint has claimed that, according to tests, Watson’s successful diagnosis rate for lung cancer is 90 per cent, compared to 50 per cent for human doctors.
For most executives, these technologies still feel foreign. How can an existing business, especially one in a non-IT sector, begin to leverage the shift towards knowledge automation? Among business school academics, the ‘network effect’ is a common refrain that explains the rise of Uber, Airbnb and Alibaba. In each case, the company took on the role of a ‘two-sided marketplace’, facilitating selling on the supply side and buying on the demand side to enable the exchange of goods or services. The value of such a platform depends, in large part, on the number of users on either side of the exchange. That is, the more people that use the same platform, the more inherently attractive the platform becomes — leading even more people to use it.
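This value dynamic is often modelled with Metcalfe’s law: a network’s value grows roughly with the number of possible connections between its users, not with the user count itself. A minimal sketch (the formula is the standard textbook approximation; the figures are illustrative, not real company data):

```python
def network_value(users: int) -> float:
    """Metcalfe's law: value scales with the number of possible
    pairwise connections, roughly users * (users - 1) / 2."""
    return users * (users - 1) / 2

# Doubling the user base roughly quadruples the platform's value,
# which is why 'scale begets scale' on two-sided marketplaces.
small, large = 1_000, 2_000
print(network_value(large) / network_value(small))  # just over 4
```

The superlinear payoff is what makes growth itself the prize: each new user makes the platform more attractive to every existing user.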
Consider for a moment any dating site or app (from OkCupid to Tinder to Match.com). Men are drawn towards them because they promise a huge supply of women and the high likelihood of a good match, and vice versa. Because of this network effect, users are willing to pay more for access to a bigger network, and so a company’s profits improve when its user base grows. Scale begets scale. But beyond that, product differentiation remains elusive. Think Uber versus Lyft, or iMessage versus WhatsApp. Platforms often look alike, and competition is reduced to a game of ‘grow fast or die’.
This is why Facebook is so obsessed with growth. It is also why, when Snapchat went public in March 2017, the number of daily active users became the single most important metric for potential investors. The more people that hang out on Facebook or Snapchat — reading news and playing games — the more willing big brands, such as Coca-Cola, Procter & Gamble and Nike, are to buy ads there. Only when a platform reaches a certain size does its dominance become hard to unseat.
The Second Machine Age
In my executive classes, managers often express a grave concern about how fast artificial intelligence is unfolding — so fast that they become afraid of committing to any one supplier or standard, since there might be a better solution tomorrow. But precisely because we are living in a world of accelerated change, as far as machine intelligence is concerned, it is critical to stay in the know.
One radical improvement in recent years is how machines learn. Back when Watson was trained to serve as a bionic oncologist, it had to ingest some 600,000 pieces of medical evidence and two million pages of text from 42 medical journals, 25,000 test-case scenarios and 1,500 real-life cases, so that it would know how to extract and interpret physicians’ notes, lab results and clinical research. Conducting this case-based training for a brainy machine can be thoroughly exhausting and time-consuming.
At MSK, a dedicated team spent more than a year developing training materials for Watson, and a large part of this so-called training came down to a daily, laborious grind: data cleaning, program fine-tuning and result validation — tasks that are sometimes excruciating, often boring and altogether mundane.
“If you’re teaching a self-driving car, anyone can label a tree or a sign so the system can learn to recognize it,” explained Thomas Fuchs, a computational pathologist at MSK. “But in a specialized domain within medicine, you need experts trained for decades to properly label the information you feed to the computer.” Wouldn’t it be nice if machines could teach themselves? Could machine learning become an unsupervised activity?
Google’s AlphaGo demonstrates that an unsupervised process is indeed possible. Before AlphaGo played the board game Go against humans, Google researchers had been developing it to play video games — Space Invaders, Breakout, Pong and others. Without the need for any specific programming, the general-purpose algorithm was able to master each game by trial and error — pressing different buttons randomly at first and then adjusting to maximize rewards. Game after game, the software proved to be cunningly versatile in figuring out an appropriate strategy and then applying it without making any mistakes. AlphaGo thus represents not just a machine that can think — as Watson does — but also one that learns and strategizes, all without direct supervision from any human.
This general-purpose programming is made possible thanks to a ‘deep neural network’ — a network of hardware and software that mimics the web of neurons in the human brain. ‘Reinforcement learning’ in humans occurs when positive feedback triggers the production of the neurotransmitter dopamine as a reward signal for the brain, resulting in feelings of gratification and pleasure. Computers can be programmed to work similarly. The positive rewards come in the form of scores when the algorithm achieves a desired outcome. Under this general framework, AlphaGo writes its own instructions randomly through many generations of trial and error, replacing lower-scoring strategies with higher-scoring ones. That’s how an algorithm teaches itself to do anything, not just play Go.
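The trial-and-error loop described above can be sketched in a few lines. This toy example — a two-button ‘game’ with hidden payoffs, every number purely illustrative and vastly simpler than AlphaGo’s actual algorithm — shows how random button presses plus a score signal gradually favour the higher-scoring strategy:

```python
import random

random.seed(0)

# Hidden average payoff for each of two buttons; the learner never
# sees these directly, only the scores it earns by pressing them.
true_reward = {"A": 0.2, "B": 0.8}

estimate = {"A": 0.0, "B": 0.0}   # the learner's running value estimates
counts = {"A": 0, "B": 0}

for step in range(1000):
    # Explore occasionally; otherwise press the button that has
    # scored best so far (so-called epsilon-greedy selection).
    if random.random() < 0.1:
        button = random.choice(["A", "B"])
    else:
        button = max(estimate, key=estimate.get)

    # Noisy reward: one point, with the button's hidden probability.
    reward = 1 if random.random() < true_reward[button] else 0

    # Nudge the running average for the pressed button towards
    # the observed reward — replacing lower-scoring strategies.
    counts[button] += 1
    estimate[button] += (reward - estimate[button]) / counts[button]

print(max(estimate, key=estimate.get))  # the learner settles on "B"
```

No one tells the program that button B pays better; the preference emerges purely from accumulated scores, which is the essence of learning by reinforcement.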
This conceptual design is not new; computer scientists have discussed reinforcement learning for more than 20 years. But only with rapid advancement and abundance in computing power could deep learning become practical. By forgoing software coding with direct rules and commands, reinforcement learning has made autonomous machines a reality.
Most remarkable about AlphaGo is that the algorithm continually improves its performance by playing millions of games against a tweaked version of itself. A human creator is no longer needed, nor able to tell how the algorithm chooses to achieve a stated goal: We can see the data go in and the actions come out, but we can’t grasp what happens in between. Simply put, a human programmer can’t explain a machine’s behaviour by reading the software code any more than a neuroscientist can explain your hot dog craving by staring at an MRI scan of your brain. What we have created is a black box, all-knowing but impenetrable.
Elon Musk, founder of Tesla, once posted a stirring comment on social media, saying that AI could be “potentially more dangerous than nukes” and likening it to “summoning the demon.” Musk’s conviction has prompted him to donate millions to the ethics think tank OpenAI — and he’s urging other billionaire techies, including Facebook’s Mark Zuckerberg and Google’s Larry Page, to proceed with caution in their myriad machine-learning experiments. Apple co-founder Steve Wozniak has likewise expressed grave concerns: “The future is scary and very bad for people,” he argued. “Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on?”
While such disconsolate forecasts may be exaggerated, few can deny that, in our ceaseless march towards the age of machine automation, self-taught algorithms will play a far bigger role in organizing our economic activities. What will happen when the ubiquitous connectivity of sensors and mobile devices converges with AI such as AlphaGo or IBM Watson? Could a bevy of general-purpose, self-taught algorithms govern the world’s economic transactions?
“The incredible thing that’s going to happen next is the ability for artificial intelligence to write artificial intelligence by itself,” said Jensen Huang, co-founder and CEO of Nvidia, whose graphics processing units (GPUs) crunch the complex calculations necessary for deep learning. It has been this speedy number crunching that has enabled computers to see, hear, understand and learn. “In the future, companies will have an AI that is watching every single transaction and business process, all day long,” Huang asserted. “As a result of this observation, the artificial intelligence software will write an artificial intelligence software to automate that business process. We won’t be able to do it — it’s too complicated.”
That future isn’t that far into the future. For years, GE has been working on analytics to improve the productivity of its jet engines, wind turbines and locomotives, leveraging the continuous stream of data it collects in the field. Elsewhere, Cisco has set out the ambition of transferring data of all kinds into the cloud in what it calls the Internet of Everything; and tech giants including Microsoft, Google, IBM and Amazon are making their internally developed machine learning technologies freely available to client companies via application programming interfaces (APIs). These machine intelligences — previously costing millions if not tens of millions to develop — have now become reusable by third parties at negligible cost, which will only spur industry adoption at a wider scale.
With unsupervised algorithms quietly performing the instantaneous adjustment, automatic optimization and continuous improvement of ever-more complex systems, transaction costs between organizations are poised to drop dramatically, if not disappear entirely. For this reason, redundancy in production facilities should be radically reduced and the enormous waste that is so prevalent in the global supply chain today should vanish.
Once the coordination of business transactions within and outside an organization speeds up — from sales to engineering, from logistics to business operations, from finance to customer service — friction between companies will drop and, consequently, broader market collaboration can be realized. In an economy where transaction costs approach zero, traditional propositions such as ‘one-stop shop’ or ‘supply chain optimization’ will no longer be differentiating. These propositions will become commonplace, achievable by even the smallest players or new entrants in all industries.
This is akin to the cheap and powerful cloud computing upon which Netflix, Airbnb and Yelp depend. Until very recently, any Internet business needed to own and build expensive servers and resource-intensive data centres. But with Amazon Web Services (AWS) or Microsoft Azure, a start-up can store all of its online infrastructure in the cloud; it can also rent features and tools that are in the cloud, essentially outsourcing all of its computing chores to others. No need to forecast demand or plan capacity — simply buy additional services as requirements go up. The engineering team of a start-up is therefore freed up to focus on solving problems that are unique to its core business.
Similarly, when fewer resources are required for organizational coordination, being big can only slow things down. No longer will it be credible for big companies to claim conventional advantages by virtue of their being ‘vertically integrated’ (an arrangement in which the companies own and control their supply chains). Instead, they will be under tremendous pressure to match smaller players that are able to specialize in best-in-class services and deliver customized solutions in real time as orders are made. In other words, in the second machine age, big companies need to act small.

Howard Yu is the LEGO Professor of Management and Innovation at IMD business school in Switzerland and the author of LEAP: How to Thrive in a World Where Everything Can Be Copied (PublicAffairs, 2018). He appeared on the Thinkers50 Radar list of 30 management thinkers ‘most likely to shape the future of how organizations are managed and led.’
This article has been excerpted from Leap: How to Thrive in a World Where Everything Can Be Copied by Howard Yu. Copyright © 2018. Available from PublicAffairs, an imprint of Perseus Books, LLC, a subsidiary of Hachette Book Group, Inc.