Rotman Management Magazine

Exploring the Impact of AI

The Creative Destruction Lab’s Chief Economist, Joshua Gans, says AI works best when the objective is obvious. When it is difficult to describe, there is still no substitute for human judgment.

- An interview with Joshua Gans by Karen Christensen

Recent progress in machine learning has significantly advanced the field of AI. Please describe the current environment and where you see it heading.

In the past decade, artificial intelligence has advanced markedly. With advances in machine learning — particularly ‘deep learning’ and ‘reinforcement learning’ — AI has conquered image recognition, language translation and games such as Go. Of course, this raises the usual questions about the impact of such technologies on human productivity. People want to know: will AI mostly substitute for or complement humans in the workforce?

In a recent paper, my colleagues and I present a simple model to address precisely what new advances in AI have generated in a technological sense, and we apply it to task production. In so doing, we are able to provide some insight on the ‘substitute vs. complement’ question, as well as where the dividing line between human and machine performance for cognitive tasks might lie.

At the core of your work is a belief that recent developments in AI constitute advances in prediction. Please explain.

Prediction occurs when you use information that you have to produce information that you do not have — for instance, using past weather data to predict tomorrow’s weather, or using past images classified with labels to predict the labels that apply to the image you are currently looking at. Importantly, this is all machine learning does: it does not establish causal relationships, and it must be used with care in the face of uncertainty and limited data.
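
In rough Python terms, prediction in this sense looks something like the following minimal sketch. The toy weather data, the feature choices and the use of scikit-learn’s LogisticRegression are illustrative assumptions, not anything from Gans’s work; any supervised learner would do in its place.

```python
# A minimal sketch of prediction in Gans's sense: using information we have
# (past readings and their known outcomes) to produce information we do not
# have (whether it rains tomorrow). All numbers are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Information we have: past observations (temperature, pressure) with labels.
past_features = [[21.0, 1012], [18.5, 1004], [25.0, 1018], [16.0, 998]]
past_rained   = [0, 1, 0, 1]  # 1 = it rained the next day

model = LogisticRegression().fit(past_features, past_rained)

# Information we do not have: tomorrow's outcome, predicted from today's reading.
# Note: the model captures correlation in past data; it establishes no causal link.
today = [[19.0, 1006]]
print("P(rain tomorrow) =", model.predict_proba(today)[0][1])
```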

In an economic sense, if we were to model the impact of AI, the starting point would be a dramatic fall in the cost of providing quality predictions. As might be expected, having better predictions leads to better and more nuanced decisions. In terms of organizations embracing AI, there has been a lot of activity and discussion — along with a lot of hype. The major tech companies — Apple, Google, Facebook — have been implementing AI in their products for a few years now, and they continue to roll it out. For the rest of us, not much has happened yet — but there is huge simmering potential and opportunity. Over the next decade, I believe we will see a lot of activity, but we are still at the very earliest stages of this.

You have said that AI is really good at some things, and not at all good at others. Please explain.

People needn’t worry: artificial intelligence is not about replacing human cognition. As indicated, AI really only ‘does’ one aspect of intelligence, and that is prediction. The complexity of AI lies in its algorithmic coding, not so much in its results. Basically, AI provides us with the ability to make use of the torrents of big data flowing into today’s organizations by using complex arithmetic to ‘crunch’ the data and make predictions from the patterns that emerge from it. As it advances, we’ll be able to input even more data, and AI’s breadth of understanding and ability to learn from data will increase. But it is important to remember that AI is always restricted by what it knows.

Having said that, AI is often able to nail a prediction problem in ways that humans cannot. For example, it can now identify the content of images so quickly that it can use your smartphone’s camera to confirm that it is really you turning on the phone before unlocking it; it can take a string of words in French and translate them into English at speeds human translators could never hope to achieve; and it can take long, complex legal documents and identify sensitive information — a task that might take a paralegal hundreds of hours. All of this is great news for organizations, but it’s also all the news — because, as indicated, that is all AI does.

The challenge for leaders is to figure out, ‘What uncertainty can AI take away for us?’ Can it address something that is really important to the decisions you make, or would it only provide something that is ‘nice to know’ but not essential? For example, a fortune teller does you no good by telling you what will happen next week if there is nothing you can do about it.

Despite all of these advances, you believe humans still have some very important advantages over machines. Please explain.

Humans possess three types of ‘data’ that machines never will. First, we have our five senses, which are very powerful. In many ways, human eyes, ears and noses still surpass machine capabilities. Second, humans are the ultimate arbiters of our own preferences. Consumer data is extremely valuable because it gives prediction machines data about those preferences. Third, privacy concerns restrict the data available to machines. For as long as enough people keep their financial situations, health status and thoughts to themselves, the prediction machines will have insufficient data to predict many types of behaviour. As such, our understanding of other humans will always demand judgment skills that machines cannot learn.

In a recent paper you looked at precisely which types of human labour will be substitutes versus complements to emerging technologies. Please summarize your key findings.

For one thing, we believe that humans still have a considerable edge over machines at dealing with ambiguity. AI is good at making predictions in cases where there are ‘known unknowns’ — things we admit we don’t know — but it is no good at all where there are ‘unknown unknowns’ (unforeseeable conditions), and it can be sent down the wrong track entirely if there are ‘unknown knowns’ involved (things that are known but whose significance is not properly appreciated).

Also, while AI will continue to grow in scope, in the coming years it is unlikely to be able to make value judgments or predict anything with data that is not clearly and logically linked to the core data set (the ‘known knowns’). Here’s an example from daily life: London taxi drivers have to pass a rigorous test on the best routes around the city before getting their licence. Not surprisingly, they have been significantly impacted by the arrival of Uber drivers who rely on AI-driven GPS mapping. However, if you get into a London cab and say, ‘Take me to that hotel near Madame Tussauds, where Justin Timberlake stayed last week’, the Uber driver’s GPS won’t be able to help you — but the cabbie just might. As leaders scan the horizon for threats and opportunities, it is very important to have a solid appreciation for what AI can and cannot do.

Talk a bit about how reliant prediction machines are on good data.

The current generation of AI technology is called ‘machine learning’ for a reason: These machines learn from data, and more and better data leads to better predictions. But data can be costly to acquire, and thus, investing in it involves a trade-off between the benefit of more data and the cost of acquiring it.

To make the right data-investment decisions, leaders must consider the three ways in which prediction machines use data: training data is used to generate an algorithm in the first place; input data is fed to the algorithm and used to produce a prediction; and feedback data is used to improve the algorithm’s performance over time, as it ‘learns’. How many different types of data does your company need? How frequently do you need to collect it? These are just some of the questions every leader should be asking. It is critical to balance the cost of data acquisition with the benefit of enhanced prediction accuracy.
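
As a rough illustration of how those three data roles differ, consider the sketch below. The PredictionMachine class, its nearest-example rule and the sample data are all invented for illustration; they are not any particular library’s API.

```python
# A sketch of the three data roles: training data builds the algorithm,
# input data is what it predicts from, and feedback data improves it over time.
class PredictionMachine:
    def __init__(self):
        self.examples = []  # (features, outcome) pairs seen so far

    def train(self, training_data):
        """Training data: used to generate the algorithm in the first place."""
        self.examples.extend(training_data)

    def predict(self, input_data):
        """Input data: fed to the algorithm to produce a prediction."""
        # Placeholder rule: return the outcome of the nearest past example.
        nearest = min(self.examples,
                      key=lambda ex: sum((a - b) ** 2
                                         for a, b in zip(ex[0], input_data)))
        return nearest[1]

    def feedback(self, input_data, actual_outcome):
        """Feedback data: the realized outcome, folded back in so the
        algorithm 'learns' and improves over time."""
        self.examples.append((input_data, actual_outcome))

machine = PredictionMachine()
machine.train([((1.0, 2.0), "buy"), ((4.0, 0.5), "pass")])  # training data
guess = machine.predict((1.2, 1.8))                         # input data
machine.feedback((1.2, 1.8), "buy")                         # feedback data
```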

Tell us a bit more about the role of human judgment with respect to AI.

As indicated, prediction machines cannot provide judgment; only humans can do that, because only we can express the relative rewards from taking different actions. Many decisions today are complex and rely on inputs that are not easily codified, and judgment is one of them. Whereas prediction involves ‘information regarding the expected state of the world that can be easily described’, judgment relies on factors that are indescribable and more qualitative in nature — like emotions and experience. Figuring out the relative payoffs for different actions in different situations takes time, effort and experimentation, none of which can be codified.

Objectives in today’s world are rarely one-dimensional. Humans have their own inner knowledge of why they are doing something and why they give different weights to various elements of it; all of that is subjective. As AI takes over prediction, we believe humans will do less of the combined prediction-judgment routine of decision-making and focus more on the judgment role alone. As indicated, AI works best when the objective is obvious. When the objective is complex and difficult to describe, there is no substitute for human judgment.

You and your colleagues also looked at prediction’s effect on decision-making. Please describe it.

We assumed that two actions can be taken by a decision-maker in any situation: a safe action and a risky action. The safe action will generate an expected (and predictable) payoff, while the risky action’s payoff depends on the state of the world. If the state of the world is good, the payoff will be X; if it is bad, the payoff will be Y. Which action should be taken depends on the decision-maker’s prediction of how likely the good state is to occur rather than the bad one. As prediction becomes better and better, decision-makers will be more likely to choose riskier actions.
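
In code, that decision rule comes down to a one-line comparison of expected payoffs; the sketch below uses invented numbers purely for illustration, not figures from the paper.

```python
# S is the safe payoff; X and Y are the risky payoffs in the good and bad
# states; p is the prediction machine's probability that the good state occurs.
def choose_action(p, S, X, Y):
    """Take the risky action when its expected payoff beats the safe one."""
    expected_risky = p * X + (1 - p) * Y
    return "risky" if expected_risky > S else "safe"

S, X, Y = 10, 30, -5
print(choose_action(p=0.3, S=S, X=X, Y=Y))  # safe:  0.3*30 + 0.7*(-5) = 5.5 < 10
print(choose_action(p=0.6, S=S, X=X, Y=Y))  # risky: 0.6*30 + 0.4*(-5) = 16  > 10
```

Better prediction means p is estimated more accurately, so the risky action gets taken exactly when the good state really is likely — which is why sharper prediction tilts decision-makers toward riskier actions.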

So, we will all be making more decisions — and riskier decisions — over time?

Yes, because as decisions become more complex and we get more help with the parts of them that involve prediction, the things we make judgments about can increase in complexity. As a result, the average person is going to be making different types of decisions in different contexts, and making them more often. No matter what type of work we do, we only have so much time to make decisions, and from that perspective, it can work well to have a machine make more decisions to free us up to do other things. In general, we will see a greater variety of decisions and actions being taken.

You have studied the workplace ramifications of all this. Tell us how it will affect, say, an HR manager.

If you think about it, making good predictions is the core of a good HR manager’s job. These managers must predict whether a candidate’s CV makes them worth interviewing and whether, based on the interview, the candidate is appropriate for the job, amongst many other things. While a job that involves hiring people seems as though it demands human intuition, objective statistics have actually proven to be more effective. In a study across 15 low-skilled service firms, my Rotman colleague Mitch Hoffman, along with Lisa Kahn and Danielle Li, found that firms using an objective and verifiable test alongside classic interviews gained a 15 per cent jump in the tenure of hired candidates, relative to those using interviews alone.

As indicated, good predictions feed off of good data, and in the realm of HR, much of the required data is available. Based on it, increasingly complex algorithms will be generated to help HR departments with their predictions — which could reduce bias and errors and save lots of time in evaluating people. AI will almost certainly impact HR jobs, along with many others. But there is good news, too: as jobs transform to accommodate new technology, the real human element behind them will be exposed. It may well be, for instance, that a human face will still be required to deliver hiring or firing news — even if that news is machine-generated.

What does it mean when a company like Google or Microsoft says it is ‘AI first’?

My economist’s lens tells me that any statement of ‘we will put our attention into X’ involves a trade-off: something will always have to be given up in exchange. Adopting an AI-first strategy is a commitment to prioritize prediction quality and to support the machine-learning process — even at the cost of short-term factors such as consumer satisfaction and operational performance. That’s because gathering data might mean deploying AIs whose prediction quality is not yet at optimal levels. The central strategic dilemma for all companies is whether to prioritize that learning or shield customers from the performance sacrifices that it entails.

Consider a new AI version of an existing product. To develop the product, you need users, and the first users will likely have a poor experience, because the AI needs to learn. A company with a solid customer base could have some of those customers use the AI version of the product and produce training data; however, if those customers are happy with the existing product, they may not be willing to tolerate a switch to a temporarily inferior AI product.
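
One stylized way to picture that dilemma is as an explore/exploit choice over which customers see the still-learning AI version. The simulation below, and every number in it, is invented for illustration; it is not from the interview or the book.

```python
# Each customer gets either the proven product or the AI version, which starts
# out worse but improves with every exposure that generates training data.
import random

random.seed(0)
proven_quality = 0.8    # satisfaction rate of the existing product
ai_quality = 0.5        # the AI version starts out inferior...
learning_rate = 0.002   # ...but improves with each customer who uses it
explore_share = 0.2     # fraction of customers routed to the AI version

satisfied = 0
for customer in range(10_000):
    if random.random() < explore_share:   # explore: accept worse service to learn
        satisfied += random.random() < ai_quality
        ai_quality = min(0.95, ai_quality + learning_rate)
    else:                                  # exploit: protect the customer experience
        satisfied += random.random() < proven_quality

print(f"final AI quality: {ai_quality:.2f}, satisfaction: {satisfied / 10_000:.2%}")
```

Raising explore_share speeds up the AI’s learning but depresses satisfaction today — the learning-versus-performance trade-off described above.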

This is the classic ‘innovator’s dilemma’ that Harvard Professor Clayton Christensen wrote about, whereby established firms do not want to disrupt their existing customer relationships, even if doing so would be better for them in the long run. AI requires a lot of learning, and a start-up may be more willing to invest in that than its more established rivals.

Due to all sorts of biases, human judgment is deeply flawed. Will AI lead to better decisions, overall?

As humans, our prediction rates are very low, for all sorts of reasons related to an endless list of unconscious biases. Maybe AI will make our decisions better — but remember, that means someone has to define and teach the AI what ‘better’ means. Since we’re so bad at working that out, it’s going to be interesting; but I’m definitely optimistic. With better predictions come more opportunities to consider the rewards of various actions — in other words, more opportunities for judgment. Better, faster and cheaper prediction will give the average human more important decisions to make.

Joshua Gans is the Jeffrey S. Skoll Chair of Technical Innovation and Entrepreneurship, Professor of Strategic Management and Chief Economist of the Creative Destruction Lab at the Rotman School of Management. He is the co-author, along with Rotman Professors Ajay Agrawal and Avi Goldfarb, of Prediction Machines: The Simple Economics of Artificial Intelligence (Harvard Business Review Press, 2018). He blogs at https://digitopoly.org
