Rotman Management Magazine

Thought Leader Interview: Vinod Khosla

by Ajay Agrawal

In a 2017 essay (“AI: Scary For The Right Reasons”), you wrote that AI might improve metrics like GDP growth and productivity, but at the same time, it may worsen less visible metrics such as income disparity. Are you still concerned about that?

Even more so. Without a doubt, AI is the most important technology we have seen in a very long time. Some people even refer to it as ‘the last technology’, because it will likely be responsible for all of the technologies that follow. As such, it presents massive potential for contributing to society. Having said that, where we get to will depend on the path we take.

It’s great to talk about creative destruction if you’re the one doing the disrupting; but if you’re the one being destroyed, it isn’t much fun. Disruption is always unpleasant for someone, and in the coming years, it will take its toll on jobs. The core issue is that ‘efficiency’ in the business world generally means reducing costs, which results in replacing lower-wage, less-skilled workers with far fewer well-paid, highly skilled people. Because of this, I do worry that the machine learning revolution will lead to increasing income disparity — and that disparity beyond a certain point could lead to social unrest.

On the positive side, the jobs we all covet are jobs that we would do even if we didn’t get paid to do them, and that is the long-term potential of AI. It could eliminate the need for unsatisfying work. However, long before we get there, we will have to go through the dynamics of shifting from today’s economy into its next iteration. The path ahead will be extremely uneven, and as a result, society may push back on these technologies.

You have said that lower-skilled jobs like truck driving and food service are actually less at risk from automation than radiologists and oncologists. Please explain.

I do think some of the higher-skilled, knowledge-based jobs will be the easiest ones to replace with AI. For example, radiologists are toast. That just should not be a job anymore. I would go even further and say that any radiologist who plans to practice 10 years from now will be practising out of their own arrogance, because they will be misdiagnosing patients much more frequently than AI-driven systems. Likewise, oncology is going to be easier to automate than a factory worker’s job, because a factory worker has much more dimensionality. Just within the Khosla Ventures portfolio alone, entrepreneurs are already trying to use machine learning to replace human judgment in areas from financial services to farming, law and cardiology — and our portfolio represents just a tiny fraction of the efforts underway. With less and less need for human labour and judgment, labour will be increasingly devalued relative to capital — and even more so relative to ideas and machine learning technology.

What is the timeline for rolling out these technologies across industries?

Most people might agree that AI taking over radiology is a done deal — that already, there is nothing a radiologist can do that AI can’t, and that AI is both cheaper and more accurate. But the fact is, only about one per cent of radiologists are currently using these technologies. There is a significant time period for rollout that is important to consider in looking at all of this. My bet is that we are 10 years from AI being widely embraced in radiology, and maybe 15 years from the point where we don’t really need oncologists anymore.

Just about all specialty expertise will be provided by AI, and this is actually good news for some people. When it happens, your primary care physician will be in the best position to look after you. The average patient has seven ongoing health conditions, yet each specialist has no idea about the others, nor any contact with them. In the future, the role of a GP will be to provide integrative care. As a result, medical schools should start focusing on recruiting and training individuals with high emotional intelligence and empathy, because their future will largely involve managing patients rather than determining medical interventions.

You believe the rollout for AI will vary between industries. How long before assembly lines are entirely robotic?

I think we are five to 10 years away from being able to completely replace human workers on assembly lines. And by the way, this represents what is probably the largest market in the world. One trillion dollars would be an understatement, and there may be some important indirect effects. For example, if you build an assembly line robot that doesn’t need any programming to do a task like ‘assemble an iPhone’ — if it can just learn by watching a dozen or so examples and can then perform better than a human worker — that could result in an inversion of the supply chain. All of those manufacturing jobs that moved to China might actually come back to the West, because of course, it is more cost effective to have your manufacturing done locally.

Between assembly line robotics and 3D printing, we could see a complete inversion of the supply chain. In fact, 40 years from now, there might be no need for jobs of any type. If that happens, governments will have to turn their attention away from job creation and figure out how to help people find meaning in their lives.

How far off is the income-disparity crisis that you have warned of?

There are many variables involved — policy being one of the larger ones. But in terms of AI, I believe it will be the biggest variable 20 or 30 years out. While the future looks promising in terms of increased productivity and abundance, as indicated, the process of getting there raises all sorts of questions about the changing nature of work.

Clearly, there are also significant implications for education. I suspect that if and when software systems exceed the capability of the average — and eventually the smartest — humans in judgment and skill, the avenue of personal growth through education that has traditionally been open for career advancement may close. I gave a talk at a National Bureau of Economic Research meeting recently, and former Harvard President Larry Summers came up to me afterwards and said, “You just blew my solution for countering the effects of AI!” Most people don’t realize that education is not the solution here.

Is the solution to slow down technological change in order to preserve jobs?

Definitely not, but we do need to address the issue of income disparity. The easiest answer seems to be what economist Thomas Piketty has advocated for: some form of income redistribution. I suspect that will be a necessary component. We also need to look at our capitalist system, which is filled with arbitrary policies in favour of either labour or capital. When you allow certain partnership structures or you don’t provide tax credits, you are advantaging certain kinds of activities and disadvantaging others. We need to look at that very closely and make some changes.

For example, giving an R&D tax credit to companies favours innovation, whereas giving favourable depreciation is a bias towards capital instead of labour. Keep in mind that large corporations tend to shape most rules and regulations — at least in the U.S. — so many of these biases have been engineered into today’s economy. But the cost of labour and the cost of capital can be effectively altered by some simple changes in rules, regulations and laws. More significant manipulation will be required to achieve reasonable income disparity goals.

Social mobility is a tougher goal to engineer into society’s rules. I suspect the situation will become even more complex as traditional economic arguments of labour versus capital are upended by a new factor many economists don’t adequately credit: the economy of ideas driven by entrepreneurial energy and knowledge. This factor may become a more important driver of the economy than either labour or capital. Of course, all of this is mere speculation. The future is nearly impossible to predict.

Are there things leaders should be doing that they aren’t currently doing?

Despite all of the dramatic benefits it offers, there isn’t nearly enough investment in AI. But that is only a matter of time; it will happen. I think what would help the most right now is broad social acceptance and adoption. We need to condition people for the consequences of AI and think carefully about how to roll it out so that the most disadvantaged in society are not disproportionately affected. Indeed, the hope is that they will be positively affected.

Either way, as indicated earlier, we are going to need a version of capitalism that is focused on more than just efficient production — one that focuses on the less desirable side effects of capitalism. We need to adjust the playing field, and hopefully some of this work can be initiated in the academic world.

I mentioned income redistribution earlier, and I think universal basic income could make the adoption of new technologies much easier. At one point, Bill Gates talked about a robot tax — placing a tax on every robot adopted by an organization. In the environmental movement, as carbon reduction has become a widespread goal, people have talked about carbon taxes that disproportionately benefit the bottom quartile of society. I don’t think we are studying these kinds of mechanisms enough.

To what extent should we be worried about the influence of machine intelligence on the sphere of public opinion?

Of course, this is already happening. The 2016 U.S. Presidential election was one example, but Brexit has been the most visible example to date — because in that case, it actually changed the outcome. Cyberwarfare powered by AI is now one of the most powerful weapons for launching what traditionally would have been very visible attacks on other countries. Developing a better air force or bigger nuclear bombs had much more transparency. As a result, you could actually have agreements between nations: ‘You promise not to develop X, we promise not to develop it, and we will both be able to verify that over time’. With AI, there is no verifiability. The battle between nations can now be conducted silently and without any transparency. That is a danger that I worry about more immediately.

On an individual scale, we already have the ability to hack into any human mind if we have enough interaction with the person. This is what is happening on Facebook when somebody sells you a pair of jeans that you didn’t think you wanted. Using machine learning, the seller is reverse-engineering a narrow part of your mind, making you do something they want you to do. A simplistic view of this type of activity is, ‘We’re just selling more stuff to more people’; but the more dangerous lens is, ‘We’re getting you to behave in a specific way’. This is a real danger, and I don’t know if there is an easy solution.

In terms of creating AGI (artificial general intelligence), who do you suspect will get there first?

We were the only venture investor in OpenAI, the for-profit company whose stated aim is to promote and develop friendly AI in such a way as to benefit humanity as a whole. As an investor, I like situations where the chances of success are lower but the impact could be extremely consequential. This was the only $50 million cheque we have ever written, so clearly we believe it is one strong possibility. Vicarious is a small company working on AGI for assembly line robotics. DeepMind is clearly doing some stunning work, and there are some interesting efforts underway within Google. There’s also a lot going on in China, which has a clear national focus on winning the AGI race. I am generally optimistic that one of these technologies will win in a really big way — but I believe more than one of them will succeed in creating AI that does more and more of the economically valuable human functions that we need done.

Whose responsibility is it to protect everyday citizens from AI’s negative effects? Is it up to governments?

Like most people, I hate government solutions because they are engineered to narrowly benefit some citizens but not others. Creating effective policy is great to talk about theoretically, but it is very rare. There may be other solutions that could change the playing field. We talked about capitalism before, and it’s a philosophy we all buy into. Historically, why has capitalism been important? Because it increases economic efficiency. You just have to compare North Korea with South Korea to see the difference economic efficiency can make. Having said that, we are moving into an era where efficiency and productivity will no longer be major variables in success. The emerging form of capitalism is more about generating demand and making you want things you didn’t know you wanted than it is about producing the things that you need.

As we consider changes to the system, we need to be very careful. In complex systems in general — whether it be the global economy or software code with millions of lines — there are always holes or bugs, and as a result, such systems bring with them the possibility for unintended effects. This danger exists with AI too, but nevertheless, I am generally optimistic that we can contain the negative effects. If you think about it, every powerful technology in the history of the world could be used for either good or evil. The danger always exists, but that is not a reason to slow down progress.

You also believe that technology can — and should — help to reinvent societal infrastructure. How so?

Today, 700 million people — the top 10 per cent of the world’s population — enjoy a rich lifestyle in terms of environment, healthcare, housing, food and education. The other 90 per cent want what we have, and technology is the only way to make that a reality. We need a 10x multiplication in resource utilization — not a 10x in the number of doctors, or buildings or cars. Technology has the potential to achieve food goals, reshape cities, cure disease, mitigate climate change and enhance human capability.

The future is not knowable, but it is ‘shapeable’. I believe the amount of innovation we will see over the next 10 years will explode by 10x over what we’ve experienced. That’s because as axes of innovation increase, possibilities for solutions increase. And at the same time, the cost of experimentation is going down. That’s why I believe we are at the very beginning of a hypercycle of innovation. The tools available now can be combined in an endless number of ways to innovate: AI and data collection, 3D printing, quantum computing, robotics, social networking, genomics, dematerialization, to name just a few. When you combine medical imaging with AI, or 3D printing with new materials science, you get very different things. And this won’t be temporary: I believe we are entering a permanent hypercycle of innovation.

Any parting advice for entrepreneurs, AI-focused and otherwise?

First, keep at it, because most major disruptions are non-institutional, and the world really needs you and your ideas. Second, in my experience, the more money you raise, the less likely you are to succeed. That’s because scarcity of money forces you to think much harder about the problem you are trying to solve. If you get a lot of funding, you tend to start executing without analyzing the problem sufficiently. You have to be hyperefficient with your dollars to be more creative with the problem. And third, always remember, technology doesn’t rule: It serves. We get to decide its goals.

Vinod Khosla is a co-founder of Sun Microsystems and the founder of Khosla Ventures, which invests in technology-based businesses both for profit and social impact, including clean tech, microfinance and biomed tech. Ajay Agrawal is the Geoffrey Taber Chair in Entrepreneurship and Innovation and Professor of Strategic Management at the Rotman School of Management, where he founded the Creative Destruction Lab, a seed-stage program for scalable, science-based companies. CDL now has seven locations: Toronto, Vancouver, Calgary, Montreal, Halifax, Oxford, UK and Paris, France.
