Rotman Management Magazine

A machine learning veteran describes the quest for ‘inclusive intelligence’.

- Interview by Karen Christensen

Not everyone is on board with AI and machine learning. None other than Elon Musk has suggested that it might lead to World War III. What is your take on the situation?

You’re absolutely right that there is a huge amount of alarmism and ‘automation anxiety’ right now. Elon Musk, the late Stephen Hawking and Bill Gates have all made frightening predictions about super-intelligent machines as an existential threat to humans and jobs. These predictions are exaggerated and counterproductive. The fact is, we are very far from achieving artificial general intelligence (AGI). It’s important to be thoughtful about how we use AI, and in particular, to consider ethics, fairness and how any technology might be abused. But the idea of superintelligence coming to dominate humans is science fiction. Rest assured: Humans have many good years left.

Instead of embracing the notion that robots will eventually surpass and replace us (‘singularity’), you have introduced the concept of ‘complementarity’. Please define it.

In contrast to the fear of robots taking over and becoming superior to humans, complementarity emphasizes the positive potential for AI to complement human abilities by reducing drudgery — giving us more time to do what we do best.

One of these things is the ability to empathize. Understanding how someone is feeling is a uniquely human quality. We are nowhere close to having a machine be capable of that, because to feel empathy, by definition, you have to be able to put yourself into the position of a human being. AI and robots will never be able to do that.

Another uniquely human trait is creativity. I have yet to see any evidence that AI or robots can do anything truly creative. They may be able to assist us in creative endeavours — for instance, by helping us quickly visualize designs or look up facts, which is extremely valuable — but that doesn’t translate into a computer replacing a creative person in their job. I know that some people believe journalists, doctors, and lawyers can eventually be replaced by AI, but in my view that is not even remotely possible. The nuances of communication that these jobs require are far beyond the capabilities of AI.

In addition to job loss, many people are concerned about algorithmic bias — whereby AI algorithms cause harm to under-represented groups in society. Can you touch on that, along with your idea of ‘inclusive intelligence’?

At UC Berkeley, we are trying to build technology that is inclusive in the sense that it is inherently thoughtful about people who may be vulnerable or disadvantaged. For instance, AI is currently being considered for things like mortgage decisions and healthcare diagnostics. In both cases, there are vulnerable populations involved, so it is very important to be aware of the potential for inherent bias in the data used to train AI systems. These systems can easily be misused, because they are often treated as a ‘black box’ that generates outcomes without explanations. If you just blindly follow these systems, you can end up with very biased and unfair outcomes that can have severe consequences.

Another thing you’ve said is that ‘complementarity can also enhance diversity’. Please explain.

There is a technique in AI known as a ‘random forest’, which was developed at UC Berkeley in 2001. Random forests are still widely used and are one of the leading methods for AI and machine learning. In essence, they extend the concept of a decision tree to classify data, but instead of one tree, the idea is to generate many trees. The developers proved that a random forest is always superior to a single decision tree, as long as the trees are sufficiently diverse. This is formal proof of something that we are also seeing evidence of in the realm of human interactions: That diverse teams perform better than homogenous teams.

The rationale in that arena is that homogenous teams — even if they are made up of the smartest people in the world — have similar backgrounds and experiences, and as a result, an ‘echo chamber’ is created: Group members don’t question their assumptions, which often leads to conclusions, designs, or directions that are not as creative or innovative as they could be. As with random forests, diversity in groups is extremely valuable. It is a fundamental property of both collaboration and complementarity.
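The voting intuition behind this can be sketched with a toy model. This is not Breiman’s actual random-forest algorithm (which bootstraps the training data and randomizes feature splits); here each ‘tree’ is simply an independent classifier that is right 70 per cent of the time, an invented figure for illustration:

```python
import random

# Toy sketch: an ensemble of weak but *diverse* (independent) classifiers
# outvotes any single one of them. Each "tree" is right 70% of the time,
# and their errors are independent because each makes its own random draw.
random.seed(0)

def weak_classifier(true_label, accuracy=0.7):
    """Return the true label with probability `accuracy`, else the wrong one."""
    return true_label if random.random() < accuracy else 1 - true_label

def forest_vote(true_label, n_trees=101):
    """Majority vote over n_trees independent weak classifiers."""
    votes = sum(weak_classifier(true_label) for _ in range(n_trees))
    return 1 if votes > n_trees / 2 else 0

trials = 2000
single_acc = sum(weak_classifier(1) == 1 for _ in range(trials)) / trials
forest_acc = sum(forest_vote(1) == 1 for _ in range(trials)) / trials
print(f"single tree: {single_acc:.2f}, forest of 101: {forest_acc:.2f}")
```

The forest’s accuracy approaches 100 per cent even though no individual tree exceeds 70 per cent. The diversity condition is what does the work: if all the ‘trees’ shared the same random draw — the echo chamber — the vote would add nothing over a single classifier.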

For more than 30 years, you have been working on a very particular problem. Please describe it.

For many years now, my students and I have been studying the problem of robot grasping. This is an extremely difficult problem — even though grasping an object is extremely easy for humans. You may be holding a pen or a cell phone in your hand right now. We do these things effortlessly. We pick things up, we can hold multiple items at once, we can twirl a pen in our fingers. People don’t appreciate how difficult this is for robots.

Robots are still incredibly klutzy, and the fundamental problem here is uncertainty. This uncertainty arises from a variety of sources, but in particular, it is due to errors in sensing, errors in control, and uncertainty in the physics of grasping itself. All of this combines to make contact between a robot and an object in the environment very difficult to predict exactly. Even minuscule errors can make all the difference between a robot holding something securely and dropping it.
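A hypothetical toy model — not Goldberg’s actual research methods — can show how these error sources compound. The 3 mm tolerance and the Gaussian error magnitudes below are invented for illustration:

```python
import random

# Toy model: a parallel-jaw grasp "succeeds" if the combined error in the
# contact point stays within 3 mm of target. Sensing and control errors
# (each Gaussian, in mm) add up, so even modest increases in either one
# sharply reduce reliability. All numbers here are illustrative.
random.seed(1)

def grasp_succeeds(sensing_std_mm, control_std_mm, tolerance_mm=3.0):
    sensing_err = random.gauss(0, sensing_std_mm)
    control_err = random.gauss(0, control_std_mm)
    return abs(sensing_err + control_err) < tolerance_mm

def success_rate(sensing_std_mm, control_std_mm, trials=10_000):
    return sum(grasp_succeeds(sensing_std_mm, control_std_mm)
               for _ in range(trials)) / trials

print(success_rate(0.5, 0.5))  # sub-millimetre errors: grasps almost always hold
print(success_rate(2.0, 2.0))  # slightly larger errors: failures become routine
```

Quadrupling each error source drops the success rate from near-certain to roughly three grasps in four — the kind of sensitivity that makes contact so hard to predict exactly.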


We have been trying to develop methods that improve robot dexterity, and while we have made some progress in recent years thanks to advances in Deep Learning, don’t expect a dexterous robot in the next five or 10 years.

Broadly speaking, what are machines really good at right now?

Machines are very good at identifying patterns and sifting through vast amounts of data. They’re also really good at doing computations correctly — far better than humans. And they are very systematic and very good at vigilance — which means you can have a system with a camera that just watches a door continuously, 24/7. And obviously, machines have far greater physical strength than humans.

As a result of these strengths, it’s not surprising that we have machines that surpass humans in certain aspects. But humans are far superior in terms of general intelligence that allows us to reason in complex, novel situations and in terms of dexterity, which allows us to manipulate objects that we haven’t encountered before. I am convinced that this will remain the case for quite some time into the future — for at least 20 and maybe 50 to 100 years.

Going forward, do you think most people will embrace AI and machine learning?

I do think we’re going to see real advances and benefits in many industries, from healthcare to transportation. In terms of being able to filter data, we are already enjoying the benefits of very fast algorithms for performing search. Most people don’t realize that this is a form of artificial intelligence. We’ve also readily embraced things like Waze and Google Maps, which can route traffic extremely efficiently because they have access to so much data. Those are just two examples of tools that we are already using every day.

I do not think we’re going to see a self-driving taxi in our lifetime, because it is extremely difficult to solve the engineering challenges involved in driving a car in an urban environment. I do think we’ll get better and better tools for driving on highways, which will be very valuable. But we’re not going to suddenly replace human drivers.

There has been a huge increase in e-commerce and online shopping, and there are many advantages to that. For one, people in rural areas can now access a huge selection of products at reasonable prices. The challenge is how to manage the delivery of all those orders, and as I mentioned, we’re working on developing robots that can assist humans in the warehouse to grasp items and package them for delivery. Again, this isn’t going to wipe out all the warehouse jobs; in fact, I think we’re going to have a shortage of human workers.

Overall, I’m not worried about robots or AI as a threat to humans. Anyone whose work involves providing human interaction, being creative, or doing anything that requires dexterity is safe.

Ken Goldberg is the William S. Floyd Jr. Distinguished Chair in Engineering at UC Berkeley and holds secondary appointments in Electrical Engineering and Computer Science, Art Practice, and the School of Information. He also holds an appointment in UC San Francisco’s Department of Radiation Oncology and is CEO of Ambidextrous Robotics.


