Rotman Management Magazine

QUESTIONS FOR Foteini Agrafioti


One of the other reasons we got so involved with AI research was that, a few years ago when we looked at the landscape in Canada, we saw something very troubling: Many of our brightest computer scientists — the inventors of Deep Learning, Reinforcement Learning, and all kinds of algorithms that were making great strides in consumer products — were inventing things here and then heading south of the border. RBC is one of Canada's largest employers, so we have a vested interest in keeping our best talent at home. A big focus for us has been creating jobs that are interesting enough to attract people who would otherwise leave.

You have said, “We should never assume that data is an objective entity.” Please elaborate.

Data is an entity that is created by real-world functions and operations, and by human beings like you and me. Whether we are accessing our phones, going to the doctor, travelling, or using our credit card, we are generating data that reflect the real world and how we interact with it. As humans, we often make biased decisions, and that bias is reflected in the data that is captured by machine learning systems. Apart from the inherent bias in how we live our lives from day to day, there is also the issue of biased business practices. How is data being collected? Who is collecting it, and how are they going about it? The data collection process itself is something we need to pay close attention to.

How might such bias affect financial services?

Whether you work in financial services or healthcare, you need to ensure that you’re being fair and inclusive of all the different types of people you are serving.

Looking specifically at financial services, one of the most critical areas for the application of AI algorithms is in lending, and understanding the risks involved. That is a common application of machine learning in banking. The challenge comes when we use models that are built with historical data that was collected at a time when we weren't aware of the bias issue. If you can't be certain that there has never been any bias in your lending decisions in the past, you have to be really careful about using traditional models.

That is actually one of the biggest obstacles right now for AI in financial services: In order to apply it in some sensitive areas, we have to ensure beyond a doubt that there is no bias. That means ensuring that these models are explainable. Broadly speaking, in the financial services sector, and specifically at RBC, our relationship with our clients is built on a foundation of trust, and we take the privacy and security of personal and financial information very seriously. With brand new technology, there is a recognition that we don't yet completely understand it. But this is not how AI has been approached so far in the tech sector. Many companies just put new tools out there and wait to see if there is any backlash. But because we operate on trust, it is unacceptable for us not to recognize that there are real risks associated with this technology.

You touched on the concept of 'explainability'. Talk a bit more about what that means.

The explainability of an algorithm has to do with the degree to which you can explain the context around how the AI makes a particular decision. It's important to remember that virtually all of the great machine learning models behind the many products we use today are, unfortunately, unexplainable. You have an input and an output, but you don't really know how the AI got there. For certain sectors — like healthcare and financial services — this is extremely limiting. In areas like lending, which has a serious impact on people's lives, you simply cannot be extending (or not extending) credit without understanding exactly why the algorithm made the decision. It goes back to the trust that we seek to maintain with our clients. In financial services, if there is no trust, your business won't last.


In addition to bias, one of the big problems with AI is that it is being developed by homogenous groups of individuals who think in similar ways. How is your institute tackling that issue?

If you look at the state of diversity in AI today, it is extremely sad. At Borealis AI, we are making a concentrated effort to ensure that we bring a variety of voices to the table — people of diverse genders and from varied ethnic and experiential backgrounds. We recognize that you can't take for granted that a product will be successful if you have a small group of similar-looking people with similar experiences agreeing that it is great. Along the way, we have uncovered some very interesting ideas thanks to all of the different voices involved.

We also have people who are responsible for the ethical use of AI. It is really important to have designated people whose expertise is to proactively evaluate a product from this vantage point. A lot of the pushback on AI has been aimed at groups that are very tech-focused without any consideration for what could go wrong. They just don't prioritize having an accountable person or process in place to evaluate the risks posed by these technologies.

The papers on your institute’s website have titles like ‘Stochastic Scene Layout Generation from a Label Set’. Clearly, this is very technical research. How can the average consumer expect to be impacted by it?

As indicated, Borealis AI does research as part of our product development. Since we're dedicated to building intellectual property in this space, we are focused on the state of the art and how it can impact our business down the road. The paper you cited was from the research area of Language Understanding, which is of great interest to RBC. For example, one of the ways we currently apply Natural Language Processing is to sort through financial news articles. Our goal is to understand how markets are evolving from day to day and hopefully to predict how world events might escalate and impact North American markets. To do this effectively, you have to have machine learning that can understand information in a contextual way. It must also have an understanding of historical events that may have led to a particular outcome in the market. We're doing research in this area to figure out how to build contextual understanding of language.

We also publish this research in scientific venues because we believe in academic freedom. Our goal is to contribute to the AI community with algorithmic advancements that we make while working on RBC products. This is part of our commitment to the Canadian AI ecosystem.

We all heard about that famous Google memo by James Damore, where he complained about the company's diversity initiatives and said that "women's brains are different from men's". What did you make of that?

It made me angry, but the fact is, this is a notion that is common in the tech sector. I've seen various forms of it since I was in school — even grad school — and I've also seen it in my career, particularly early on, in start-ups. There is still a belief that people are better at certain things and worse at others because of their gender. While that is disturbing, it is a reflection of our society and of how people understand gender. As a result of this mindset, I truly don't believe women have come close to reaching their true potential yet.

Dr. Foteini Agrafioti is the Chief Science Officer at RBC and Head of Borealis AI, the bank's AI research institute. She is an alumna of the Creative Destruction Lab at the Rotman School of Management. Dr. Agrafioti founded and served as CTO at Nymi, a biometrics security company and maker of the Nymi wristband. She is also an inventor of HeartID, the first biometric technology to authenticate users based on their unique cardiac rhythms.

