Regina Leader-Post

Regulator monitoring rise of AI in banking

‘Double-edged sword’ includes benefits, but also risks to customer information

- GEOFF ZOCHODNE

TORONTO The federal finance regulator is signalling it could tighten up rules around the use of artificial intelligence by banks, a practice the watchdog suggests could pose risks to lenders as decisions made by the “black box” technology grow trickier to explain.

Banks are using AI and machine-learning models to help in areas such as detecting fraud and underwriting loans, but Canada’s Office of the Superintendent of Financial Institutions has warned the trust in those conclusions could “erode” as it becomes harder to show how they were reached.

“AI presents challenges of transparency and explainability, auditability, bias, data quality, representativeness and ongoing data governance,” OSFI Assistant Superintendent Jamey Hubbs said last month in a speech, according to a transcript. “There may also be risks that are not fully understood and limited time would be available to respond if those risks materialize.”

Specifically on OSFI’s mind when it comes to AI is the issue of managing “model risk,” which is the possibility of a financial institution losing money or damaging its reputation because of a system’s design or use. If an AI model were fed biased data, for example, it could result in unfair decisions around the creditworthiness of borrowers.

OSFI says its existing expectations around managing model risk apply to the use of advanced analytics, but its guideline for banks, E-23, does not specifically reference artificial intelligence. Hubbs said OSFI plans on enhancing its expectations around managing model risk, including by creating “regulatory expectations” around the use of AI tools by financial institutions.

Those comments line up with earlier statements by OSFI and its officials, including a warning in the watchdog’s latest annual report about the rising use of advanced analytics via AI or machine learning.

“The credibility of analytical outcomes may erode as transparency and justification become more difficult to demonstrate and explain,” OSFI warned in the report. “OSFI continues to monitor developments in this area with plans to develop regulatory and supervisory expectations around the use of advanced analytics.”

OSFI’s plans for future regulatory action of some kind come as it says banks and other financial institutions are already using AI to analyze data, make credit decisions and automate certain tasks, among other things. Although OSFI has kept a watchful eye, Canada’s biggest lenders have not been exactly shy about their adoption of AI, which is viewed as a possible way of reducing costs, improving customer experience and increasing sales.

Saskatoon-based Concentra Bank refers to AI as a “double-edged sword,” as it can provide benefits to a financial institution but also present potential problems. Concentra is not currently using any AI models, but it plans to eventually because of the technology’s potential in areas such as analyzing risk.

“It allows you to thereby price things differently, manage your risk better, perhaps offer products that you weren’t before because you’re more comfortable doing them within your risk appetite,” said Don Coulter, president and CEO of Concentra, which acts as a wholesale bank and trust company for Canadian credit unions.

But the other edge of that sword is that AI models could use customer data in ways banned by regulators. People with AI expertise are also in short supply, at banks and regulators alike.

Another challenge is the technology’s “black-box nature,” as Concentra’s chief risk officer, Philippe Sarfati, referred to it in a blog post. In other words, data can go into an AI model and a calculation can come out, but it can be hard to judge what exactly happened in between.

“The behaviour of a black box system is observed only from its inputs and outputs,” Sarfati wrote. “AI models may be technically complex and not transparent, so financial institutions must find solutions to interpret AI models into their business context.”

OSFI’s E-23 guideline says a model should be continuously reviewed to evaluate its performance, and Sarfati wrote that the watchdog is considering bringing governance of AI models under the E-23 umbrella.

Meantime, “if you’re creating models that can’t be explained to regulators, you shouldn’t be using those models,” Coulter said.

OSFI plans to release a discussion paper on non-financial and technology risks in the spring that will additionally cover cybersecurity and the use of third-party providers, such as cloud-computing firms.

“Much of what we do next, its format and timing are contingent on the discussion paper, the comments we receive on it, and how that will fit into future supervision of this area,” an OSFI spokesperson said in an email.

“Going forward, one subject we are looking at is clarifying areas where artificial intelligence, machine learning and advanced analytics may be amplifying risk in areas such as auditability.”


PETER J. THOMPSON Artificial intelligence can cut costs and allows banks to write new types of business, but the technology has potential problems with transparency that are a concern to the Office of the Superintendent of Financial Institutions, according to recent statements by OSFI.
