Toronto Star

New Toronto company can alert firms to unintended consequences in their AI

Platform analyzes systems to detect faults before troubles arise

- TARA DESCHAMPS

Rarely a week goes by without Toronto tech worker Karthik Ramakrishnan seeing another example of artificial intelligence gone wrong.

Systems programmed with the technology have led to a French medical chatbot suggesting someone commit suicide, another created by Microsoft tweeting a 9/11 conspiracy theory and an Amazon.com Inc. recruiting tool downgrading resumés with references to women.

But Ramakrishnan is convinced this pattern can be curbed and many of the problems stemming from AI — machine-based technologies that learn from data — can be prevented.

That’s why he, Dan Adamson and Rahm Hafiz co-founded Armilla AI, which launched Thursday with $1.5 million in financial backing from investors including AI godfather Yoshua Bengio and Two Small Fish Ventures, a fund run by Wattpad’s Alan and Eva Lau.

Armilla is behind a new quality assurance platform that analyzes systems to detect faulty AI and predict its consequences — before troubles arise.

“No system is perfect, but our objective is to make them as perfect as possible,” said Ramakrishn­an, Armilla’s chief business officer.

Armilla’s platform delves into the systems clients have created, the data that trained their software, and their modelling and outcomes to run about 50 tests looking for issues with compliance, gender or ethical bias, and other unintended consequences.

For example, Armilla used its platform with a public data set filled with information about credit lending in Germany.

The bank behind the data set didn’t want its AI to discriminate against new immigrants, so it removed the field that recorded immigration status.

Armilla found the system discriminated against immigrants anyway: the bank had kept information on housing, and residence in multi-tenant apartments correlated so strongly with immigration status that it reintroduced the bias.

“This is how faults creep into systems, not intentionally, but there are these unintended consequences with the way we run our systems, and the second-order correlations that we miss are the kinds of things that Armilla’s platform is designed to surface,” Ramakrishnan said.
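The failure mode Ramakrishnan describes — a dropped protected attribute leaking back in through a correlated feature — can be illustrated with a short sketch. This is not Armilla’s actual method; it uses hypothetical, synthetic data in which multi-tenant housing stands in as a proxy for the removed immigration field, and checks the correlation between the two:

```python
# Illustrative sketch only (not Armilla's platform): even after a protected
# attribute is removed from training data, a remaining feature can act as a
# proxy for it. Auditing correlations between remaining features and the
# held-out attribute is one simple way to surface such second-order leakage.
import random

random.seed(0)

# Hypothetical records: immigration status was dropped from the training
# data, but we keep it aside here purely to audit the remaining features.
n = 1000
is_immigrant = [random.random() < 0.3 for _ in range(n)]

# In this toy data, multi-tenant housing is deliberately made to correlate
# strongly with the removed attribute (80% vs. 20% base rates).
multi_tenant = [
    (random.random() < 0.8) if imm else (random.random() < 0.2)
    for imm in is_immigrant
]

def pearson(xs, ys):
    """Pearson correlation coefficient for two equal-length sequences."""
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / m
    vx = sum((x - mx) ** 2 for x in xs) / m
    vy = sum((y - my) ** 2 for y in ys) / m
    return cov / (vx * vy) ** 0.5

r = pearson([int(b) for b in is_immigrant], [int(b) for b in multi_tenant])
print(f"correlation between housing feature and removed attribute: {r:.2f}")
```

A model trained on the housing feature alone can therefore still treat immigrants differently, which is why removing the sensitive column is not by itself a fix.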

The entire process is a time saver, he said, because while large and sophisticated companies keep teams in the hundreds just to run their systems through a growing number of scenarios, that work is often done manually and either sporadically or on a fixed schedule.

“Banking has been doing models for 20-plus years,” he said. “However, that process done manually takes anywhere between six months to a year for a single model, and an average-sized bank has about 400-plus models and they’re only growing.”

Armilla’s platform can quickly learn the sensitivities and riskiest parts of any system, so a company can run its tests repeatedly and uncover blind spots that traditional models miss.

But the goal really isn’t speed; it’s safety and ethics.

Both have become pressing issues as organizations in every sector turn to the technology, according to a September report from the University of Toronto’s Rotman School of Management.

“Technology and AI systems are not neutral or objective but exist in a social and historical context that can marginalize certain groups, including women, racialized and low-income communities,” said the report, called “An Equity Lens on Artificial Intelligence.”

It found that AI-based systems are a “double-edged sword” because they often help but are only as neutral as the data and algorithms their technology is based on.

For example, it pointed to an AI system for detecting cancerous skin lesions that was less likely to pick up cancers in dark-skinned people because it had been developed from a database composed of light-skinned populations.

Armilla is hoping to expose such issues and avoid “catastrophic errors.”

“There’s so many things that could happen in a complex system,” Ramakrishn­an said. “We want to ensure we can catch the big things as much as possible.”

THE CANADIAN PRESS Armilla AI co-founder Karthik Ramakrishnan is convinced many of the problems stemming from AI — machine-based technologies that learn from data — can be prevented.
