Toronto Star

Taking immigration to AI

Experts worry over decisions by algorithms that contain same old human bias

NICHOLAS KEUNG IMMIGRATION REPORTER

Wanted: A contractor to help immigration officials use algorithms and data mining to assess the personal risks of sending a failed refugee claimant home and to calculate whether a migrant is well-established enough to stay in Canada on humanitarian grounds.

This is not a fictional ad but a tender notice recently issued by Ottawa to explore the potential use of artificial intelligence and data analytics in Canada’s immigration and refugee system.

According to a new University of Toronto study, the job ad is just the latest example of government replacing human decision-making with machines — a trend it says is creating “a laboratory for high-risk experiments” that threaten migrants’ human rights and should alarm all Canadians.

“The nuanced and complex nature of many refugee and immigration claims may be lost on these technologies, leading to serious breaches of internationally and domestically protected human rights, in the form of bias, discrimination, privacy breaches, due process and procedural fairness issues,” warned the 88-page report being released Wednesday by U of T’s International Human Rights Program and Citizen Lab.

“These systems will have life-and-death ramifications for ordinary people, many of whom are fleeing for their lives.”

Concerned about the human impact of automated systems, researchers dug into public records, including public statements, policies and media reports, on the federal government’s adoption of the technologies in the immigration system.

More than 30 experts were consulted, including computer scientists, technologists, lawyers, advocates and academics from Canada, the U.S., Hong Kong, South Korea, Australia and Brazil.

Researchers have also submitted 27 separate access-to-information requests to eight government departments and agencies, but have yet to receive any data or response.

Study co-author Petra Molnar said Canada has used automated decision-making tools since at least 2014 to “triage” immigration and visa applications into simple cases and complex ones that require further review by officers due to red flags the machines were trained to look for.

“Algorithms are by no means neutral or objective. It’s a set of instructions and recipes based on previous data analyses that you use to teach the machine to make a decision. It doesn’t think or understand the decision it makes. It’s a rubber-stamping process,” explained Molnar, a research associate with the International Human Rights Program.

“Biases of the individuals designing an automated system or selecting the data that trains it can result in discriminatory outcomes that are difficult to challenge because they are opaque.”

The study, for instance, looked at an algorithm used in some U.S. courts to assess the risk of reoffending when ordering pretrial detention and found that racialized and vulnerable people were more likely to be held behind bars than white offenders.

Cynthia Khoo of the Citizen Lab said the increasing use of artificial intelligence or algorithmic decision-making tools speaks to the broader trend of “technosolutionism,” which assumes technology is a panacea for human frailties and errors.

“The problem, however, is that technology — which is made and designed by humans and trained on human decisions and human-produced data — comes with these exact same problems, but wrapped up in a more scientific-looking box, and with the additional problem that not everyone realizes those problems remain,” Khoo said.

The report recommends Ottawa establish an independent, arm’s-length oversight body to review all uses of automated decision systems by the federal government.

It also wants Ottawa to publish all its current and future uses of artificial intelligence and create a task force to better understand the current and prospective impacts of these technologies.

Petra Molnar, co-author of the U of T study, says Canada has used automated decision-making tools since at least 2014. ROMI LEVINE

A report suggests Ottawa use an independent, arm’s-length oversight body to review uses of automated decision systems. JENNY KIM OF RYOOKYUNG KIM DESIGN
