
Should machines screen would-be immigrants?

- Colby Cosh

The policy nerds, God bless them, have got their teeth into some fresh red meat. In April several agencies of the federal government issued a Request for Information to the software business, seeking feedback on the possible use of “AI/ML” in immigration screening and litigation. “AI/ML” stands for “artificial intelligence/machine learning.” The RFI suggested that AI applications for immigration could include the sorting of last-ditch “humanitarian and compassionate grounds” applications for Canadian residency and even-more-last-ditch “pre-removal risk assessments” available for refugees whose claims have failed.

This has attracted the attention of AI specialists, security analysts, and lawyers, including the interdisciplinary “Citizen Lab” at the University of Toronto’s Munk School of Global Affairs & Public Policy. The federal public service is working out the principles involved in using artificial intelligence at various levels of decision-making, and no doubt some of you are envisioning a robot angel with a flaming sword or perhaps a light sabre, turning aside hordes of the poor and miserable at Canada’s gates.

“Artificial intelligence” and “machine learning” are tricky, subtle terms. You should not be imagining an autonomous, disembodied HAL-9000 human-type mind when you see them. You should probably be imagining something more like your credit score.

“AI/ML” is a catch-all term applied more and more often to automated procedures for applying statistics to data. The terms do not necessarily imply enormous mathematical complexity, and still less do they denote godlike inerrancy. They simply indicate a lack of costly human supervision in the creation of statistical tests and scores. Some algorithm “looks at” a field of training data associated with outcomes, and generates optimum prediction rules. The resulting rules themselves might be pretty simple. Anybody who has had advertisements or shopping site recommendations served to him on the World Wide Web already knows that they can be simple to the point of sheer cretinism.
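For the curious, the whole business can be compressed into a few lines of code. What follows is a minimal sketch in Python using the popular scikit-learn library; the lending-style numbers and feature names are invented for illustration, not drawn from any government system:

from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical training data: each row is [years_of_credit_history,
# prior_defaults], and each label records whether that account was
# repaid (1) or not (0). All figures are invented.
X = [[1, 2], [2, 3], [5, 0], [7, 0], [3, 1], [8, 1], [1, 3], [6, 0]]
y = [0, 0, 1, 1, 0, 1, 0, 1]

# The "learning": an algorithm looks at the labelled examples and
# derives a prediction rule on its own.
model = DecisionTreeClassifier(max_depth=2).fit(X, y)

# The resulting rule is often just a threshold or two, printable as text.
print(export_text(model,
    feature_names=["years_of_credit_history", "prior_defaults"]))

# Scoring a new case with the learned rule:
print(model.predict([[4, 0]]))  # e.g. [1], i.e. predicted "repaid"

The printed tree is the entire “model”: a couple of if-then thresholds, which is roughly the kind of rule a credit score quietly applies to you.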

The federal government has made pious noises about keeping humans in the decision-making loop when it comes to immigration applications. But any introduction of “machine learning” to the overall process is bound to awaken protective instincts in the academy and the world of think-tanks. A “risk score” for refugees would have all the same obvious problems that credit scores sometimes do — and then some.

A predictive score validated on old “training” data is necessarily backward-looking. No such score can be better than the measure used to define validity in the first place: that’s a part of the job the computer can’t handle for you. And automated decisions about the importance of different variables can sometimes be nonsensical. “Machine learning” procedures don’t offer handy narrative explanations concerning their inner workings, and no one is ever totally comfortable with an impenetrable “black box” deciding some aspect of his fate.

But, then again, it is not as though human decision-making doesn’t have all of the same problems in different forms. A choice-making human can be influenced mistakenly by his experiences, can form nonsensical prejudices, or can obscure or conceal the actual logic of a decision by means of contrived pseudo-reasoning. The arguments that emerge over AI/ML in government will resemble the ones over standardized testing in education, and will feature all the same indignation and fear.

It is hard to derive an overall lesson from the history of artificial intelligence. One is tempted to believe that humans and machines are necessarily better working together, as they seem to be when it comes to chess-playing. On the other hand, the history of diagnostic scoring in medicine suggests that human stubbornness about giving way to an algorithm (or a meta-algorithm that makes algorithms) can be harmful. Unless you give the brute statistical rule some sort of inherent priority or weight, prideful apes may ignore it and carry on with their crazy, idiosyncratic ape decisions as before.

None of this will be easy, and what critics rightly fear is that the incentives are tipped heavily to the side of the machines. A Request for Information is, by definition, an invitation to salesmen: people in the artificial intelligence business are going to make the wildest possible claims for artificial intelligence. And computer cycles are cheaper than human labour, making it more likely that the machines’ problems will be ignored by budget-conscious bureaucrats.

We all, citizens or not, do have to live with the existence of credit scoring, medical diagnostic rules, and other algorithmic judges that exercise terrible power. No one seems to believe that algorithms can be excluded from immigration altogether. But even the critics don’t necessarily agree on the guiding principles. The Citizen Lab, for example, suggests that all source code for government decision-making systems should be exposed to the public by default. As two other specialists pointed out in an earlier Policy Options paper, however, this would allow for unscrupulous reverse-engineering of the algorithms. There would, almost immediately, be a market for knowledge of how to game the computer-augmented system.

Then again, this merely reminds us that the planet Earth is already freighted with several million tons of expensive professional consultants on Canadian immigration. It would not be logical to be too afraid of a form of reverse engineering that already happens on an enormous scale in the “human” part of the system.

Photo: The federal government has issued a Request for Information on the possible use of AI in immigration screening, Colby Cosh writes. JACK BOLAND / POSTMEDIA NEWS FILES
