National Post (National Edition)
Should machines screen would-be immigrants?
The policy nerds, God bless them, have got their teeth into some fresh red meat. In April several agencies of the federal government issued a Request for Information to the software business, seeking feedback on the possible use of “AI/ML” in immigration screening and litigation. “AI/ML” stands for “artificial intelligence/machine learning.” The RFI suggested that AI applications for immigration could include the sorting of last-ditch “humanitarian and compassionate grounds” applications for Canadian residency and even-more-last-ditch “pre-removal risk assessments” available for refugees whose claims have failed.
This has attracted the attention of AI specialists, security analysts, and lawyers, including the interdisciplinary “Citizen Lab” at the University of Toronto’s Munk School of Global Affairs. The federal public service is working out the principles involved in using artificial intelligence at various levels of decision-making, and no doubt some of you are envisioning a robot angel with a flaming sword or perhaps a light sabre, turning aside hordes of the poor and miserable at Canada’s gates.
“Artificial intelligence” and “machine learning” are tricky, subtle terms. You should not be imagining an autonomous, disembodied HAL 9000 human-type mind when you see them. You should probably be imagining something more like your credit score.
“AI/ML” is a catch-all term applied more and more often to automated procedures for applying statistics to data. The terms do not necessarily imply enormous mathematical complexity, and still less do they denote godlike inerrancy. They simply indicate a lack of costly human supervision in the creation of statistical tests and scores. Some algorithm “looks at” a field of training data associated with outcomes, and generates optimum prediction rules. The resulting rules themselves might be pretty simple. Anybody who has had advertisements or shopping site recommendations served to him on the World Wide Web already knows that they can be simple to the point of sheer cretinism.
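To make that concrete, here is a toy sketch (entirely invented, and nothing like any system the government has actually proposed) of a bare-bones “machine learning” step: a program that searches old data for the single cutoff rule that best predicted past outcomes. The variable names and data are illustrative assumptions only.

```python
# Toy illustration with invented data: "learning" here just means
# searching past examples for the threshold rule that best fits them.

def learn_threshold(examples):
    """examples: list of (feature_value, outcome) pairs from old data.
    Returns the cutoff maximizing accuracy on that training data."""
    candidates = sorted({x for x, _ in examples})
    best_cut, best_acc = None, -1.0
    for cut in candidates:
        # The candidate "model" is simply: predict True if value >= cut.
        correct = sum((x >= cut) == outcome for x, outcome in examples)
        acc = correct / len(examples)
        if acc > best_acc:
            best_cut, best_acc = cut, acc
    return best_cut, best_acc

# Hypothetical past applications: (some numeric feature, whether approved)
training = [(3, False), (5, False), (6, True), (8, True), (9, True)]
cut, acc = learn_threshold(training)
print(cut, acc)  # → 6 1.0
```

The “model” that comes out the other end is nothing more than “approve if the feature is at least 6” — a rule exactly as simple, and exactly as backward-looking, as the data it was fitted to.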
The federal government has made pious noises about keeping humans in the decision-making loop when it comes to immigration applications. But any introduction of “machine learning” to the overall process is bound to awaken protective instincts in the academy and the world of thinktanks. A “risk score” for refugees would have all the same obvious problems that credit scores sometimes do — and then some.
A predictive score validated on old “training” data is necessarily backward-looking. No such score can be better than the measure used to define validity in the first place: that’s a part of the job the computer can’t handle for you. And automated decisions about the importance of different variables can sometimes be nonsensical. “Machine learning” procedures don’t offer handy narrative explanations concerning their inner workings, and no one is ever totally comfortable with an impenetrable “black box” deciding some aspect of his fate.
But, then again, it is not as though human decision-making doesn’t have all of the same problems in different forms. A choicemaking human can be influenced mistakenly by his experiences, can form nonsensical prejudices, or can obscure or conceal the actual logic of a decision by means of contrived pseudo-reasoning. The arguments that emerge over AI/ML in government are likely to resemble the ones over standardized testing in education, and will feature all the same indignation and fear.
It is hard to derive an overall lesson from the history of artificial intelligence. One is tempted to believe that humans and machines are necessarily better working together, as they seem to be when it comes to chess-playing. On the other hand, the history of diagnostic scoring in medicine suggests that human stubbornness about giving way to an algorithm (or a meta-algorithm that makes algorithms) can be harmful. Unless you give the brute statistical rule some sort of inherent priority or weight, prideful apes may ignore it and carry on with their crazy, idiosyncratic ape decisions as before.
None of this will be easy, and what critics rightly fear is that the incentives are tipped heavily to the side of the machines. A Request for Information is, by definition, an invitation to salesmen: people in the artificial intelligence business are going to make the wildest possible claims for artificial intelligence. And computer cycles are cheaper than human labour, making it more likely that the machines’ problems will be ignored by budget-conscious bureaucrats.
We all, citizens or not, do have to live with the existence of credit scoring, medical diagnostic rules, and other algorithmic judges that exercise terrible power. No one seems to believe that algorithms can be excluded from immigration altogether. But even the critics don’t necessarily agree on the guiding principles. The Citizen Lab, for example, suggests that all source code for government decision-making systems should be exposed to the public by default. As two other specialists pointed out in an earlier Policy Options paper, however, this would allow for unscrupulous reverse-engineering of the algorithms. There would, almost immediately, be a market for knowledge of how to game the computer-augmented system.
Then again, this merely reminds us that the planet Earth is already freighted with several million tons of expensive professional consultants on Canadian immigration. It would not be logical to be too afraid of a form of reverse engineering that already happens on an enormous scale in the “human” part of the system.