Why police need facial recognition

A COMPLEX SHIFT IN PUBLIC SAFETY METHODS IS CALLING FOR US TO LOOK TO THE FUTURE

- Neil Desai

In light of the recent public discourse over race and policing, some of the world's largest technology companies, including Microsoft, Amazon and IBM, announced they will no longer provide facial recognition technologies to law enforcement agencies.

Even before the death of George Floyd and the important discussion over equality in the justice sector that followed, police chiefs in Canada were urgently reviewing their use of Clearview AI, a facial recognition tool that canvasses online images for potential matches to images of suspects or victims. That urgency was driven by media reports suggesting that the company's software may infringe on privacy rights. The cause has been taken up by five provincial privacy commissioners and their federal counterpart. Parliamentarians have also committed to studying the issue.

Many police leaders are rightly asking how such technologies ended up in their services without going through appropriate procurement channels, and without clear policies around the use of facial recognition and other artificial intelligence applications. Yet the presumption by some commentators that our police agencies are as uninterested in privacy rights as autocratic regimes is an unfounded leap.

The reality is that our police services are dealing with a complex shift in public safety challenges brought about by the digital age. At the same time, the critical laws governing investigations still resemble those of the 19th century, and the funding models and tools we afford our police haven't radically evolved from those used in the 20th century.

Crime statistics in Canada show a decline in traditional crimes such as burglary and vandalism. However, this doesn't tell the full story: there is immense growth in cybercrime, such as phishing schemes, as well as in cyber-enabled crimes, such as human trafficking and terrorist incitement.

Such crimes tend to victimize vulnerable populations at a greater rate. This includes children, seniors, low-income persons, immigrants and those suffering from mental health difficulties. As Canadians have seen, certain vulnerable populations of specific ethnocultural profiles have been victimized at greater rates, most prominently documented in the volume of missing and murdered Indigenous women and girls.

One of the fastest-growing areas of crime in the digital age is online child sexual exploitation. The advent of the Internet has emboldened offenders and is linked to domestic and international human trafficking. It's a crime that's global in nature and affects children as young as newborns. It haunts its victims well beyond the memories of the painful acts, as the content is easily reproduced. And the perpetrators are protected by the security and anonymity afforded by social media sites, dark-net markets, video-streaming services and online payment systems.

Statistics Canada reported a 233 per cent increase in online child sexual exploitation cases from 2006 to 2016. The Canadian Centre for Child Protection has detected over 13.5 million images and videos of children being sexually abused online in the past three years.

One of the most overlooked areas of this dark reality is the toll taken on the officers who investigate these heinous crimes. They have to look at millions of images, and hour after hour of video, of children being sexually abused in order to help identify victims and suspects. Much of this work involves sifting through horrific content that has already been reviewed by officers in other jurisdictions, to ensure there are no new victims or suspects to be identified. Many of these officers suffer severe bouts of post-traumatic stress disorder as a result.

It isn't a giant leap to believe that, knowing the volume of content officers are exposed to, the risks of severe harm to vulnerable children and the ability of AI applications to assist in investigations, Canadians would want our investigators to use technologies that help identify victims and suspects while reducing harm to themselves. There may be parallels in other emerging areas of crime in the digital age, where simply hiring more officers, politicians' most popular response to a growing public safety challenge, may not make a dent in the problem.

This isn't to suggest that police should have unfettered access to technologies that bump up against civil liberties or serve visible minorities poorly. There are important questions about the effectiveness of these technologies: independent studies have shown that facial recognition software produces a greater number of errors when analyzing persons of colour. The greatest contributor to this problem is that the training data the AI algorithms learn from are predominantly made up of images of white males, which embeds a structural bias in the software itself. Technology vendors must prioritize providing tools that serve the whole population, without a wide margin of error based on race, if they want such tools to be seen as trustworthy.

Police leaders and their oversight bodies must do a better job of highlighting the changing public-safety landscape and how they will balance the rights of all citizens, regardless of their race, while performing their societal obligation to keep communities safe, including online.

They will have to build trust with citizens by being transparent about what technological capabilities they would like to pursue. More importantly, they must be upfront about how they will evaluate new technologies and what data will power them. They should work to identify the potential biases that can result from such technology and to mitigate them. Rather than maintaining an adversarial relationship with privacy advocates, our police agencies should work in concert with them and with other stakeholders, such as victims' advocates, community organizations committed to racial equality and police unions, to find reasonable solutions.

In the absence of a thoughtful policy framework and unbiased technologies, informed by the modern challenges of public safety and built on the principles of transparency and trust, there is a serious risk: reactive policies, such as bans on facial recognition and other AI applications for policing, will leave vulnerable populations like children, visible minorities and others at greater risk than is necessary.

Neil Desai is an executive with Magnet Forensics and a senior fellow with the Munk School of Global Affairs and Public Policy at the University of Toronto and the Centre for International Governance Innovation. He previously served in senior roles with the Government of Canada.

DAMIR SAGOLJ / REUTERS FILES: Facial recognition software has its ethical boundaries, but it is a modern way to enforce the law, Neil Desai writes.
