Ottawa Citizen

Ottawa officers used Clearview, force says

Privacy watchdog launches probe following RCMP announcement

- SHAAMINI YOGARETNAM syogaretnam@postmedia.com twitter.com/shaaminiwhy

Ottawa police now say members of the force’s internet child exploitation unit, which investigates child pornography crimes, “created accounts with Clearview AI and tested the efficacy” of the controversial facial recognition software whose unsanctioned use by police forces in this country has prompted internal and external investigations.

City police said Friday that the service was in the process of “polling all of our members” to determine how many officers and in which units had downloaded the software and used it on a free trial basis.

“We expect this will take approximately four to six weeks to complete,” police said in a statement. Police could not immediately say in what circumstances the software had been used.

Staff sergeants have been asking members of their units whether any officers downloaded the program. The force has previously said the only sanctioned use of facial recognition software by its officers, and the only one paid for on the public dime, occurred in a three-month pilot of different software last year.

Controversy has swirled around law enforcement’s use of facial recognition technology since a January investigation by the New York Times into a company called Clearview AI. The Times’ story detailed how the company had created a massive database of open-sourced images scraped from websites across the internet, including Facebook. Police could then use the database for comparison with things like surveillance images.

Other facial recognition software, like one piloted by Ottawa police in 2019, compared images not with a database of publicly scraped images, but with internal mug shots that had already been legally obtained by police through arrests and the laying of criminal charges.

This newspaper asked Ottawa police on Jan. 24 whether they had tested or purchased Clearview AI or any other facial recognition technology. Nearly three weeks later, police responded that they had piloted facial recognition software, but did not immediately name it; the pilot turned out to involve a different product. It has taken five weeks overall for police to preliminarily answer that officers did test Clearview.

The Royal Canadian Mounted Police initially would not say whether it had used Clearview, stating only that it “does not comment on specific investigative tools or techniques. However, we continue to monitor new and evolving technology.”

Canadian police forces began rethinking and reissuing their responses this week, when it was revealed Clearview’s client list had been hacked.

Subsequent reporting by Buzzfeed News and The Toronto Star, whose reporters viewed data on the client list, revealed that Clearview AI’s clients included not just law enforcement agencies abroad and in Canada, but also private companies.

That was news not just to the public, but also to police services that had not known the extent of Clearview’s use among officers acting without approval from their bosses.

RCMP on Thursday revealed its use of the technology, which prompted the federal privacy commissioner Friday to launch an investigation under the Privacy Act.

Privacy experts across the country have sounded an alarm over use of Clearview AI by law enforcement without a robust legal framework underpinning it.

The Office of the Privacy Commissioner of Canada had already announced it would jointly investigate Clearview AI with privacy regulators in Quebec, British Columbia and Alberta.

Ontario’s privacy commissioner, Brian Beamish, has urged any police service using Clearview AI to immediately halt the practice and to contact the commission.

“We question whether there are any circumstances where it would be acceptable to use Clearview AI,” Beamish said in mid-February, when Toronto police revealed use by its officers.

Ottawa police said Friday that once an internal survey was done and a “full review is conducted on the instances where the Clearview AI was used,” police would report the findings and any recommendations to the police board, which governs the service.

Police have said any plans to permanently implement facial recognition technology would be subject to police board approval and would require the service to fully explore ethical, legal and practical implications.

Police also revealed Friday that, when they piloted the NeoFace Reveal technology in 2019, Clearview AI had not been “invited to bid and did not bid in this pilot project.”

Criminal charges did result from use of the approved facial recognition technology pilot in 2019.

Police have not said whether charges resulted from the use of Clearview AI.

Chief Peter Sloly, speaking this week, was clear that, while he had no plans to use facial recognition technology, he saw it as the way of the future.

He gave the example of investigating child pornography, the exact work being done by the Ottawa officers who have admitted to trying the software, in which even a simple case can have hundreds of thousands of images as evidence.

Someone has to sit and look at “the most devastating images that a human being can subject themselves to,” Sloly said.

“Why would I take on that human toll when I could use facial recognition and other AI-related technology to do the large data dump, threat assessment, risk assessment, evidentiary assessment and only expose the human being to a limited percentage of that in order to get to a successful investigation?”

The force is expected to answer questions on its use of technology and privacy considerations at a police board committee meeting Monday.

TONY CALDWELL Ottawa police say they have ‘tested’ Clearview AI facial recognition software that was the subject of a New York Times exposé.
