Pittsburgh Post-Gazette

We built a system like Apple’s to flag child sexual abuse material — and concluded the tech was dangerous

By Jonathan Mayer and Anunay Kulshrestha

Earlier this month, Apple unveiled a system that would scan iPhone and iPad photos for child sexual abuse material (CSAM). The announcement sparked a civil liberties firestorm, and Apple’s own employees have been expressing alarm. The company insists reservations about the system are rooted in “misunderstandings.” We disagree.

We wrote the only peer-reviewed publication on how to build a system like Apple’s — and we concluded the technology was dangerous. We’re not concerned because we misunderstand how Apple’s system works. The problem is, we understand exactly how it works.

Our research project began two years ago, as an experimental system to identify CSAM in end-to-end-encrypted online services. As security researchers, we know the value of end-to-end encryption, which protects data from third-party access. But we’re also horrified that CSAM is proliferating on encrypted platforms. And we worry online services are reluctant to use encryption without additional tools to combat CSAM.

We sought to explore a possible middle ground, where online services could identify harmful content while otherwise preserving end-to-end encryption. The concept was straightforward: If someone shared material that matched a database of known harmful content, the service would be alerted. If a person shared innocent content, the service would learn nothing. People couldn’t read the database or learn whether content matched, since that information could reveal law enforcement methods and help criminals evade detection.
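For readers who want to see the matching concept concretely, here is a deliberately naive sketch in Python. Everything in it is hypothetical (the BLOCKED_HASHES set, the sample inputs), and it uses an exact SHA-256 hash where a deployed system would use a perceptual hash; it also omits the cryptography that keeps the database and the match result hidden, which is the hard part of the actual design.

```python
import hashlib

# Hypothetical database of hashes of known harmful content.
# "BLOCKED_HASHES" and its entries are made up for illustration.
BLOCKED_HASHES = {
    hashlib.sha256(b"example-flagged-content").hexdigest(),
}

def content_hash(data: bytes) -> str:
    # Deployed systems use a perceptual hash so near-duplicate images
    # still match; SHA-256 stands in for that matching function here.
    return hashlib.sha256(data).hexdigest()

def service_checks_upload(data: bytes) -> bool:
    # Returns True when shared content matches the database.
    # In the cryptographic version of this design, the service learns
    # only this one bit (and only on a match), and the user can never
    # read the database or learn whether their content matched.
    return content_hash(data) in BLOCKED_HASHES

print(service_checks_upload(b"example-flagged-content"))  # True
print(service_checks_upload(b"vacation-photo"))           # False
```

Note that nothing in this structure constrains what the database contains, a point the next paragraphs return to.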

Knowledgeable observers argued a system like ours was far from feasible. After many false starts, we built a working prototype. But we encountered a glaring problem.

Our system could be easily repurposed for surveillance and censorship. The design wasn’t restricted to a specific category of content; a service could simply swap in any content-matching database, and the person using that service would be none the wiser.

A foreign government could, for example, compel a service to out people sharing disfavored political speech. That’s no hypothetical: WeChat, the popular Chinese messaging app, already uses content matching to identify dissident material. India enacted rules this year that could require prescreening content critical of government policy. Russia recently fined Google, Facebook and Twitter for not removing pro-democracy protest materials.

We spotted other shortcomings. The content-matching process could have false positives, and malicious users could game the system to subject innocent users to scrutiny.
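A toy example makes the false-positive risk concrete. Perceptual hashes deliberately compress images so that near-duplicates match, which guarantees that some entirely different inputs collide; the four-“pixel” hash below is an invented miniature of that behavior, not any real algorithm.

```python
def toy_perceptual_hash(pixels):
    # Invented toy "perceptual hash": one bit per pixel, set when the
    # pixel is brighter than the image's average brightness.
    avg = sum(pixels) / len(pixels)
    return tuple(p > avg for p in pixels)

# Two different "images" that share the same bright/dark pattern
# collide, producing a false positive against a flagged hash.
image_a = [10, 200, 30, 220]
image_b = [90, 140, 80, 160]
assert image_a != image_b
assert toy_perceptual_hash(image_a) == toy_perceptual_hash(image_b)
```

The gaming risk is the flip side: an adversary who can search for collisions can craft innocuous-looking content that matches the database and send it to a target.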

We were so disturbed that we took a step we hadn’t seen before in computer science literature: We warned against our own system design, urging further research on how to mitigate the serious downsides. We’d planned to discuss paths forward at an academic conference this month.

That dialogue never happened. The week before our presentation, Apple announced it would deploy its nearly identical system on iCloud Photos, which exists on more than 1.5 billion devices. Apple’s motivation, like ours, was to protect children. And its system was technically more efficient and capable than ours. But we were baffled to see that Apple had few answers for the hard questions we’d surfaced.

China is Apple’s second-largest market, with probably hundreds of millions of devices. What stops the Chinese government from demanding Apple scan those devices for pro-democracy materials? Absolutely nothing, except Apple’s solemn promise. This is the same Apple that blocked Chinese citizens from apps that allow access to censored material, that acceded to China’s demand to store user data in state-owned data centers and whose chief executive infamously declared, “We follow the law wherever we do business.”

Apple’s muted response about possible misuse is especially puzzling because it’s a high-profile flip-flop. After the 2015 terrorist attack in San Bernardino, Calif., the Justice Department tried to compel Apple to facilitate access to a perpetrator’s encrypted iPhone. Apple refused, swearing in court filings that if it were to build such a capability once, all bets were off about how that capability might be used in the future.

“It’s something we believe is too dangerous to do,” Apple explained. “The only way to guarantee that such a powerful tool isn’t abused ... is to never create it.” That worry is just as applicable to Apple’s new system.

Apple has also dodged the problems of false positives and malicious gaming, sharing few details about how its content matching works.

The company’s latest defense of its system is that there are technical safeguards against misuse, which outsiders can independently audit. But Apple has a record of obstructing security research. And its vague proposal for verifying the content-matching database would flunk an introductory security course.

Apple could implement stronger technical protections, providing public proof that its content-matching database originated with child-safety groups. We’ve already designed a protocol it could deploy. Our conclusion, though, is that many downside risks probably don’t have technical solutions.
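As a simplified illustration of the general idea (not our actual protocol): a device could refuse any content-matching database whose digest is not signed by every named child-safety organization. The sketch below uses Ed25519 signatures from the Python cryptography package; the keys, the database bytes and the function names are all stand-ins.

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Stand-in for a serialized content-matching database.
database = b"...serialized hash database..."
digest = hashlib.sha256(database).digest()

# In practice each child-safety organization would hold its own
# long-term key; keys are generated here only so the sketch runs.
origin_keys = [Ed25519PrivateKey.generate() for _ in range(2)]
signatures = [key.sign(digest) for key in origin_keys]
public_keys = [key.public_key() for key in origin_keys]

def database_is_attested(digest, signatures, public_keys):
    # Accept the database only if every named origin signed this digest.
    try:
        for sig, pub in zip(signatures, public_keys):
            pub.verify(sig, digest)
        return True
    except InvalidSignature:
        return False

print(database_is_attested(digest, signatures, public_keys))  # True
print(database_is_attested(hashlib.sha256(b"tampered").digest(),
                           signatures, public_keys))          # False
```

Provenance checks of this kind address where the database came from, not what pressure could later be applied to those origins.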

Apple is making a bet that it can limit its system to certain content in certain countries, despite immense government pressures. We hope it succeeds in both protecting children and affirming incentives for broader adoption of encryption. But make no mistake that Apple is gambling with security, privacy and free speech worldwide.

Jonathan Mayer is an assistant professor of computer science and public affairs at Princeton University. He previously served as technology counsel to then-Sen. Kamala Harris and as chief technologist of the Federal Communications Commission Enforcement Bureau. Anunay Kulshrestha is a graduate researcher at the Princeton University Center for Information Technology Policy and a PhD candidate in the department of computer science.
