
MACFORMAT INVESTIGATES

Apple’s controversial child protection plans

- WRITTEN BY CHARLOTTE HENRY

Apple was mired in controversy when, in August, it announced plans to scan for collections of known Child Sexual Abuse Material (CSAM) on iCloud Photos, along with other child safety measures in Messages. Yet weeks later, it changed tack, pausing both programmes to “take additional time to collect input and make improvements before releasing these critically important child safety features.”

The timing of Apple’s August announcements certainly caused confusion. It was the CSAM scanning that provoked the biggest backlash from privacy advocates, with the American digital rights campaign organisation the Electronic Frontier Foundation calling it a “backdoor to your private life.”

What was Apple doing?

The important thing to note was that CSAM detection for iCloud Photos was intended to look only for matches to known CSAM images. The images in that database were to have been acquired and validated as CSAM by a minimum of two child safety organisations, meaning innocent things like a parent taking a funny picture of their baby in the bath would not have been picked up.

The database of CSAM images was never going to be downloaded to a user’s iPhone either. Instead, a database of cryptographic hashes based on those images was. Apple called this hashing technology NeuralHash.
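
To make the distinction concrete, here is a deliberately simplified sketch of the idea of matching hashes rather than photos. NeuralHash is a perceptual hash wrapped in further cryptography, none of which is reproduced here; a plain SHA-256 digest and an in-memory set merely stand in for the on-device database.

```swift
import CryptoKit
import Foundation

// Purely illustrative: the device would hold only a database of hashes,
// never the images themselves. A plain SHA-256 digest stands in for
// Apple's NeuralHash, which is perceptual and cryptographically blinded.
let knownHashes: Set<String> = []   // would be shipped inside the OS, not user-visible

func imageMatchesDatabase(_ imageData: Data) -> Bool {
    let digest = SHA256.hash(data: imageData)                        // hash the photo...
    let hex = digest.map { String(format: "%02x", $0) }.joined()
    return knownHashes.contains(hex)                                 // ...and compare hashes only
}
```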

The aim was to check whether on-device images matched known CSAM. If collections of known CSAM images were found, Apple would have been alerted. When that happened, the company would have conducted a human review – so an actual person would have checked whether an error had been made. If the matches were confirmed, Apple would then have filed a report with The National Center for Missing & Exploited Children (NCMEC), an American non-profit.
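
That review-only-above-a-threshold logic might look something like the sketch below. All of the names are hypothetical and the threshold value is a placeholder; the point is simply that nothing reaches a human reviewer until an account accumulates multiple matches.

```swift
import Foundation

// Hypothetical sketch of the flow described above: count matches against the
// known-hash database and only flag an account for human review once a
// threshold is crossed. A confirmed review would then lead to a report to NCMEC.
struct MatchTracker {
    let reviewThreshold: Int
    private(set) var matchCount = 0

    // Returns true when the account should be escalated to a human reviewer.
    mutating func recordUpload(matchedKnownHash: Bool) -> Bool {
        if matchedKnownHash { matchCount += 1 }
        return matchCount >= reviewThreshold
    }
}

var tracker = MatchTracker(reviewThreshold: 30)      // placeholder threshold
let escalate = tracker.recordUpload(matchedKnownHash: false)
print(escalate)                                      // false: a lone non-match changes nothing
```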

In a support document, Apple said “in a case where the system identifies photos that do not match known CSAM images, the account would not be disabled, and no report would be filed to NCMEC.” The company also said that it would not add to the database of known CSAM hashes, but that it was “obligated to report any instances we learn of to the appropriate authorities.”

This was only going to happen to photos being uploaded to iCloud Photos, not to the private on-device iPhone photo library, nor anywhere else you might store images on a device. Turning off iCloud Photos would have deactivated the process.

Why the controversy?

One might think that stopping the sharing of child abuse images is a worthy aim. And it obviously is. The row was about whether the end justified the means, and whether there was a slippery slope into Apple scanning for other material, illegal or otherwise. Furthermore, the very idea of Apple, or any tech company, scanning images in any way is, ultimately, an invasion of privacy.

While other tech companies have had similar systems for a while, Apple’s plans were particularly controversial. In recent years the company has ramped up its rhetoric around privacy, not least with its “Privacy. That’s iPhone.” advertising campaign.

How does Apple justify it?

Apple has long insisted that on-device processing preserves user privacy better than server-side processing. Erik Neuenschwander, head of Privacy at Apple, told TechCrunch that the on-device system being introduced was “really the alternative to where users’ libraries have to be processed on a server that is less private.

“The thing that we can say with this system is that it leaves privacy completely undisturbed for every other user who’s not into this illegal behaviour,” he added.

Neuenschwander also explained why there was a threshold for Apple issuing a report – the system was supposed to be triggered by a collection of known CSAM, not an individual image. Surely, though, having just one such image is one too many? “We want to ensure that the reports that we make are high-value and actionable, and one of the notions of all systems is that there’s some uncertainty built in to whether or not that image matched,” he argued. “The threshold allows us to reach that point where we expect a false reporting rate of one in 1 trillion accounts per year.” Apple’s privacy chief also insisted that the structure of the system meant it would be almost impossible for law enforcement or other agencies to force Apple to scan for material beyond known illegal CSAM.
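
To see why a threshold pushes the account-level false-positive rate down so far, here is a back-of-the-envelope calculation. The per-photo error rate, the library size and the threshold of 30 below are all made-up illustrative numbers, not figures Apple has published; the point is only that requiring many independent matches makes an innocent account vanishingly unlikely to be flagged.

```swift
import Foundation

// Back-of-the-envelope illustration (not Apple's analysis): if each of n photos
// independently produces a false match with probability p, the chance an
// innocent account reaches a reporting threshold t is the binomial upper tail.
func falseFlagProbability(photos n: Int, perPhotoRate p: Double, threshold t: Int) -> Double {
    func logChoose(_ n: Int, _ k: Int) -> Double {
        lgamma(Double(n) + 1) - lgamma(Double(k) + 1) - lgamma(Double(n - k) + 1)
    }
    var tail = 0.0
    for k in t...n {
        let term = exp(logChoose(n, k) + Double(k) * log(p) + Double(n - k) * log(1 - p))
        tail += term
        if term < tail * 1e-15 { break }   // later terms no longer matter
    }
    return tail
}

// Hypothetical numbers: a 20,000-photo library, a one-in-a-million per-photo
// false match rate, and 30 matches required before any human review.
print(falseFlagProbability(photos: 20_000, perPhotoRate: 1e-6, threshold: 30))
```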

Does it matter in Europe?

Despite all the fuss, the CSAM scanning proposals were only set to take place in the US. However, it is hard to imagine that, once the system had been rolled out Stateside, it would not have been deployed elsewhere.

No surprise then that the backlash beyond the US began quickly. In the UK, Heather Burns, Policy Manager at the Open Rights Group, told MacFormat:

“The threat of Apple’s CSAM scanning system opening the gates to the scanning and monitoring of our private conversations, for subjective purposes, is not theoretical. The UK’s upcoming Online Safety Bill outwardly aims to remove encryption from our private messages, and to oblige service providers to detect, intercept, and remove a range of both illegal and legal content and behaviour from them, under the threats of service restrictions, penalties, and even criminal charges for company employees if they fail to do so. So it is a matter of not if, but when the UK government will order Apple to expand its scanning system from CSAM to our private behaviours and personal speech, and Apple will have no choice but to comply.”

German journalists also raised concerns, saying that the moves were a “violation of the freedom of the press.” German politicians followed up shortly after, writing to Apple CEO Tim Cook. The country’s Digital Agenda committee chairman Manuel Höferlin pulled no punches in the letter, calling the moves the “biggest breach of the dam for the confidentiality of communication that we have seen since the invention of the internet.”

What about Messages?

New child protection measures were also announced for Messages in iOS 15, iPadOS 15, watchOS 8, and macOS Monterey. Called Communication Safety in Messages, they applied only to iCloud Family accounts. If a child had received a sexually explicit image, the photo would have been blurred and the child would have been sent a warning, accompanied by resources and reassurance that it was okay not to view the photo. The system also allowed parents of children aged 12 and under to be sent a message if the child viewed the image, and to be warned if a child attempted to send explicit photos.

Although separate to CSAM scanning, the new protections would also have used on-device machine learning to analyse image attachments and determine whether a photo was explicit. However, at the time of writing, this programme is also now on pause.
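
As a rough picture of the client-side flow just described, the sketch below uses entirely hypothetical types and names – Apple exposes no such API – with a stand-in closure in place of the on-device classifier.

```swift
import Foundation

// Hypothetical sketch of Communication Safety in Messages, as described above.
// `isLikelyExplicit` stands in for the on-device machine-learning classifier;
// none of these types are real Apple APIs.
enum AttachmentPresentation {
    case showNormally
    case blurredWithWarning      // resources and reassurance shown alongside
}

func present(imageData: Data,
             childAge: Int,
             childChoseToView: Bool,
             isLikelyExplicit: (Data) -> Bool,
             notifyParents: () -> Void) -> AttachmentPresentation {
    guard isLikelyExplicit(imageData) else { return .showNormally }

    // For children aged 12 and under, parents can be told if the child
    // goes ahead and views the image anyway.
    if childChoseToView && childAge <= 12 {
        notifyParents()
    }
    return .blurredWithWarning
}
```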

New Communication Safety features in Messages provide extra protection on children’s accounts. If a child is sent a sexually explicit image, it is greyed out and must be clicked to display. If clicked, Messages will warn children that the image could be upsetting or inappropriate.
Updates to Siri in iOS 15, iPadOS 15, watchOS 8 and macOS Monterey will allow users to file reports about CSAM.
For children aged up to 12, if the image is displayed then their parents will be notified.
Search is also updated, offering support and resources to anyone attempting to search for anything CSAM related.
