MACFORMAT INVESTIGATES
Apple’s controversial child protection plans
Apple was mired in controversy when, in August, it announced plans to scan for collections of known Child Sexual Abuse Material (CSAM) on iCloud Photos, along with other child safety measures in Messages. Yet weeks later, it changed tack, pausing both programmes to “take additional time to collect input and make improvements before releasing these critically important child safety features.”
Announcing both measures at the same time certainly caused confusion, as the two were often conflated. But it was the CSAM scanning that provoked the biggest backlash from privacy advocates, with the American digital rights organisation the Electronic Frontier Foundation calling it a “backdoor to your private life.”
What was Apple doing?
The important thing to note was that CSAM detection for iCloud Photos was intended to look only for matches to known CSAM images. The reference images were to have been acquired and validated as CSAM by at least two child safety organisations, meaning innocent things – a parent taking a funny picture of their baby in the bath, say – would not have been picked up.
The database of CSAM images itself was never going to be downloaded to a user’s iPhone. Instead, a database of cryptographic hashes derived from those images was. Apple called this hashing technology NeuralHash.
The aim was to check whether on-device images matched known CSAM. If a collection of known CSAM images was found, Apple would have been alerted. When that happened, the company would have conducted a human review – an actual person would have checked that no error had been made. If a match was confirmed, Apple would then have filed a report with the National Center for Missing & Exploited Children (NCMEC), an American non-profit.
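The match-then-threshold flow described above can be caricatured in a few lines of Python. To be clear, this is a deliberately simplified sketch, not Apple’s actual system: SHA-256 stands in for NeuralHash (a perceptual hash that matches visually similar images), the database contents, threshold value and function names are all invented for illustration, and Apple’s real design used cryptographic blinding so a device never learned whether any individual image had matched.

```python
import hashlib

# Illustrative sketch only -- NOT Apple's NeuralHash or its private
# set intersection protocol. SHA-256 stands in for a perceptual hash,
# and every name and value below is invented for this example.

KNOWN_HASHES = {
    hashlib.sha256(b"known-image-1").hexdigest(),
    hashlib.sha256(b"known-image-2").hexdigest(),
}

REPORT_THRESHOLD = 2  # a single match alone never triggers review


def count_matches(image_blobs):
    """Count how many uploaded images hash to a known database entry."""
    return sum(
        hashlib.sha256(blob).hexdigest() in KNOWN_HASHES
        for blob in image_blobs
    )


def should_flag_for_review(image_blobs):
    """Only a *collection* of matches crosses the reporting threshold."""
    return count_matches(image_blobs) >= REPORT_THRESHOLD
```

The point of the threshold is visible even in this toy version: one matching image among a user’s uploads stays below `REPORT_THRESHOLD` and is never surfaced, while a collection of matches would have gone to human review.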
In a support document, Apple said “in a case where the system identifies photos that do not match known CSAM images, the account would not be disabled, and no report would be filed to NCMEC.” The company also said that it would not add to the database of known CSAM hashes, but that it was “obligated to report any instances we learn of to the appropriate authorities.”
This was only going to happen to photos being uploaded to iCloud Photos – not the private on-device iPhone photo library, nor anywhere else you might store images on a device. Turning off iCloud Photos would have deactivated the process entirely.
Why the controversy?
One might think that stopping the sharing of child abuse images is a worthy aim. And it obviously is. The row was about whether the end justified the means, and whether there was a slippery slope towards Apple scanning for other material, illegal or otherwise. Furthermore, for many critics the very idea of Apple, or any tech company, scanning images in any way was, ultimately, an invasion of privacy.
While other tech companies have had similar systems for a while, Apple’s plans were particularly controversial. In recent years the company has ramped up its rhetoric around privacy, not least with its “Privacy. That’s iPhone.” advertising campaign.
How does Apple justify it?
Apple has long insisted that on-device processing preserves user privacy better than server-side processing. Erik Neuenschwander, head of Privacy at Apple, told TechCrunch that the on-device system being introduced was “really the alternative to where users’ libraries have to be processed on a server that is less private.
“The thing that we can say with this system is that it leaves privacy completely undisturbed for every other user who’s not into this illegal behaviour,” he added.
Neuenschwander also explained why there was a threshold for Apple issuing a report – the system was supposed to be triggered by a collection of known CSAM, not an individual image. Surely, though, having just one such image is one too many? “We want to ensure that the reports that we make are high-value and actionable, and one of the notions of all systems is that there’s some uncertainty built in to whether or not that image matched,” he argued. “The threshold allows us to reach that point where we expect a false reporting rate of one in 1 trillion accounts per year.” Apple’s privacy chief also insisted that the structure of the system meant it would be almost impossible for law enforcement or other agencies to demand Apple scan for material beyond known illegal CSAM.
Does it matter in Europe?
Despite all the fuss, the CSAM scanning proposals were only set to take place in the US. However, it is hard to imagine that, once the system was rolled out Stateside, it would not have been deployed elsewhere.
No surprise then that the backlash beyond the US began quickly. In the UK, Heather Burns, Policy Manager at the Open Rights Group, told MacFormat:
“The threat of Apple’s CSAM scanning system opening the gates to the scanning and monitoring of our private conversations, for subjective purposes, is not theoretical. The UK’s upcoming Online Safety Bill outwardly aims to remove encryption from our private messages, and to oblige service providers to detect, intercept, and remove a range of both illegal and legal content and behaviour from them, under the threats of service restrictions, penalties, and even criminal charges for company employees if they fail to do so. So it is a matter of not if, but when the UK government will order Apple to expand its scanning system from CSAM to our private behaviours and personal speech, and Apple will have no choice but to comply.”
German journalists also raised concerns, saying that the moves were a “violation of the freedom of the press.” German politicians followed up shortly after, writing to Apple CEO Tim Cook. The country’s Digital Agenda committee chairman Manuel Höferlin pulled no punches in the letter, calling the moves the “biggest breach of the dam for the confidentiality of communication that we have seen since the invention of the internet.”
What about Messages?
New child protection measures were also announced for Messages in iOS 15, iPadOS 15, watchOS 8 and macOS Monterey. Called Communication Safety in Messages, they applied only to iCloud Family accounts. If a child received a sexually explicit image, the photo would have been blurred and the child would have been shown a warning, accompanied by resources and assurances that it was okay not to view the photo. The system also allowed parents of children aged 12 and under to get a message if the child viewed the image, and to be warned if a child attempted to send explicit photos.
Although separate from CSAM scanning, the new protections would also have used on-device machine learning to analyse image attachments and determine whether a photo was sexually explicit. However, at the time of writing, this programme is also on pause.