Mac|Life

Privacy advocates protest Apple measures

Do Apple's new child protection measures go too far for user privacy?

- BY ALEX SUMMERSBY

APPLE HAS ANNOUNCED three new child protection features, coming later this year in updates to iOS 15, iPadOS 15, watchOS 8, and macOS Monterey. However, objectors say the measures breach user privacy and open a door for unregulated surveillance.

The first feature is aimed at children under the age of 13 in an iCloud Family group: The Messages app will be able to warn them when they receive or send sexually explicit photos, and, if they choose to go ahead and view or send, alert their parent. The feature uses on-device machine learning to analyze image attachments and determine if a photo is sexually explicit. Apple does not gain access to any of the messages. The feature requires opt-in, and applies only to accounts set up as families in iCloud.
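
In outline, the opt-in flow works something like the Swift sketch below. It is a simplified illustration only: the classifier call, type names, and notification helpers are hypothetical stand-ins rather than Apple's actual API. The key point is that both the analysis and any parental alert originate on the child's device.

```swift
import Foundation

// Hypothetical sketch of the communication-safety flow in Messages.
// `classifyExplicitness` stands in for Apple's on-device ML model;
// no image data or classification result ever leaves the device.

enum AttachmentVerdict {
    case safe
    case sexuallyExplicit
}

struct ChildAccount {
    let age: Int
    let isInICloudFamily: Bool
    let featureEnabledByParent: Bool   // the feature is strictly opt-in
}

// Stand-in for the on-device classifier (not a real Apple API).
func classifyExplicitness(_ imageData: Data) -> AttachmentVerdict {
    // Runs a local ML model; nothing is uploaded for analysis.
    return .safe
}

// Placeholder UI and notification hooks for the sketch.
func blurAndWarnChild() {}
func childChoosesToViewAnyway() -> Bool { false }
func notifyParentLocally() {}

func handleIncomingAttachment(_ imageData: Data, for account: ChildAccount) {
    // The check applies only to child accounts in an iCloud Family
    // whose parents have turned the feature on.
    guard account.featureEnabledByParent,
          account.isInICloudFamily,
          account.age < 13 else { return }

    if classifyExplicitness(imageData) == .sexuallyExplicit {
        blurAndWarnChild()              // the photo is hidden behind a warning
        if childChoosesToViewAnyway() {
            notifyParentLocally()       // parent alert, generated on the device
        }
    }
}
```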

Second, Siri and Search will provide expanded information and help when, for example, users ask how they can report Child Sexual Abuse Material (CSAM) or child exploitation. Siri and Search will also intervene if users try to search for CSAM-related topics, explaining that such topics are illegal and harmful, and providing resources for where to get help with this issue.

Third, to combat the spread of CSAM, iOS and iPadOS will gain the ability to detect known CSAM images stored in iCloud Photos. This takes place entirely on-device before the photos are uploaded, but if more than a certain (unspecified) threshold of known CSAM content is detected, the uploaded images become available to Apple, which will then manually review each report to confirm there is a match. If there is, Apple will disable the user account and send a report to the National Center for Missing and Exploited Children (NCMEC). Users can appeal if they feel their account has been flagged in error.
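
In very rough terms, the on-device matching plus server-side threshold can be sketched as follows. This Swift sketch is a toy illustration under stated assumptions: the SHA-256 stand-in, the placeholder hash database, and the example threshold value are not Apple's, which uses perceptual image hashing and has not disclosed its threshold.

```swift
import Foundation
import CryptoKit

// Toy illustration of threshold-based matching. The hash function, the
// hash database, and the threshold value are placeholders: Apple's real
// system uses perceptual image hashing and cryptographic matching, and
// the actual threshold has not been published.

// Hypothetical loader for the database of known CSAM image hashes.
func loadKnownHashDatabase() -> Set<String> { [] }

// Stand-in hash. A real perceptual hash matches visually similar images,
// not only byte-identical files as SHA-256 does.
func imageHash(_ imageData: Data) -> String {
    SHA256.hash(data: imageData).map { String(format: "%02x", $0) }.joined()
}

let knownCSAMHashes = loadKnownHashDatabase()
let reportingThreshold = 30   // placeholder value only

// On-device: checked before each photo is uploaded to iCloud Photos.
func matchesKnownCSAM(_ imageData: Data) -> Bool {
    knownCSAMHashes.contains(imageHash(imageData))
}

// Server side: an account is only escalated for manual review, and
// potentially reported to NCMEC, once the match count crosses the threshold.
func shouldEscalateForHumanReview(matchCount: Int) -> Bool {
    matchCount > reportingThreshold
}
```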

Privacy advocates have strongly objected to this measure. An open letter at appleprivacyletter.com says: “Because both [the iCloud Photos and Messages] checks are performed on the user’s device, they have the potential to bypass any end-to-end encryption that would otherwise safeguard the user’s privacy.” Once the technology is in place, it argues, there is no technical barrier — and no accountability — to limit it to CSAM content. The Electronic Frontier Foundation (EFF) warns, for example, that authoritarian regimes could require the technology to check for dissent.

As AppleInsider.com points out, however, all photo sharing services scan uploaded images. Google has scanned Gmail inboxes for known CSAM since 2008. Facebook has used Microsoft’s PhotoDNA system since 2011, and Twitter since 2013. Social media providers are increasingly being expected to scrutinize content posted on their platforms, and services such as Google routinely mine your data and utilize your searches for ad targeting.

Apple has emphasized its ongoing commitment to user privacy. The new system does not actually scan your photos, but performs secure on-device matching against known CSAM image hashes (digital fingerprints), which generates an encrypted safety voucher that is uploaded with the image. Even these safety vouchers cannot be read by Apple unless the iCloud Photos account crosses a threshold of known CSAM content.
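
Conceptually, each upload carries one of these vouchers, and nothing becomes readable until the account crosses the threshold. The Swift sketch below models that gate with a simple counter; the type names are hypothetical, and in Apple's described design the below-threshold unreadability is enforced cryptographically rather than by a server-side check like this one.

```swift
import Foundation

// Conceptual model of a "safety voucher": an encrypted record uploaded
// alongside each photo. The names and structure here are illustrative
// only, not Apple's actual format.

struct SafetyVoucher {
    let encryptedMatchRecord: Data   // unreadable by the server on its own
}

struct ICloudPhotosAccount {
    let threshold: Int               // Apple has not published the real value
    var vouchers: [SafetyVoucher] = []

    // In this toy model the gate is a simple count. In the real system the
    // guarantee is cryptographic: below the threshold the vouchers cannot
    // be decrypted at all, rather than merely being withheld by policy.
    var vouchersAreReadable: Bool {
        vouchers.count > threshold
    }
}

func receiveUpload(_ voucher: SafetyVoucher, into account: inout ICloudPhotosAccount) {
    account.vouchers.append(voucher)
    if account.vouchersAreReadable {
        // Only now could the matching images be decrypted and passed to a
        // human reviewer to confirm the match before any report is made.
    }
}
```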

You can find more, including detailed tech explainers, at www.apple.com/child-safety.

[Image] Apple’s diagram of how it will detect known CSAM images being uploaded to iCloud Photos.
[Image] Messages will alert children in an iCloud Family if they receive or send sexually explicit images.
[Image] If you ask Siri or Search for CSAM content, Siri or Search will intervene and offer help instead.
