Apple Will Start Scanning All Devices for Child Abuse Content - Experts Worry About Personal Privacy
- By Dawna M. Roberts
- Published: Aug 16, 2021
- Last Updated: Mar 18, 2022
Apple is very vocal about its commitment to users' privacy and security, so the news that it will begin scanning all devices for child abuse content raises concerns about how well that privacy will be protected.
Apple vs. Privacy
Apple has always been a staunch supporter of personal privacy, as evidenced by its amusing ads and the plethora of privacy and security settings on every device. However, on Tuesday, the tech giant announced that it would be launching a new feature across all its platforms to limit Child Sexual Abuse Material (CSAM) in the U.S.
Additionally, Apple says it plans to update Siri and Search so that attempts to search for CSAM-related material are met with a warning that "interest in this topic is harmful and problematic."
Apple is also adding a related feature called Communication Safety. In its public announcement, the company said, "Messages uses on-device machine learning to analyze image attachments and determine if a photo is sexually explicit." Parents must enable the feature through Family Sharing, and Apple will not have access to the actual messages.
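Conceptually, Communication Safety is a local classify-then-gate check: the photo is scored on the device and, for child accounts, blurred behind a warning if the score is high. The Python sketch below only illustrates that flow; the classifier, score threshold, and actions are hypothetical placeholders, not Apple's actual implementation.

```python
# A minimal, hypothetical sketch of the on-device check described above.
# The classifier, threshold, and actions are illustrative stand-ins only.

def explicit_score(image_bytes: bytes) -> float:
    """Stand-in for the on-device ML model; always returns 0.0 in this sketch."""
    return 0.0

def handle_incoming_image(image_bytes: bytes, is_child_account: bool) -> str:
    """Decide what Messages should do with an incoming photo attachment."""
    if not is_child_account:
        return "deliver"                   # the feature only applies to child accounts
    if explicit_score(image_bytes) > 0.9:  # hypothetical confidence threshold
        return "blur_and_warn"             # the child sees a warning before viewing
    return "deliver"
```

Because the scoring happens entirely on the device, the image itself never needs to be sent to Apple for this check.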
The changes will affect all versions of iOS, iPadOS, watchOS, and macOS.
How it Works
The system works by comparing on-device images with a massive database of known CSAM images. According to The Hacker News, the images in the database are "provided by the National Center for Missing and Exploited Children (NCMEC) and other child safety organizations before the photos are uploaded to the cloud."
The system pairs a perceptual hashing algorithm called "NeuralHash" with a cryptographic technique known as "private set intersection," and the matching only takes place if the user has iCloud Photos turned on.
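At its core, the matching step is a lookup of a photo's perceptual hash against a database of known hashes. The sketch below shows only that core idea under stated assumptions: `perceptual_hash()` is a placeholder for NeuralHash, the hash set is a made-up stand-in for the NCMEC-provided database, and the real system wraps the comparison in a private-set-intersection protocol so the device never learns the match result directly.

```python
# Illustrative hash-database matching. Everything here is a placeholder:
# SHA-256 is NOT a perceptual hash, and the hash set is invented for the example.
import hashlib

KNOWN_CSAM_HASHES = {"hash_of_known_image_1", "hash_of_known_image_2"}  # hypothetical

def perceptual_hash(image_bytes: bytes) -> str:
    """Placeholder: a real perceptual hash tolerates resizing/re-encoding; SHA-256 does not."""
    return hashlib.sha256(image_bytes).hexdigest()

def matches_known_csam(image_bytes: bytes, icloud_photos_enabled: bool) -> bool:
    """Compare a photo's hash against the known database before upload."""
    if not icloud_photos_enabled:
        return False  # per the article, matching only applies with iCloud Photos enabled
    return perceptual_hash(image_bytes) in KNOWN_CSAM_HASHES
```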
Additionally, Apple will employ "another cryptographic principle called threshold secret sharing that allows it to 'interpret' the contents if an iCloud Photos account crosses a threshold of known child abuse imagery, following which the content is manually reviewed to confirm there is a match, and if so, disable the user's account, report the material to NCMEC, and pass it on to law enforcement."
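The practical effect of the threshold is that a single match reveals nothing; only an account that accumulates enough matches is surfaced for human review. The sketch below illustrates that policy only. Real threshold secret sharing splits a decryption key into shares so match "vouchers" stay unreadable until enough exist, and the threshold value used here is a made-up assumption, not a figure from the article.

```python
# Conceptual sketch of the threshold behavior described above; the number is invented.

MATCH_THRESHOLD = 30  # hypothetical; the article does not state Apple's actual threshold

def account_action(matched_image_count: int) -> str:
    """Map an account's count of matched images to the next step in the pipeline."""
    if matched_image_count < MATCH_THRESHOLD:
        return "no_action"  # below the threshold, match data remains unreadable to Apple
    # Above the threshold: manual review, then account disable and NCMEC report if confirmed.
    return "manual_review"
```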
Privacy Concerns
Security experts are concerned about how this new feature will affect individual privacy, how it could lead to "mission creep," and whether it could implicate innocent people. A big question: who monitors this feature to ensure it works as intended and does not cross the line into something sinister?
The Hacker News puts it best: U.S. whistle-blower Edward Snowden tweeted that, despite the project's good intentions, what Apple is rolling out is "mass surveillance," while Johns Hopkins University cryptography professor and security expert Matthew Green said, "the problem is that encryption is a powerful tool that provides privacy, and you can't really have strong privacy while also surveilling every image anyone sends."
Most users are probably unaware that Apple, Google, Twitter, Facebook, Microsoft, and Dropbox already scan email images for potentially harmful material. Still, this new initiative of Apple's has security professionals worried that the practice will be taken too far.
The Hacker News reported that, "The New York Times, in a 2019 investigation, revealed that a record 45 million online photos and videos of children being sexually abused were reported in 2018, out of which Facebook Messenger accounted for nearly two-thirds, with Facebook as a whole responsible for 90% of the reports."
The Electronic Frontier Foundation (EFF) responded to Apple's announcement with one of their own saying, "All it would take to widen the narrow backdoor that Apple is building is an expansion of the machine learning parameters to look for additional types of content, or a tweak of the configuration flags to scan, not just children's, but anyone's accounts. That's not a slippery slope; that's a fully built system just waiting for external pressure to make the slightest change."