Reprinted from TidBITS by permission; reuse governed by Creative Commons license BY-NC-ND 3.0. TidBITS has offered years of thoughtful commentary on Apple and Internet topics. For free email subscriptions and access to the entire TidBITS archive, visit http://www.tidbits.com/

New CSAM Detection Details Emerge Following Craig Federighi Interview

Adam Engst

It isn't often that Apple admits it messed up, much less about an announcement, but there was a lot of mea culpa in this [1]interview of Apple software chief Craig Federighi by the Wall Street Journal's Joanna Stern. The interview followed the company's first explanations, a week earlier, of how it would work to curb the spread of known child sexual abuse material (CSAM) and, separately, reduce the potential for minors to be exposed to sexual images in Messages (see "[2]FAQ about Apple's Expanded Protections for Children," 7 August 2021).

Apple's missteps started with [3]a confusing page on its website that conflated the two unrelated protections for children. The company then released [4]a follow-up FAQ that failed to answer many frequently asked questions, held [5]media briefings that may (or may not?) have added new information, trotted out [6]lower-level executives for interviews, and ended up offering Federighi for the Wall Street Journal interview. After CEO Tim Cook, Federighi is easily Apple's second-most recognized executive, a sign of how hard the company has had to work to get its story straight. Following the interview's release, Apple posted yet another explanatory document, "[7]Security Threat Model Review of Apple's Child Safety Features." And Bloomberg reports that [8]Apple has warned staff to be ready for questions. Talk about a PR train wreck.

(Our apologies for the out-of-context nature of the rest of this article if you're coming in late. There's just too much to recap, so be sure to read Apple's previously published materials and our coverage linked above first.)

In the Wall Street Journal interview, Stern extracted substantially more detail from Federighi about what Apple is and isn't scanning and how CSAM will be recognized and reported. She also interspersed even clearer explanations of the two unrelated technologies Apple announced at the same time: CSAM Detection and Communication Safety in Messages.

The primary revelation from Federighi is that Apple built "multiple levels of auditability" into the CSAM detection system. He told Stern:

  We ship the same software in China with the same database as we ship in America, as we ship in Europe. If someone were to come to Apple [with a request to extend the scanning], Apple would say no. But let's say you aren't confident. You don't want to just rely on Apple saying no. You want to be sure that Apple couldn't get away with it if we said yes. Well, that was the bar we set for ourselves, in releasing this kind of system. There are multiple levels of auditability, and so we're making sure that you don't have to trust any one entity, or even any one country, as far as what images are part of this process.

This was the first time Apple had mentioned auditability within the CSAM detection system, much less multiple levels of it. Federighi also revealed that 30 images must be matched during upload to iCloud Photos before Apple can decrypt the matching images through the corresponding "safety vouchers." Most people probably also didn't realize that Apple ships the same version of each of its operating systems across every market.
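To make that 30-image threshold concrete before digging into the auditability details, here is a minimal Swift sketch of the gating logic alone. The SafetyVoucher type and function name are our own invention, not Apple's, and the real system relies on threshold secret sharing rather than a simple count, so Apple's servers mathematically cannot decrypt anything below the threshold.

    import Foundation

    // Illustrative names only; the real system uses threshold secret sharing,
    // so nothing below the threshold can be decrypted at all.
    struct SafetyVoucher {
        let isMatch: Bool          // did this image's hash match the on-device database?
        let encryptedPreview: Data // low-resolution derivative, unreadable below the threshold
    }

    let reviewThreshold = 30 // the initial threshold Federighi cited

    func vouchersEligibleForReview(_ vouchers: [SafetyVoucher]) -> [SafetyVoucher] {
        let matches = vouchers.filter { $0.isMatch }
        // Below the threshold, no previews can be decrypted and no reviewer is notified.
        guard matches.count >= reviewThreshold else { return [] }
        return matches
    }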
But that's all the video interview said about auditability. Apple followed the interview with the release of another document, the "[9]Security Threat Model Review of Apple's Child Safety Features," which is clearly what Federighi had in mind when he referred to multiple levels of auditability. It provides all sorts of new information about both the architecture and the auditability of the system.

While this latest document better explains the CSAM detection system in general, we suspect that Apple also added some details to the system in response to the firestorm of controversy. Would Apple otherwise have published the information necessary for users (or security researchers) to verify that the on-device database of CSAM hashes was intact? Would there have been any discussion of third-party auditing of the system on Apple's campus?

Regardless, here is the new information that struck us as most important:

* The on-device CSAM hash database is actually generated from the intersection of at least two databases of known illegal CSAM images, supplied by child safety organizations that are not under the jurisdiction of the same government. It initially appeared (and Apple's comments indicated) that Apple would use only the National Center for Missing and Exploited Children (NCMEC) database of hashes. Only CSAM image hashes that exist in both databases are included, so even if non-CSAM images were somehow added to the NCMEC database or to other, hitherto unknown CSAM databases, whether through error or coercion, it's implausible that multiple independent databases could all be manipulated in the same way. (A rough sketch of this intersection, and of the root hash check described below, follows the list.)

* Because Apple distributes the same version of each of its operating systems globally, and the encrypted CSAM hash database is bundled with the operating system rather than downloaded or updated over the Internet, Apple claims that security researchers will be able to inspect every release. We might speculate that Apple [10]dropped its lawsuit against security firm Corellium (whose software allows security experts to run virtualized iOS devices for research purposes) to add credibility to its claim that outside inspection is readily available.

* Apple says it will publish a Knowledge Base article containing a root hash of the encrypted CSAM hash database included with each version of every Apple operating system that supports the feature. Researchers (and average users) will be able to compare the root hash of the encrypted database present on their device to the expected root hash in the Knowledge Base article. Again, Apple suggests that security researchers will be able to verify this system, and we believe that's the case, given how Apple uses cryptography to protect its operating systems against modification.

* This database-hashing approach also enables third-party auditing. Apple says it can, in a secure on-campus environment, provide an auditor with technical proof that the intersection and blinding of the hashes were performed correctly. The suggestion is that participating child safety organizations might wish to perform such an audit.

* NeuralHash doesn't rely on machine-learning classification, the way Photos can identify pictures of cats, for instance. Instead, NeuralHash is purely an algorithm designed to validate that one image is the same as another, even if one of the images has been altered in certain ways, such as by resizing, cropping, or recoloring (a generic sketch of this kind of matching also follows the list). In Apple's tests against 100 million non-CSAM images, it encountered 3 false positives when compared against NCMEC's database. In a separate test of 500,000 adult pornography images matched against NCMEC's database, it found no false positives.

* As revealed in the interview, Apple's initial match threshold is expected to be 30 images. That means someone would have to upload at least that many images of known illegal CSAM to iCloud Photos before Apple's system would even know that any matches had taken place. At that point, human reviewers would be notified to review the low-resolution previews bundled with the matches, which can be decrypted once the threshold has been exceeded. That threshold should ensure that even an extremely unlikely false positive has no ill effect.

* Since Apple's reviewers aren't legally allowed to view the original databases of known CSAM, all they can do is confirm that the decrypted preview images appear to be CSAM, not that they match known CSAM. (One expects the images to be detailed enough to recognize human nudity without identifying individuals.) If a reviewer thinks the images are CSAM, Apple suspends the account and hands the entire matter off to NCMEC, which performs the actual comparison and can bring in law enforcement.

* Throughout the document, Apple repeatedly uses the phrase "is subject to code inspection by security researchers like all other iOS device-side security claims." That's not auditing per se, but it indicates that Apple knows security researchers will try to confirm its security claims and is encouraging them to dig into these particular areas. It would be a huge reputational and financial win for a researcher to identify a vulnerability in the CSAM detection system, so Apple is likely correct in suggesting that its operating system releases will be subject to even more scrutiny than before.
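For a rough sense of how the first and third items above might fit together in practice, here is a short Swift sketch that intersects two hypothetical hash databases and then digests the result for comparison against a published root hash. Everything in it, from the placeholder hash values to the flat serialization and the plain SHA-256 digest, is an assumption for illustration; Apple has not published the actual blinding scheme or root-hash construction.

    import CryptoKit
    import Foundation

    // 1. Only hashes present in BOTH child-safety databases ship on devices.
    let ncmecHashes: Set<String>     = ["hashA", "hashB", "hashC"] // placeholder values
    let secondOrgHashes: Set<String> = ["hashB", "hashC", "hashD"] // placeholder values
    let onDeviceDatabase = ncmecHashes.intersection(secondOrgHashes) // hashB and hashC

    // 2. Digest the bundled database and compare it to the root hash Apple says
    // it will publish in a Knowledge Base article for each OS release.
    let serialized = onDeviceDatabase.sorted().joined(separator: "\n").data(using: .utf8)!
    let computedRootHash = SHA256.hash(data: serialized)
        .map { String(format: "%02x", $0) }
        .joined()

    let publishedRootHash = "root hash copied from Apple's Knowledge Base article"
    print(computedRootHash == publishedRootHash ? "database intact" : "mismatch")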
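And since NeuralHash itself isn't public, here is a generic perceptual-hash comparison in Swift that illustrates the category of technique the document describes, not Apple's algorithm: hashes are compared by how many bits differ, so small alterations still register as a match while unrelated images land far apart. The hash values and tolerance below are made up for illustration.

    // Generic perceptual-hash matching, not Apple's NeuralHash.
    func hammingDistance(_ a: UInt64, _ b: UInt64) -> Int {
        (a ^ b).nonzeroBitCount
    }

    let hashOfKnownImage: UInt64     = 0b1011_0110_0010_1101
    let hashOfResizedCopy: UInt64    = 0b1011_0110_0010_1001 // one bit differs
    let hashOfUnrelatedPhoto: UInt64 = 0b0100_1001_1101_0010

    let matchTolerance = 2 // illustrative; a real system tunes this to keep false positives rare
    print(hammingDistance(hashOfKnownImage, hashOfResizedCopy) <= matchTolerance)    // true
    print(hammingDistance(hashOfKnownImage, hashOfUnrelatedPhoto) <= matchTolerance) // false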
I'm perplexed by just how thoroughly Apple botched this announcement. The initial materials raised too many questions that lacked satisfactory answers for both technical and non-technical users, even after multiple subsequent interviews and documents. Apple seems to have assumed that merely by saying anything at all, we'd pat the company on the back and thank it for being so transparent. After all, other cloud-based photo storage providers already scan all uploaded photos for CSAM without telling their users; Facebook filed over 20 million CSAM reports with NCMEC in 2020 alone. But Apple badly underestimated the extent to which an implied "Trust us" clashes with "What happens on your iPhone, stays on your iPhone."

It now appears that Apple is instead asking us to "Trust, but verify" ([11]a phrase with a fascinating history, originating as a paraphrase of Vladimir Lenin and Joseph Stalin before being popularized in English by Ronald Reagan). We'll see how security and privacy experts respond to these new revelations, but at least Apple now seems to be trying harder to share all the relevant details.

References

 1. https://www.wsj.com/video/series/joanna-stern-personal-technology/apples-software-chief-explains-misunderstood-iphone-child-protection-features-exclusive/573D76B3-5ACF-4C87-ACE1-E99CECEFA82C
 2. https://tidbits.com/2021/08/07/faq-about-apples-expanded-protections-for-children/
 3. https://www.apple.com/child-safety/
 4. https://www.apple.com/child-safety/pdf/Expanded_Protections_for_Children_Frequently_Asked_Questions.pdf
 5. https://www.imore.com/apple-confirms-csam-checks-will-be-carried-out-images-already-icloud
 6. https://techcrunch.com/2021/08/10/interview-apples-head-of-privacy-details-child-abuse-detection-and-messages-safety-features/
 7. https://www.apple.com/child-safety/pdf/Security_Threat_Model_Review_of_Apple_Child_Safety_Features.pdf
 8. https://www.bloomberg.com/news/articles/2021-08-13/apple-warns-staff-to-be-ready-for-questions-on-child-porn-issue
 9. https://www.apple.com/child-safety/pdf/Security_Threat_Model_Review_of_Apple_Child_Safety_Features.pdf
10. https://9to5mac.com/2021/08/10/apple-drops-copyright-lawsuit-against-corellium-for-selling-virtual-ios-devices/
11. https://en.wikipedia.org/wiki/Trust,_but_verify