https://www.eff.org/deeplinks/2021/08/apples-plan-think-different-about-encryption-opens-backdoor-your-private-life

Apple's Plan to "Think Different" About Encryption Opens a Backdoor to Your Private Life

DEEPLINKS BLOG
By India McKinney and Erica Portnoy
August 5, 2021

Apple has announced impending changes to its operating systems that include new "protections for children" features in iCloud and iMessage. If you've spent any time following the Crypto Wars, you know what this means: Apple is planning to build a backdoor into its data storage system and its messaging system.

Child exploitation is a serious problem, and Apple isn't the first tech company to bend its privacy-protective stance in an attempt to combat it. But that choice will come at a high price for overall user privacy. Apple can explain at length how its technical implementation will preserve privacy and security in its proposed backdoor, but at the end of the day, even a thoroughly documented, carefully thought-out, and narrowly scoped backdoor is still a backdoor.

To say that we are disappointed by Apple's plans is an understatement. Apple has historically been a champion of end-to-end encryption, for all of the same reasons that EFF has articulated time and time again. Apple's compromise on end-to-end encryption may appease government agencies in the U.S. and abroad, but it is a shocking about-face for users who have relied on the company's leadership in privacy and security.

There are two main features that the company is planning to install in every Apple device. One is a scanning feature that will scan all photos as they get uploaded into iCloud Photos to see if they match a photo in the database of known child sexual abuse material (CSAM) maintained by the National Center for Missing & Exploited Children (NCMEC).
The other feature scans all iMessage images sent or received by child accounts--that is, accounts designated as owned by a minor--for sexually explicit material, and if the child is young enough, notifies the parent when these images are sent or received. This feature can be turned on or off by parents.

When Apple releases these "client-side scanning" functionalities, users of iCloud Photos, child users of iMessage, and anyone who talks to a minor through iMessage will have to carefully consider their privacy and security priorities in light of the changes, and possibly be unable to safely use what, until this development, has been one of the preeminent encrypted messengers.

Apple Is Opening the Door to Broader Abuses

We've said it before, and we'll say it again now: it's impossible to build a client-side scanning system that can only be used for sexually explicit images sent or received by children. As a consequence, even a well-intentioned effort to build such a system will break key promises of the messenger's encryption itself and open the door to broader abuses.

All it would take to widen the narrow backdoor that Apple is building is an expansion of the machine learning parameters to look for additional types of content, or a tweak of the configuration flags to scan, not just children's, but anyone's accounts. That's not a slippery slope; that's a fully built system just waiting for external pressure to make the slightest change.

Take the example of India, where recently passed rules include dangerous requirements for platforms to identify the origins of messages and pre-screen content. New laws in Ethiopia requiring content takedowns of "misinformation" in 24 hours may apply to messaging services. And many other countries--often those with authoritarian governments--have passed similar laws. Apple's changes would enable such screening, takedown, and reporting in its end-to-end messaging. The abuse cases are easy to imagine: governments that outlaw homosexuality might require the classifier to be trained to restrict apparent LGBTQ+ content, or an authoritarian regime might demand the classifier be able to spot popular satirical images or protest flyers.

We've already seen this mission creep in action. One of the technologies originally built to scan and hash child sexual abuse imagery has been repurposed to create a database of "terrorist" content that companies can contribute to and access for the purpose of banning such content. The database, managed by the Global Internet Forum to Counter Terrorism (GIFCT), is troublingly without external oversight, despite calls from civil society. While it's therefore impossible to know whether the database has overreached, we do know that platforms regularly flag critical content as "terrorism," including documentation of violence and repression, counterspeech, art, and satire.

Image Scanning on iCloud Photos: A Decrease in Privacy

Apple's plan for scanning photos that get uploaded into iCloud Photos is similar in some ways to Microsoft's PhotoDNA. The main product difference is that Apple's scanning will happen on-device.
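As a rough mental model of what on-device matching against a hash database means, consider the minimal sketch below. This is an illustration only, not Apple's protocol: the function names, the hash function, and the threshold value are all hypothetical, and the sketch deliberately omits the privacy machinery (perceptual hashing of image features, a blinded database, and private set intersection) that the following paragraphs describe.

    # Hypothetical sketch of on-device matching against a distributed hash
    # database. NOT Apple's design: in the announced system the database is
    # blinded and matching uses private set intersection, so the device
    # itself cannot learn whether a match occurred.
    from hashlib import sha256

    MATCH_THRESHOLD = 30  # hypothetical; the real threshold is not public


    def image_hash(image_bytes: bytes) -> str:
        """Stand-in for a perceptual hash; a real system hashes image
        features rather than raw bytes, so near-duplicates still match."""
        return sha256(image_bytes).hexdigest()


    def scan_before_upload(photos: list[bytes], known_hashes: set[str]) -> bool:
        """Return True once enough photos match the database to cross the
        threshold, the point at which matches would go to human review."""
        matches = sum(1 for p in photos if image_hash(p) in known_hashes)
        return matches >= MATCH_THRESHOLD

Even in this toy form, the property that matters is visible: the scan runs on the user's device, against a database the user cannot inspect, before the photo ever reaches iCloud.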
The (unauditable) database of processed CSAM images will be distributed in the operating system (OS), the processed images transformed so that users cannot see what the image is, and matching done on those transformed images using private set intersection where the device will not know whether a match has been found. This means that when the features are rolled out, a version of the NCMEC CSAM database will be uploaded onto every single iPhone. The result of the matching will be sent up to Apple, but Apple can only tell that matches were found once a sufficient number of photos have matched a preset threshold.

Once a certain number of photos are detected, the photos in question will be sent to human reviewers within Apple, who determine whether the photos are in fact part of the CSAM database. If confirmed by the human reviewer, those photos will be sent to NCMEC, and the user's account disabled.

Again, the bottom line here is that whatever privacy and security aspects are in the technical details, all photos uploaded to iCloud will be scanned. Make no mistake: this is a decrease in privacy for all iCloud Photos users, not an improvement.

Currently, although Apple holds the keys to view photos stored in iCloud Photos, it does not scan these images. Civil liberties organizations have asked the company to remove its ability to do so. But Apple is choosing the opposite approach and giving itself more knowledge of users' content.

Machine Learning and Parental Notifications in iMessage: A Shift Away From Strong Encryption

Apple's second main new feature is two kinds of notifications based on scanning photos sent or received by iMessage. To implement these notifications, Apple will be rolling out an on-device machine learning classifier designed to detect "sexually explicit images." According to Apple, these features will be limited (at launch) to U.S. users under 18 who have been enrolled in a Family Account.

In these new processes, if an account held by a child under 13 wishes to send an image that the on-device machine learning classifier determines is a sexually explicit image, a notification will pop up, telling the under-13 child that their parent will be notified of this content. If the under-13 child still chooses to send the content, they have to accept that the "parent" will be notified, and the image will be irrevocably saved to the parental controls section of their phone for the parent to view later. For users between the ages of 13 and 17, a similar warning notification will pop up, though without the parental notification.

Similarly, if the under-13 child receives an image that iMessage deems to be "sexually explicit," a notification will pop up before they are allowed to view the photo, telling the under-13 child that their parent will be notified that they are receiving a sexually explicit image. Again, if the under-13 user accepts the image, the parent is notified and the image is saved to the phone. Users between 13 and 17 years old will similarly receive a warning notification, but a notification about this action will not be sent to their parent's device.

This means that if--for instance--a minor using an iPhone without these features turned on sends a photo to another minor who does have the features enabled, the sender does not receive a notification that iMessage considers their image to be "explicit" or that the recipient's parent will be notified. The recipient's parents will be informed of the content without the sender consenting to their involvement.
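Laid out as code, the rules described above amount to roughly the following decision logic. This is a minimal sketch of the flow as Apple has announced it, not Apple's implementation; the function names, the callbacks, and the way the classifier verdict arrives are all assumptions made for illustration.

    # Hypothetical sketch of the announced iMessage notification rules for an
    # image the on-device classifier has already flagged as "sexually
    # explicit." Not Apple's code; names and structure are illustrative.
    def handle_explicit_image(account_age: int, user_accepts: bool,
                              notify_parent, save_for_parent) -> bool:
        """Return True if the image is ultimately sent or viewed."""
        if account_age >= 18:
            return True            # the feature does not apply to adult accounts
        if account_age >= 13:
            return user_accepts    # 13-17: warning only, no parental notification
        # Under 13: declining stops the image without notifying the parent;
        # proceeding notifies the parent and saves the image to the
        # parental controls section of the device.
        if not user_accepts:
            return False
        notify_parent()
        save_for_parent()
        return True

The point is not the particular branching, but that the age cutoff and the classifier verdict are both just parameters evaluated on the device, and parameters can be changed.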
Additionally, once sent or received, the "sexually explicit image" cannot be deleted from the under-13 user's device.

Whether sending or receiving such content, the under-13 user has the option to decline without the parent being notified. Nevertheless, these notifications give the sense that Apple is watching over the user's shoulder--and in the case of under-13s, that's essentially what Apple has given parents the ability to do.

It is also important to note that Apple has chosen to use the notoriously difficult-to-audit technology of machine learning classifiers to determine what constitutes a sexually explicit image. We know from years of documentation and research that machine-learning technologies, used without human oversight, have a habit of wrongfully classifying content, including supposedly "sexually explicit" content. When blogging platform Tumblr instituted a filter for sexual content in 2018, it famously caught all sorts of other imagery in the net, including pictures of Pomeranian puppies, selfies of fully-clothed individuals, and more. Facebook's attempts to police nudity have resulted in the removal of pictures of famous statues such as Copenhagen's Little Mermaid. These filters have a history of chilling expression, and there's plenty of reason to believe that Apple's will do the same.

Since the detection of a "sexually explicit image" will use on-device machine learning to scan the contents of messages, Apple will no longer be able to honestly call iMessage "end-to-end encrypted." Apple and its proponents may argue that scanning before or after a message is encrypted or decrypted keeps the "end-to-end" promise intact, but that would be semantic maneuvering to cover up a tectonic shift in the company's stance toward strong encryption.

Whatever Apple Calls It, It's No Longer Secure Messaging

As a reminder, a secure messaging system is a system where no one but the user and their intended recipients can read the messages or otherwise analyze their contents to infer what they are talking about. Despite messages passing through a server, an end-to-end encrypted message will not allow the server to know the contents of a message. When that same server has a channel for revealing information about the contents of a significant portion of messages, that's not end-to-end encryption.

In this case, while Apple will never see the images sent or received by the user, it has still created the classifier that scans the images and provides the notifications to the parent. Therefore, it would now be possible for Apple to add new training data to the classifier sent to users' devices or send notifications to a wider audience, easily censoring and chilling speech.

But even without such expansions, this system will give parents who do not have the best interests of their children in mind one more way to monitor and control them, limiting the internet's potential for expanding the world of those whose lives would otherwise be restricted. And because family sharing plans may be organized by abusive partners, it's not a stretch to imagine using this feature as a form of stalkerware.

People have the right to communicate privately without backdoors or censorship, including when those people are minors. Apple should make the right decision: keep these backdoors off of users' devices.