[HN Gopher] ImageNet contains naturally occurring Apple NeuralHash collisions
       ___________________________________________________________________
        
       ImageNet contains naturally occurring Apple NeuralHash collisions
        
       Author : yeldarb
       Score  : 664 points
       Date   : 2021-08-19 16:45 UTC (6 hours ago)
        
 (HTM) web link (blog.roboflow.com)
 (TXT) w3m dump (blog.roboflow.com)
        
       | jb1991 wrote:
       | > Apple's NeuralHash perceptual hash function performs its job
       | better than I expected and the false-positive rate on pairs of
       | ImageNet images is plausibly similar to what Apple found between
       | their 100M test images and the unknown number of NCMEC CSAM
       | hashes.
        
       | biesnecker wrote:
       | > This is a false-positive rate of 2 in 2 trillion image pairs
       | (1,431,168^2). Assuming the NCMEC database has more than 20,000
       | images, this represents a slightly higher rate than Apple had
       | previously reported. But, assuming there are less than a million
       | images in the dataset, it's probably in the right ballpark.
       | 
       | The number of images in that database could well be far in excess
       | of a million. According to NCMEC [1], in 2020, 65.4 million files
       | were reported to them, and "[s]ince the program inception in
       | 2002, CVIP [child victim identification project] has reviewed
       | more than 330 million images and videos."
       | 
       | Of course many of those were duplicates, but it would be entirely
       | unsurprising if there were more than a million original files.
       | 
       | [1] https://www.missingkids.org/ourwork/impact
        
       | alfalfasprout wrote:
       | There's been a lot of focus on the likelihood of collisions and
       | whether someone could upload, e.g., an image with a matching hash
       | to your device to "set you up", etc. But what's extremely
       | concerning is that there is still no guarantee that the hash list
       | used can't be co-opted for another purpose (e.g. politically
       | sensitive content).
        
         | rovr138 wrote:
         | On top of that, what happens if a court/government orders them
         | to hand over all the current data about people with matches,
         | regardless of the 30-match threshold?
         | 
         | They can't say whether or not something is a true match, so
         | they have to go after the individuals. Is that enough evidence
         | for a warrant?
         | 
         | Someone in the court thinks it's true but can't prosecute?
         | _Oh, it got leaked._
         | 
         | --
         | 
         | Not every country has the same protections about innocent until
         | proven guilty. And even then, we've seen cases in the US where
         | someone has been held in jail indefinitely until they provide a
         | password:
         | 
         | - https://arstechnica.com/tech-policy/2016/04/child-porn-
         | suspe...
         | 
         | - https://nakedsecurity.sophos.com/2016/04/28/suspect-who-
         | wont...
        
           | int_19h wrote:
           | More broadly speaking, every part of this scheme that is
           | currently an arbitrary Apple decision (and not a
           | technological limitation) can easily become an arbitrary
           | government decision.
           | 
           | And yes, it's true that the governments could always mandate
           | such scanning before. The difference is that it'll be much
           | harder politically for Apple to push back against tweaks to
           | the scheme (such as lowering the bar for manual review /
           | notification of authorities) if they already have it rolled
           | out successfully and publicly argued that it's acceptable in
           | principle, as opposed to pushing back against any kind of
           | scanning at all.
           | 
           | Once you establish that something is okay _in principle_,
           | the specifics can be haggled over. I mean, just imagine this
           | conversation in a Congressional hearing:
           | 
           | "So, you only report if there are 30+ CSAM images found by
           | the scan. Does this mean that pedophiles with 20 CSAM images
           | on their phones are not reported?"
           | 
           | "Well... yes."
           | 
           | "And how did you decide that 30 is the appropriate number?
           | Why not 20, or 10? Do you maybe think that going after CSAM
           | is not _that_ important, after all?"
           | 
           | There's a very old joke along these lines that seems
           | particularly appropriate here:
           | 
           | "Churchill: Madam, would you sleep with me for five million
           | pounds?
           | 
           | Socialite: My goodness, Mr. Churchill... Well, I suppose...
           | we would have to discuss terms, of course...
           | 
           | Churchill: Would you sleep with me for five pounds?
           | 
           | Socialite: Mr. Churchill, what kind of woman do you think I
           | am?!
           | 
           | Churchill: Madam, we've already established that. Now we are
           | haggling about the price."
           | 
           | Apple has put itself in the position where, from now on,
           | they'll be haggling about the price - and they don't really
           | have much leverage there.
        
           | mthoms wrote:
           | As I understand it, Apple's servers know nothing until the
           | 30+ match threshold is reached. This is actually one way that
           | their system might be an improvement.
           | 
           | NB: I'm not in favour of this system - I'm only commenting on
           | this one specific scenario.
        
         | criticaltinker wrote:
         | The OP mentions that two countries have to agree to add a file
         | to the list, but your concern is definitely valid:
         | 
         |  _> Perhaps the most concerning part of the whole scheme is the
         | database itself. Since the original images are (understandably)
         | not available for inspection, it's not obvious how we can
         | trust that a rogue actor (like a foreign government) couldn't
         | add non-CSAM hashes to the list to root out human rights
         | advocates or political rivals. Apple has tried to mitigate this
         | by requiring two countries to agree to add a file to the list,
         | but the process for this seems opaque and ripe for abuse._
        
           | JoshTko wrote:
           | It's probably relatively trivial for the US or China to
           | coerce another country to agree to an image.
        
           | ulzeraj wrote:
           | What if those two countries are Poland and Hungary? These
           | two countries have been passing lots of laws to ostracize and
           | criminalize pro-LGBT content and are friendly to each other.
        
             | tehnub wrote:
             | Apple did mention this in their security threat model
             | document [0]:
             | 
             |  _Apple will also refuse all requests to instruct human
             | reviewers to file reports for anything other than CSAM
             | materials for accounts that exceed the match threshold._
             | 
             | [0]: https://www.apple.com/child-
             | safety/pdf/Security_Threat_Model...
        
               | nybble41 wrote:
               | That's too little, too late. By the time any human
               | reviewer can evaluate whether the images _should_ have
               | been in the database, the encryption on the backups has
               | already been circumvented and other parties have already
               | been given access to your private files.
        
               | shadowfiend wrote:
               | That is not how this works. Please read up on the
               | functioning of the system before chiming in with such
               | certainty on its behavior.
               | 
               | The only thing they will have gained access to are the
               | "derivatives" (presumably lower res versions) of the
               | matched photos, which if this is done to frame you is
               | strictly the fake CSAM.
        
               | nybble41 wrote:
               | I'm aware of how the system works. Those are still your
               | private files which were supposed to be encrypted and
               | which were revealed (either partially or fully, makes no
               | difference) to another party. The fact that this review
               | process is even possible means that the encryption on the
               | backups can't be trusted.
        
             | tester756 wrote:
             | > These two countries have been passing lots of laws to
             | criminalize pro-LGBT content
             | 
             | What do you mean
             | 
             | I live in one of those countries and I'd want to be aware
        
               | vineyardmike wrote:
               | https://googlethatforyou.com?q=poland%20anti-lgbt
               | 
               | but actually this is a good starting point:
               | https://en.wikipedia.org/wiki/LGBT_rights_in_Poland
        
             | WesolyKubeczek wrote:
             | Fortunately, Hungary and Poland, even combined, are a small
             | enough fish that Apple can just tell them to go pound sand
             | in some form or other. They can even ban iPhone sales;
             | people will just buy them in Czechia.
             | 
             | It's not like China, or India, who not only have huge
             | markets, but could easily hold a chunk of Apple's supply
             | chain hostage.
             | 
             | It's very easy to uphold human rights if it doesn't
             | actually cost you anything.
        
         | cat199 wrote:
         | this also punts the debate to the checking process, away from
         | the fact that there even is a process to start with...
        
           | version_five wrote:
           | Yes, I was about to say the same thing. Hash collisions are
           | an _extra_ concern about what Apple is doing, but even if
           | they were as collision-free as cryptographic hashes, that
           | would not make the invasion of privacy OK. The technical
           | discussion is something that Apple can easily parry, and it
           | is the wrong framing of this debate.
        
         | Someone wrote:
         | There wasn't any guarantee that Apple didn't have and use such
         | technology before they made this feature public, or that they
         | weren't already running it in the current version of iOS.
         | 
         | If you trusted Apple not to stealthily run such technology
         | before, the question is how much less (if any) you trust them
         | now.
         | 
         | If you didn't, I don't think anything changed.
        
         | bequanna wrote:
         | > there is still no guarantee that the hash list used can't be
         | co-opted for another purpose (e.g. politically sensitive
         | content).
         | 
         | That isn't a bug, it is a feature and will be the main use of
         | this functionality.
         | 
         | The "preventing child pornography" reasoning was specifically
         | chosen so that Apple could openly coordinate with governments
         | to violate your privacy while avoiding criticism.
        
       | 28194608 wrote:
       | You guys are helping Apple not to fuck up their implementation by
       | finding bugs. It would be great if these collision posts happened
       | after they fully rolled out the update.
        
         | foepys wrote:
         | Once the code has landed in iOS and is activated, it is too
         | late.
        
       | hyperbovine wrote:
       | Err, most of these are not naturally occurring pairs since in
       | each case the images differ by a human manipulation (resolution
       | reduction, drawing an arrow, changing the watermark, changing the
       | aspect ratio)---which I'm guessing is viewed as a feature not a
       | bug by the designers of this system. The axe and the nematode
       | come closest, and even then, they are low-res and visually
       | similar. What would be far more concerning is a hash collision
       | between a picture of a piano and a picture of an elephant, but
       | nothing like that is happening here.
        
       | criticaltinker wrote:
       |  _> In order to test things, I decided to search the publicly
       | available ImageNet dataset for collisions between semantically
       | different images. I generated NeuralHashes for all 1.43 million
       | images and searched for organic collisions. By taking advantage
       | of the birthday paradox, and a collision search algorithm that
       | let me search in n(log n) time instead of the naive n^2, I was
       | able to compare the NeuralHashes of over 2 trillion image pairs
       | in just a few hours._
       | 
       |  _> This is a false-positive rate of 2 in 2 trillion image pairs
       | (1,431,168^2). Assuming the NCMEC database has more than 20,000
       | images, this represents a slightly higher rate than Apple had
       | previously reported. But, assuming there are less than a million
       | images in the dataset, it's probably in the right ballpark._
       | 
       | It's great to see the ingenuity and attention this whole debacle
       | is receiving from the community. Maybe it will lead to advances
       | in perceptual hashing (and also advances in consumer awareness of
       | tech related privacy issues).
        
         | SilasX wrote:
         | >2 in 2 trillion image pairs
         | 
         | Reporting the collision rate per image _pair_ feels misleading.
         | What you really want to know is the number of false positives
         | per image in the relevant set, not per image pair, as that's
         | the figure that indicates how frequently you'll hit a false
         | positive.
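         | 
         | As a back-of-the-envelope conversion (a sketch; the
         | million-image database size is only the article's assumed
         | ballpark, not a known figure):
         | 
         |     n = 1_431_168          # ImageNet images tested
         |     p_pair = 2 / (n * n)   # ~9.8e-13 collisions per image pair
         |     db_size = 1_000_000    # assumed hash database size
         |     p_image = p_pair * db_size
         |     print(p_image)         # ~9.8e-07, i.e. ~1 in a million per image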
        
           | dhosek wrote:
           | In fact, I'd argue that the collision rate per image pair is
           | _overestimating_ the collision rate. It's the flip side of
           | the birthday paradox. We don't care that any two images have
           | the same hash; we care about any image having the same hash
           | as one in the set that we're testing against.
        
             | SilasX wrote:
             | >We don't care that any two images have the same hash,
             | 
             | Why not?
        
               | snet0 wrote:
               | Based on the pigeonhole principle alone, it will always
               | be the case that collisions exist. The size of the digest
               | is very likely smaller than the size of any given image.
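               | 
               | A quick magnitude check (a sketch, assuming the 96-bit
               | digest size mentioned elsewhere in this thread):
               | 
               |     # Pigeonhole: far more possible images than digests,
               |     # so collisions must exist somewhere.
               |     digests = 2 ** 96            # possible hash values
               |     images = 256 ** (100 * 100)  # 100x100 8-bit grayscale images
               |     print(images > digests)      # True, astronomically so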
        
         | bawolff wrote:
         | ImageNet is a very well-known data set. Are we sure Apple
         | didn't test on it when designing this algorithm?
        
           | slg wrote:
           | >This is a false-positive rate of 2 in 2 trillion image pairs
           | (1,431,168^2). Assuming the NCMEC database has more than
           | 20,000 images, this represents a slightly higher rate than
           | Apple had previously reported. But, assuming there are less
           | than a million images in the dataset, it's probably in the
           | right ballpark.
           | 
           | Apple reported a pretty similar collision rate, so maybe
           | they did.
        
             | pfranz wrote:
             | The only number I've heard from Apple is, "the likelihood
             | that the system would incorrectly identify any given
             | account is less than one in one trillion per year."[1]
             | Which I read as enough false hits to flag an account that
             | year (some interview said that threshold was around 30).
             | That depends on the average number of new photos uploaded
             | to iCloud, the size of the NCMEC database, the threshold
             | for flagging the account, and the error rate of the match.
             | Without knowing most of those numbers it's hard to gauge
             | how close it is.
             | 
             | https://www.apple.com/child-
             | safety/pdf/Expanded_Protections_...
        
             | yeldarb wrote:
             | Their reported collision rate was against their CSAM hash
             | database[1].
             | 
             | > In Apple's tests against 100 million non-CSAM images, it
             | encountered 3 false positives when compared against NCMEC's
             | database. In a separate test of 500,000 adult pornography
             | images matched against NCMEC's database, it found no false
             | positives.
             | 
             | [1] https://tidbits.com/2021/08/13/new-csam-detection-
             | details-em...
        
               | slg wrote:
               | I don't follow the point you are making here. The goal of
               | the algorithm is to match images so I would expect
               | similar collision rates regardless of what image was
               | matched against what set of hashes. The exact rate of
               | collision will obviously vary slightly.
        
               | javierbyte wrote:
               | I think the point is that if Apple did optimize the
               | algorithm using ImageNet, then they and we are seeing the
               | same best-case scenario.
        
           | robertoandred wrote:
           | Nah, the consensus online seems to be that Apple hires naive,
           | inept script kiddies and that any rando on GitHub can prove
           | without question that Apple's solution is flawed.
        
       | MilaM wrote:
       | I have a suggestion for Apple. Maybe they could use this
       | technique in a way that benefits all their customers by finally
       | solving the problem of duplicate image imports in Photos.app. I
       | have a couple of hundred duplicate images in my Library because
       | of problems during local import from my iPhone into Photos.
        
       | [deleted]
        
       | vhold wrote:
       | > _By taking advantage of the birthday paradox, and a collision
       | search algorithm that let me search in n(log n) time instead of
       | the naive n^2, I was able to compare the NeuralHashes of over 2
       | trillion image pairs in just a few hours._
       | 
       | I think you could just do "sort | uniq -c | sort -nr" on the
       | neuralhash values to find the most frequently occurring ones
       | pretty fast?
        
         | occamrazor wrote:
         | That's exactly the O(n log n) algorithm.
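         | 
         | A minimal Python equivalent of that pipeline (a sketch; it
         | assumes a hashes.txt of "filename hash" lines like the
         | article's dump):
         | 
         |     # Sort, then scan adjacent entries: equal hashes end up
         |     # next to each other. The sort is the O(n log n) step.
         |     with open('hashes.txt') as f:
         |         entries = sorted(tuple(reversed(line.split())) for line in f)
         |     for (h1, f1), (h2, f2) in zip(entries, entries[1:]):
         |         if h1 == h2:
         |             print(h1, f1, f2)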
        
       | [deleted]
        
       | cheald wrote:
       | The demonstrated ability to produce collisions at will should be
       | an instant show-stopper for this feature.
       | 
       | If a bad actor can send targets innocent images which hash to
       | CSAM, you essentially have an upgraded secret swatting mechanism
       | that people _will_ use to abuse others.
        
       | zug_zug wrote:
       | Here's one concern -- NeuralHash is finding false positives with
       | images that look very similar. What if an underage girl (or an
       | over-age girl) takes a selfie and the pose or background features
       | are similar enough to trigger a false collision? Then Apple goes
       | to a manual step where they are able to look at a low-res version
       | of somebody's private photos, as I understand it.
       | 
       | In my opinion that last step is not okay. I suppose the 30-image
       | threshold is a mitigating factor, but really, imo, Apple is
       | making their problem into _my_ problem. I want to purchase from a
       | company that offers me the peace of mind not to even have to
       | think about such concerns; price isn't an obstacle. If Apple
       | can't cater to my needs I hope another company will.
        
       | joelbondurant wrote:
       | Apple employees will do anything to keep their jobs looking at
       | kiddie porn.
        
       | matsemann wrote:
       | > _By taking advantage of the birthday paradox, and a collision
       | search algorithm that let me search in n(log n) time instead of
       | the naive n^2_
       | 
       | Someone got more details on that? How does the birthday paradox
       | come into play here?
        
         | throwaway287391 wrote:
         | Yeah, I'm unclear on why this even requires a clever algorithm.
         | I'd think that given the 1.4 million precomputed hashes, a
         | simple/naive Python function (<10 lines) that builds a dict
         | mapping hash -> (list of images with that hash) could surface
         | all the collisions in a few seconds. (It's a cool article
         | though! I'm glad someone tested this.)
         | 
         | Edit: I'm procrastinating so I tried it. It's 8 lines including
         | IO/parsing and runs in 2.8 seconds on my laptop:
         |     import collections
         |     hash_to_filenames = collections.defaultdict(list)
         |     with open('hashes.txt') as f:
         |         for line in f.readlines():
         |             filename, hash = line.strip().split()
         |             hash_to_filenames[hash].append(filename)
         |     dupes = {h: fs for h, fs in hash_to_filenames.items() if len(fs) > 1}
         |     print(f'Done. Found {len(dupes)} dupes.')
         | 
         | (hashes.txt is from the zip in the github repo, and it finds
         | 8865 dupes which looks _almost_ right from the article text
         | (8272 + 595 = 8867).)
        
         | aaaaaaaaaaab wrote:
         | https://en.m.wikipedia.org/wiki/Birthday_attack
        
         | [deleted]
        
         | sweezyjeezy wrote:
         | He means that even though the chance of two random images
         | colliding is ~1 in 2 trillion, once you get up to a set of
         | order sqrt(2 trillion) you have a good chance of having a
         | collision amongst all pairs.
        
           | throwaway287391 wrote:
           | That explains the "birthday paradox" part, what I'm unclear
           | on is the need for a "collision search algorithm" that isn't
           | just "build a hashmap" which should take roughly O(N) time.
           | (I suppose it could just be that, but I'm surprised it's even
           | mentioned in that case. In my uncle(?) comment I wrote an 8
           | line Python implementation that runs in 3 seconds on my
           | laptop.)
        
         | mvanaltvorst wrote:
         | The birthday paradox and the algorithm are not related; the
         | birthday paradox is simply the phenomenon that even though
         | there are several orders of magnitude more hashes than images
         | in ImageNet, it is still likely that collisions exist.
         | 
         | The algorithm sounds like a simple tree search algorithm. Let's
         | consider the naive case: traverse all images, and keep a list
         | of hashes you have already visited. For every new image, you
         | have to check against all n hashes you have previously
         | computed. Naively doing this check with a for loop would take
         | O(n) time. You have to do this traversal for every image,
         | therefore the total time complexity is O(n^2).
         | 
         | Fortunately, there is a faster way to check whether you have
         | found a hash before. Imagine sorting all the previous hashes
         | and storing them in an ordered list. A smarter algorithm would
         | check the middle of the list, and check whether this element is
         | higher or lower than the target hash. When your own hash is
         | higher than the middle hash, you know that if your hash is
         | contained within the list, it is contained in the top half. In
         | a single iteration you have halved the search space. By
         | repeating this over and over you can figure out if your item is
         | contained within this list in just log_2(n) steps. This is
         | called binary search. Some of the details are more intricate
         | (e.g. Red-Black trees [1], where you can skip the whole sorting
         | step) but this is the gist of it.
         | 
         | This all sounds way more complicated than it is in practice.
         | In practice you would simply `#include <set>` and all the tree
         | calculations are done behind the scenes. The algorithm
         | contained within the library is clever, but the program written
         | by the author is probably <10 lines of code.
         | 
         | [1]: https://en.wikipedia.org/wiki/Red-black_tree
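         | 
         | For illustration, here is the binary search idea sketched in
         | Python with the standard bisect module (toy hash values; a
         | real balanced tree would avoid the O(n) cost of list insert):
         | 
         |     import bisect
         | 
         |     seen = []  # kept sorted at all times
         | 
         |     def check_and_add(h):
         |         i = bisect.bisect_left(seen, h)  # O(log n) lookup
         |         hit = i < len(seen) and seen[i] == h
         |         seen.insert(i, h)                # keeps the list sorted
         |         return hit
         | 
         |     for h in ['a1', 'b2', 'a1', 'c3']:   # toy hashes
         |         if check_and_add(h):
         |             print('collision:', h)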
        
       | JimBlackwood wrote:
       | It is actually really cool to see that those small changes
       | generate the same hash. At least those parts are working well.
        
       | tehnub wrote:
       | > In order to test things, I decided to search the publicly
       | available ImageNet dataset for collisions between semantically
       | different images. I generated NeuralHashes for all 1.43 million
       | images and searched for organic collisions. By taking advantage
       | of the birthday paradox, and a collision search algorithm that
       | let me search in n(log n) time instead of the naive n^2, I was
       | able to compare the NeuralHashes of over 2 trillion image pairs
       | in just a few hours.
       | 
       | I don't know what the author means by "taking advantage of the
       | birthday paradox". If they're referring to the "birthday attack"
       | [0], I don't think it makes sense. The birthday attack is a
       | strategy that helps you find a collision without hashing every
       | single image, but he states that he already generated
       | NeuralHashes for all 1.43 million images.
       | 
       | Furthermore, isn't there a simple linear time algorithm to detect
       | collisions given that you already have all the hashes? Iterate
       | over your NeuralHashes and put them into a hash table where the
       | NeuralHash is the key, and the number of occurrences is the
       | value. Whenever you insert something into a bucket and there's
       | already something there, you have a neural hash collision.
       | 
       | [0]:
       | https://en.wikipedia.org/wiki/Birthday_problem#Probability_t...
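       | 
       | That linear-time idea in a couple of lines (a sketch; `hashes`
       | stands in for the list of 1.43M NeuralHashes, and this mirrors
       | the dict-based snippet posted elsewhere in the thread):
       | 
       |     from collections import Counter
       | 
       |     counts = Counter(hashes)  # hash value -> number of occurrences
       |     collisions = {h: c for h, c in counts.items() if c > 1}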
        
         | sweezyjeezy wrote:
         | By "taking advantage of the Birthday Paradox" - he means that
         | even though the chance of two random images colliding is ~1 in
         | 1 trillion, if you have a set of size ~sqrt(1 trillion) you have
         | a good chance of having a collision amongst all pairs.
        
           | tehnub wrote:
           | But did the birthday paradox inform his procedure in any way?
           | I guess he wouldn't have even attempted to do this at all if
           | he didn't think it was likely to find collisions.
        
         | tibbar wrote:
         | 1) the point of the birthday paradox is that even if two
         | elements of a set are unlikely to overlap, in a large set it is
         | much more likely than perhaps intuitive that some pair of
         | elements overlap. 2) I'm assuming he did something like putting
         | all the hashes in a list and sorting them, which is at least
         | better than looking at each pair. As you say, it's not optimal
         | and also doesn't seem particularly worth including in the
         | otherwise very interesting post.
        
       | kizer wrote:
       | Rehash everything using 512 bits or something.
        
         | mrtranscendence wrote:
         | Would that make a difference? Technically 96 bits is more than
         | enough to assign a unique hash to every image that ever
         | existed, many times over.
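         | 
         | The arithmetic backs that up: for an _ideal_ 96-bit hash, the
         | birthday-paradox expectation over ImageNet is (a quick sketch)
         | 
         |     import math
         | 
         |     n = 1_431_168                         # ImageNet images
         |     expected = math.comb(n, 2) / 2 ** 96  # expected colliding pairs
         |     print(expected)                       # ~1.3e-17, essentially zero
         | 
         | so the collisions found here come from the perceptual design
         | of the hash, not from the digest being too short; 512 bits
         | wouldn't change that.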
        
       | endisneigh wrote:
       | It doesn't matter if there are collisions if the two images don't
       | actually look the same. Do people honestly believe a single CSAM
       | flag from an "innocent" image is going to result in someone going
       | to prison in America?
       | 
       | PhotoDNA has existed for over a decade doing the same thing,
       | with no such instances that I have heard of.
       | 
       | If some corrupt government wants to get you they don't need this.
       | They can just unilaterally say you've done something bad without
       | evidence and imprison you. It happens all the time. It's even
       | happened in America. Just look up DNA exonerations - people have
       | had DNA on the scene that literally proves their innocence and
       | they're still locked up.
       | 
       | People should care about their governments being corrupt, not
       | necessarily about this.
        
         | stickfigure wrote:
         | Let's say I get you to click on a link. That link contains a
         | thumbnail gallery of CSAM. With the right CSS, you might not
         | even notice it, but it's in your browser's cache and on your
         | filesystem. Lots of pictures - more than enough for your phone
         | to snitch on you.
         | 
         |  _All because you clicked on a link._
         | 
         | Phishing attacks can now put you in prison, label you as a
         | pedophile and sex offender, and destroy your life.
        
           | endisneigh wrote:
           | This is already possible with PhotoDNA. When has it happened?
        
             | stickfigure wrote:
             | Your browser cache is not being synced to iCloud; Apple
             | doesn't see it. So no, it is not currently possible.
        
               | endisneigh wrote:
               | I'm not talking about Apple, I'm saying in general
               | technology has already been deployed to make what you're
               | describing possible. So where's the evidence of abuse?
        
               | stickfigure wrote:
               | No vendors are snooping your phone/computer browser
               | cache, so this attack vector does not yet exist. Apple is
               | building it.
        
               | zimpenfish wrote:
               | > No vendors are snooping your phone/computer browser
               | cache
               | 
               | a) That you know of
               | b) Apple won't be doing this either
               | 
               | > Apple is building it.
               | 
               | No, they aren't.
        
           | dylan604 wrote:
           | How does this fit within what Apple has stated will be the
           | operating procedure for this scanning?
        
             | stickfigure wrote:
             | I don't know. But if the answer is "we won't snoop your
             | browser cache", then people will quickly learn to store
             | offending material under browser caches. Then Apple will
             | have to start scanning browser caches. There is no policy
             | fix for this; pedophiles aren't stupid.
        
               | mrtranscendence wrote:
               | Absolutely no one involved -- no one who conceived of
               | this effort, no one who implemented it, and no one who
               | defends it -- was under the impression that you couldn't
               | get around this by not storing your images on iCloud. The
               | idea is that it will catch pedophiles who aren't careful
               | or tech-savvy enough, and such people exist, trust me. I
               | suspect pedophiles aren't especially smarter than
               | average, and most normal people aren't even going to be
               | aware something like this exists.
        
         | Dah00n wrote:
         | Lots of things start out like this, then later they start
         | looking for or blocking other things. Pirated movies, files
         | leaked from three letter agencies, a picture of the president
         | with a ** in his mouth, etc. Even if this is 100% bullet-proof,
         | it is still enough that Apple should be seen as privacy-
         | invading and leaking everything to the government (and others
         | later on), as it will be abused if implemented everywhere. This
         | isn't irrelevant because of other things happening that are
         | also bad. This is added on top of those broken systems, like
         | the ones in the US you mention.
        
         | Gibbon1 wrote:
         | Well there is this here
         | 
         | https://en.wikipedia.org/wiki/Paul_Reubens#2002_pornography_...
        
         | sennight wrote:
         | We are talking about how everyone who gave Apple money now has
         | a potential probable cause vector that they didn't before.
         | Everyone running the software is a suspect by default. Ask
         | black Americans how they feel about setting the bar low for
         | probable cause.
         | 
         | "Following the 2004 Madrid train bombings, fingerprints on a
         | bag containing detonating devices were found by Spanish
         | authorities. The Spanish National Police shared the
         | fingerprints with the FBI through Interpol. Twenty possible
         | matches for one of the fingerprints were found in the FBI
         | database and one of the possible matches was Brandon Mayfield.
         | His prints were in the FBI database as they were taken as part
         | of standard procedure when he joined the military."
         | 
         | "The FBI described the fingerprint match as '100% verified'."
         | 
         | https://en.wikipedia.org/wiki/Brandon_Mayfield
        
           | kmbfjr wrote:
           | Reading about incidents such as this has made me think
           | critically about all cloud services in the United States, and
           | the conclusion is simply not to use them.
           | 
           | Sure, the probability is lower than getting struck by
           | lightning. I certainly don't play in the rain and I won't be
           | using cloud services where I'm exposed to this kind of
           | nonsense with the FBI.
        
             | sennight wrote:
             | > Sure, the probability is lower than getting struck by
             | lightning.
             | 
             | I don't think anyone can actually know that, because I
             | don't think statistics are kept on how often these sorts of
             | dragnet programs result in civil liberty violations and
             | secret grand juries. That should be the more immediate
             | concern, because after that point you are depending on the
             | goodwill of prosecutors... which is a super bad idea.
        
           | dane-pgp wrote:
           | > '100% verified'
           | 
           | Just reading those words is rage-inducing, but I'm grateful
           | to have learnt this example of government lying. I feel like
           | it should become an expression that societies teach to their
           | children to warn them about abuses of power. Other mottoes
           | synonymous with government deception and corruption come to
           | mind, but at the risk of being too controversial I will share
           | only their initials and dates: "SAARH" (2013), "MA" (2003),
           | "IANAC" (1973), "NHDAEMZE" (1961), "AMF" (1940).
        
             | sennight wrote:
             | Because I delight in delivering bad news, I'll point out
             | that the real takeaway shouldn't be that federal LEOs
             | regularly lie (though, they do) - it is that they are
             | permitted to lie convincingly through handwavy technical
             | means. All these tools are designed to give them permission
             | to totally destroy your life. I'm aware of only two
             | gee-whiz CSI methods that are actually founded in science:
             | cryptography (this neural crap doesn't qualify) and DNA.
             | Unlike fingerprints and bitemark analysis, those two tools
             | were invented outside of law enforcement - instead of being
             | purpose built for prosecution. Anybody doubting that should
             | look into the history of the polygraph and its continued
             | use in the face of evidence demonstrating how useless it is
             | in the pursuit of truth... which begs the question: if they
             | aren't interested in the truth, what are they doing?
             | 
             | https://en.wikipedia.org/wiki/Polygraph
        
         | InTheArena wrote:
         | An honest question I have: were PhotoDNA results ever
         | validated? Is there a chance that collisions have been
         | occurring with no one checking whether actual CSAM was present?
        
         | WesolyKubeczek wrote:
         | You know, it's rather not okay to treat every smartphone user
         | out there as a potential criminal just because they happen to
         | have photos on their devices. At least in an alleged democracy
         | where there's this presumption of innocence thing.
         | 
         | Even Pegasus wasn't this rotten; at least it wasn't
         | indiscriminately installed onto everyone's phone.
         | 
         | But what can I say, there wasn't much uproar about the piracy
         | tax on blank CD-R(W) media back in the day, so why not have
         | that now. And eventually we'll go peak USSR, where everyone and
         | their dog is a suspect and whoever is arrested is an enemy of
         | the people. Yay, it's somehow reassuring to know I won't live
         | long enough to see it.
        
           | nonbirithm wrote:
           | And it's fine to treat everyone as a potential criminal when
           | we entrust our data to the same company's servers? Even if
           | on-device scanning makes surveillance easier than ever,
           | surveillance itself was already a significant possibility
           | before this point, with server-side scanning. Imagine how
           | many petabytes of user data already exist in the cloud.
        
             | short_sells_poo wrote:
             | So in your mind because bad thing X is already happening,
             | it's completely OK for bad thing XY to also start
             | happening?
        
         | Youden wrote:
         | > It doesn't matter if there are collisions if the two images
         | don't actually look the same.
         | 
         | Is that really true? My understanding is that the manual
         | reviewers at Apple only see some kind of low-resolution proxy,
         | not the full-resolution image. I'd also be shocked if the human
         | reviewers were shown the original, actual CP image, to
         | compare to.
         | 
         | Given that, it's not necessary to produce an actual visual
         | match, it's just necessary to produce an image that when
         | scaled-down looks like CSAM (e.g. take an actual photo of a kid
         | at the beach and photoshop in some skin-coloured swimwear with
         | creases in the right places).
         | 
         | > Do people honestly believe a single CSAM flag from an
         | "innocent" image is going to result in someone going to prison
         | in America?
         | 
         | The attack I'd worry about here is similar to swatting. Someone
         | who doesn't like me sends a bunch of images like the ones I
         | described above (not just one), they end up synced to iCloud
         | (because Apple wants _everything_ synced to iCloud) and Apple
         | reports me to the authorities, who end up knocking at my door
         | and arresting me.
         | 
         | Even though I'm innocent, I'll probably have most of my
         | computers confiscated for a while and spend a few days locked
         | up.
         | 
         | > PhotoDNA has existed for over a decade doing the same thing
         | with no instances that I have heard of.
         | 
         | PhotoDNA's algorithms and hashes aren't public, so it's not
         | clear how an attacker would exploit PhotoDNA in the way that
         | people are afraid will be done for Apple.
         | 
         | PhotoDNA also isn't, as far as I know, part of a product that
         | aims to create unprotected backups of the phones of nearly a
         | billion users. Apple really wants you to upload your whole
         | phone to iCloud. The only comparable alternative is Google's
         | Android backup but Google does the right thing and end-to-end
         | encrypts that.
        
           | robertoandred wrote:
           | Why would you save all of the almost-CSAM pics the attacker
           | sends you to your photo library?
        
             | Youden wrote:
             | Manually, perhaps because the attacker crafts the messages
             | to make that desirable.
             | 
             | But a bigger concern is automatically. WhatsApp for example
             | can automatically save all received photos to your Camera
             | Roll, which is of course automatically backed up to iCloud
             | for many people. So an attacker could potentially just
             | send you a bundle of 40 images or however many, and your
             | phone automatically sends them to Apple.
        
         | minitoar wrote:
         | The damage is already done by the time it gets to the point of
         | devices being confiscated.
        
           | mrtranscendence wrote:
           | If you don't actually have CSAM then the person at Apple who
           | visually audits the collision set will not send your info to
           | law enforcement, and nothing will be confiscated. Apple
           | doesn't just take any instance of a collision and send it to
           | the FBI.
        
             | short_sells_poo wrote:
             | Right, because humans are infallible and never abuse their
             | position either.
             | 
             | And we are not talking about some underpaid contractors
             | making life-altering decisions.
             | 
             | And we are also not talking about a system where just a
             | false accusation can and will destroy one's life
             | irreparably.
        
           | endisneigh wrote:
           | Can you point to a single instance of that happening where it
           | was due to a false positive?
        
             | bawolff wrote:
             | In a proposed system that has yet to be deployed?
        
               | endisneigh wrote:
               | This has already existed in PhotoDNA since 2008. This is
               | not new
        
               | bawolff wrote:
               | But that's deployed in a very different way which makes
               | the concerns being discussed much less likely to happen.
               | 
               | Specifically, the person doing the scanning already has
               | access to the photos and can double-check the results
               | without having to seize the device, a rather public
               | process.
        
               | toby- wrote:
               | Examples of on-device scanning via PhotoDNA?
        
               | robertoandred wrote:
               | People's photo libraries are scanned. The result is the
               | same.
        
             | Unklejoe wrote:
             | If we reduce it down to "someone being accused of a crime
             | but later being found innocent", then there are several,
             | and when it comes to sex crimes, the accusation alone is
             | enough to ruin someone. People don't care about minor
             | details such as the fact that the person was later found
             | innocent. To them, it's the same as getting off on a
             | technicality.
        
               | endisneigh wrote:
               | Ok yes - now then, PhotoDNA has existed since 2008. When
               | has what you've described ever happened?
        
           | rootusrootus wrote:
           | By the time the FBI comes knocking for your devices, they
           | have a lot of evidence, not a list of hash collisions.
        
             | headShrinker wrote:
             | > By the time the FBI comes knocking for your devices, they
             | have a lot of evidence, not a list of hash collisions.
             | 
             | Not necessarily. The FBI knocks on your door when they have
             | convinced a warrant-signing judge that there is probable
             | cause, or that additional information can be
             | collected to build a case in which the defendant will
             | 'plead' or make a deal for sentence reduction. 90% of
             | defendants make a deal and never go to trial because by
             | that time the stakes are so high a guilty verdict includes
             | the harshest possible sentence.
             | 
             | The FBI only needs to convince you they can win a case, or
             | convince the judge that there might be fruit behind a
             | locked door; they don't need direct evidence. It's really
             | up to the judge.
        
             | stjohnswarts wrote:
             | I doubt it. All they need is this stuff and they can get a
             | warrant to rummage through your stuff and take all your
             | computers, usb drives, etc., and also put your name on a
             | watch list and your permanent record. Much like newspapers
             | retracting mistakes if the story doesn't pan out, it goes
             | on the back page. Plenty enough to wreck your life.
        
         | zug_zug wrote:
         | And what if the FBI demands a list of all Apple users who have
         | matched even 1 CSAM photo for their own private watchlist?
        
           | mrtranscendence wrote:
           | And what if the FBI demands that Apple leadership genuflect
           | toward an oil painting of J Edgar Hoover? The FBI can demand
           | all sorts of things. In this case, Apple isn't tracking
           | collisions as small as 1 photo.
        
             | [deleted]
        
             | [deleted]
        
           | theluketaylor wrote:
           | There are a lot of really valid criticisms of Apple's plan
           | here, but Apple has gone out of their way to prevent that
           | exact case. Apple is using secret splitting to make sure
           | they cannot decode the CSAM vouchers until the threshold is
           | reached. Devices also produce some synthetic matches to
           | prevent Apple (or anyone else) from inferring a
           | pre-threshold count based on the number of vouchers.
           | 
           | https://www.apple.com/child-
           | safety/pdf/CSAM_Detection_Techni...
           | 
           | See the sections "Threshold Secret Sharing" and "Synthetic
           | Match Vouchers".
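           | 
           | For reference, a toy sketch of the threshold secret sharing
           | primitive itself (Shamir's scheme; purely illustrative, not
           | Apple's actual construction):
           | 
           |     import random
           | 
           |     P = 2 ** 127 - 1  # a Mersenne prime; all math is mod P
           | 
           |     def make_shares(secret, t, n):
           |         # Random degree-(t-1) polynomial whose constant term
           |         # is the secret; shares are points on the curve.
           |         coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
           |         return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
           |                 for x in range(1, n + 1)]
           | 
           |     def reconstruct(shares):
           |         # Lagrange interpolation at x = 0 recovers the
           |         # secret; fewer than t shares reveal nothing.
           |         secret = 0
           |         for i, (xi, yi) in enumerate(shares):
           |             num = den = 1
           |             for j, (xj, _) in enumerate(shares):
           |                 if i != j:
           |                     num = num * -xj % P
           |                     den = den * (xi - xj) % P
           |             secret = (secret + yi * num * pow(den, -1, P)) % P
           |         return secret
           | 
           |     shares = make_shares(secret=42, t=30, n=100)  # 30-share threshold
           |     print(reconstruct(shares[:30]))               # prints 42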
        
             | zug_zug wrote:
             | Thanks. You're right. Very sophisticated system. Still find
             | it morally repugnant, but technologically it is fairly
             | thorough.
        
         | bawolff wrote:
         | If the two images looked the same, then the expected behaviour
         | is a collision, so if collisions matter at all, it would only
         | be for pictures that look different.
        
           | endisneigh wrote:
           | They don't matter because if two images don't look the same,
           | but collide - then human processes will absolve you. This
           | isn't some AI that sends you straight to prison lol
        
             | yusefnapora wrote:
             | Imagine this scenario.
             | 
             | - You receive some naughty (legal!) images of a naked young
             | adult while flirting online and save them to your camera
             | roll.
             | 
             | - These images have been made to collide [1] with "well
             | known" CSAM images obtained from the dark underbelly of the
             | internet, on the assumption that their hashes will be
             | contained in the encrypted database.
             | 
             | - Apple's manual review kicks in because you have enough
             | such images to trigger the threshold.
             | 
             | - The human reviewer sees a bunch of thumbnails of naked
             | people whose age is indeterminate but looks to be on the
             | young side.
             | 
             | - Your case is forwarded to the FBI, who now have cause to
             | turn your life upside down.
             | 
             | This scenario seems entirely plausible to me, given the
             | published information about the system and the ability to
             | generate collisions that look like an arbitrary input
             | image, which is clearly possible as demonstrated in the
             | linked thread. The fact that most of us are unlikely to be
             | targets of this kind of attack is little comfort to those
             | that may be.
             | 
             | [1]: https://github.com/AsuharietYgvar/AppleNeuralHash2ONNX
             | /issue...
        
               | nonbirithm wrote:
               | The problem is that Apple cannot actually see the image
               | at its original resolution because of the supposed
               | liability of harboring CSAM, and being able to retrieve
               | the original image would mean being able to know the
               | complete contents of the rest of the user's data. To me,
               | it sounds like Apple is trying to strike a compromise
               | between having as little knowledge of data on the server
               | as possible and remaining in compliance with the law,
               | but that compromise is impractical to execute.
               | 
               | The law states that if you find an image that's believed
               | to be CSAM, you must report it to the authorities. If
               | Apple's model detects CSAM on the device, sending the
               | whole image to the moderation system for false positives
               | carries the risk of breaking the law, because the images
               | are likely going to be known CSAM, since that's what the
               | database is intended to detect, so they'd be accused of
               | storing it knowingly. Perhaps that's why the thumbnail
               | system is needed.
               | 
               | So why wouldn't Apple store the files unencrypted and
               | scan them when they arrive? That would mean Apple would
               | remove themselves from liability by preventing themselves
               | from gaining knowledge of which images are CSAM or not
               | until they're scanned for, but could still send the
               | original copy of the image with a far lower chance of
               | false positives when something _is_ found. That knowledge
               | or the lack of it about the nature of the image is the
               | crucial factor, and once they believe an image is CSAM,
               | they cannot ignore it or stop believing it's CSAM later.
               | 
               | That question may hold the answer to why Apple attempted
               | to innovate in how it scans for child abuse material,
               | perhaps to a fault.
        
               | ClumsyPilot wrote:
               | "- The human reviewer sees a bunch of thumbnails of naked
               | people whose age is indeterminate but looks to be on the
               | young side."
               | 
               | Given how most companies have barely-paid, clueless, and
               | PTSD-suffering human reviewers, a litany of mistakes is
               | to be expected.
               | 
               | We should expect all the coherence of Twitter's nipple
               | policy, except now it puts you in jail or at least ruins
               | your life with legal fees.
        
               | endisneigh wrote:
               | Your scenario makes no sense. You can just skip all of
               | the steps and go straight to "the FBI, who now have
               | cause to turn your life upside down." If the evidence
               | doesn't matter, they could've just reported you to the
               | FBI for having CP, regardless of whether you have it or
               | not, and your point remains the same.
               | 
               | Not to mention your scenario requires someone you trust
               | trying to "get you." If that's true, then none of the
               | other steps are necessary since you're already
               | compromised.
        
               | dont__panic wrote:
               | If your iCloud Photo library contains enough photos to
               | trigger a manual review + FBI report, how does the
               | scenario make no sense?
               | 
               | And as far as your point about "someone you trust trying
               | to 'get you'"... have you ever dated? Ever exchanged
               | naughty photos with somebody (I expect this is even more
               | popular these days among 20-somethings since covid
               | prevented a lot of in-person hookups)? This doesn't seem
               | crazy for a variant of catfishing. I could easily see
               | 4chan posters doing this for fun.
        
             | bawolff wrote:
             | My point is - if you hold that view, then collisions
             | shouldn't matter at all, since if they look the same the
             | correct action is for the person to get thrown in jail.
        
         | cwkoss wrote:
         | > Do people honestly believe a single CSAM flag from an
         | "innocent" image is going to result in someone going to prison
         | in America?
         | 
         | I believe a single image flag could be used as pretext to
         | arrest someone whom the government has already deemed an enemy
         | of the state, e.g. someone like Julian Assange.
        
         | zbobet2012 wrote:
         | That's not the route by which this will be exploited and used
         | at scale.
         | 
         | A corrupt government has to already know it wants you in order
         | "to just get you". Instead, they will embed a collision in an
         | anti-government meme. That collision will flag you, and now
         | they know you harbor doubts and will come get you.
         | 
         | This is why it's a privacy concern. It's not the tech (like
         | you said, PhotoDNA's been around forever), it's the scanning
         | of the phone.
        
           | flycaliguy wrote:
           | A corrupt government will also enjoy the "chilling effect"
           | created by people's fear of tainting their phone with illegal
           | images.
        
             | kempbellt wrote:
             | I feel like a corrupt government would _want_ people to
             | trust their phones.
        
               | benjaminpv wrote:
               | Alternatively, a corrupt government might want folks to
               | _dis_trust their mass-market phone such that they can
               | have an individual come along and offer them a
               | 'completely secure and private' alternative[1].
               | 
               | [1]: https://www.pcmag.com/news/fbi-sold-criminals-fake-
               | encrypted...
        
               | tgsovlerkhgsel wrote:
               | IIRC many totalitarian governments, historical and
               | current, made/make their surveillance blatantly obvious,
               | because the chilling effect deterring people is much more
               | valuable than the added intelligence value from keeping
               | people careless.
               | 
               | After all, they want to _suppress_ dissent, and they
               | can't catch and arrest everyone - it's much more
               | effective if people don't even dare to voice their
               | dissent.
               | 
               | This is also why you see people in situations like the
               | Arab Spring use known-insecure/monitored means of
               | communication, because they realize that the value and
               | power that comes from communicating and finding other
               | likeminded people is worth painting a target on yourself
               | (because you can't succeed without it, and if you
               | succeed, there will be too many to prosecute).
        
           | mrtranscendence wrote:
           | > That collision will flag you, and now they know you harbor
           | doubts and will come get you.
           | 
           | Only if that government has worked out some deal with Apple
           | whereby such an anti-government meme would end in the
           | government being notified accordingly. Don't forget that you
           | need a sufficiently high number of collisions, for one, and
           | that those collisions are audited by Apple before being sent
           | to law enforcement.
        
           | endisneigh wrote:
           | What you're saying is already possible. Where are the
           | examples of this happening? PhotoDNA has existed since 2008.
        
             | biesnecker wrote:
             | Who is scanning your phone right now with PhotoDNA?
        
               | endisneigh wrote:
               | Microsoft, Facebook and Google. More depending on what
               | services you use
        
               | biesnecker wrote:
               | They're not actively scanning your phone, they're
               | actively scanning files you send them.
        
               | KallDrexx wrote:
               | That's not actually answering the question in the GP
               | about why this is different.
               | 
                | Photos people send to my Android are automatically
                | passed through 3rd parties, whether via MMS, Facebook
                | Messenger, Google Photos, or OneDrive. Photos arriving
                | on my device are almost guaranteed to be uploaded to
                | both OneDrive and Google Photos based on how Android
                | phones are set up by default.
               | 
               | So someone could already send hash collisions my way
               | (purposely or inadvertently) and authoritarian
               | governments already have access in their respective
               | clouds (at least China does).
               | 
                | And yet, there are no stories of people being falsely
                | accused of child porn due to PhotoDNA hash collisions.
                | 
                | Why does "on device for Apple devices only" change the
                | calculus?
        
               | endisneigh wrote:
               | That's the same as what Apple is going to do - scan right
               | before sending to iCloud
        
               | danuker wrote:
               | The scanning on the device before the upload makes it
               | easier to do surveillance.
        
               | WesolyKubeczek wrote:
                | The same as what Apple _is saying it's going to do_;
                | there's a difference.
        
               | endisneigh wrote:
               | If you don't trust Apple then they could've already done
               | what you're concerned about before they announced this. I
               | don't really get it. Either you trusted Apple before this
               | and you continue to, or you didn't before, and continue
                | not to. If it's the latter, then you shouldn't be using
               | Apple services.
        
               | tgsovlerkhgsel wrote:
               | > If you don't trust Apple then they could've already
               | done what you're concerned about before they announced
               | this.
               | 
               | Not without the risk of it being discovered (either
               | through a leak or because someone analyzes the software
               | on the phone), and then having a much bigger scandal on
               | hand.
        
               | knodi123 wrote:
               | Man, wouldn't you love to be the developer who gets
               | assigned the feature to commit these horrible secret
               | privacy violations with deeply evil ethical problems?
               | 
               | You don't even have to implement the feature. All you
               | need is really good proof that you were asked to, and now
               | your job at Apple is secure, along with a huge raise, for
               | years if not for life. Something commensurate with what
               | you and they both know they would lose in the PR damage
               | and possible EU fines.
        
               | vineyardmike wrote:
               | > possible EU fines
               | 
                | Until the EU makes it a legal requirement, which
                | they're getting close to doing.
        
               | vineyardmike wrote:
               | > wouldn't you love to be the developer who gets assigned
               | the feature
               | 
                | Also, they'd probably not use a Cupertino developer.
                | I'm sure a dev in a nation with far fewer rights is
                | easier for this sort of work. Find a nation where the
                | protections for employees are worse and good jobs are
                | harder to find.
        
               | ipaddr wrote:
                | None of those are scanning your phone. None of them
                | aside from Google even sells a phone.
               | 
               | Everyone suspected they were scanning images for this and
               | a number of other things on their services.
        
         | thedingwing wrote:
         | You don't need to be sent to prison to be irreparably harmed by
         | an accusation.
        
           | endisneigh wrote:
           | Ok sure, PhotoDNA has existed since 2008. Where are the
           | instances of people being sent to prison?
        
             | dannyobrien wrote:
             | One of the things that is happening now is that the entire
             | PhotoDNA system is finally coming under the level of
             | oversight that it should have had right from the start.
             | 
             | I can tell you from working in this area that it's possible
             | for someone to have their lives ruined by a misplaced
             | investigation, have that investigation abandoned because
             | they turn out to be obviously innocent, _and_ for that to
             | not be well-known, because people simply would not
             | understand the context.
             | 
              | Before this Apple scandal, if you'd written to your
              | representative or a journalist or an activist group and
              | said "I was framed for child abuse because of a computer
              | program that misidentified innocent pictures", they would
             | attach a very low priority to dealing with you or
             | publicising this. And almost all people who have
             | experienced this kind of nightmare really don't want to re-
             | live it in public for some tiny possibility of real justice
             | being served for them, or for others. They just want it to
             | all go away.
        
               | nonbirithm wrote:
               | We certainly have Apple's PR blunder to thank for that,
               | but if PhotoDNA always held that potential for abuse due
               | to its very nature, why did we remain silent for 13
               | years?
               | 
               | Maybe it's because Google and Microsoft and others'
               | policy of security through obscurity actually succeeded
               | in preventing the details of PhotoDNA from coming to
               | light, and it took Apple exposing their hashing model to
               | reverse engineering by including it on the device for
               | people to finally wake up.
        
               | dont__panic wrote:
               | Considering I didn't know about:
               | 
               | - PhotoDNA
               | 
               | - CSAM scanning on cloud photo platforms
               | 
               | - the acronym "CSAM"
               | 
               | Before this whole Apple client-side scanning debacle...
               | seems pretty likely. A lot of privacy-focused people also
               | avoid Google and Microsoft cloud services like the plague
               | and trusted Apple up to this point to protect their
               | privacy. The fact that Apple was (and is) scanning iCloud
               | Photos libraries for CSAM unbeknownst to most of us is
               | just another violation of that trust and shows just how
               | far the "what happens on your iphone, stays on your
               | iphone" privacy marketing extends (read: not past your
               | iphone, and sometimes not even on your iphone).
        
               | nonbirithm wrote:
                | I think the actual issue is that Apple _wasn't_ scanning
               | enough user data, so the government or the FBI or some
               | other external force was holding them accountable for it
               | out of public view, and Apple was pressured into
               | increasing the amount of scanning they conducted.
               | 
               |  _" U.S. law requires tech companies to flag cases of
               | child sexual abuse to the authorities. Apple has
               | historically flagged fewer cases than other companies.
               | Last year, for instance, Apple reported 265 cases to the
               | National Center for Missing & Exploited Children, while
               | Facebook reported 20.3 million, according to the center's
               | statistics. That enormous gap is due in part to Apple's
               | decision not to scan for such material, citing the
               | privacy of its users."_ [1]
               | 
               | [1] https://www.nytimes.com/2021/08/05/technology/apple-
               | iphones-...
        
             | Dah00n wrote:
              | You are commenting a _lot_, in many places in the
              | thread. Are you arguing for this system or for Apple? It
              | reads like pro-Apple advocacy and doesn't add anything
              | except "I think it is good, therefore it is good".
        
               | qweqwweqwe-90i wrote:
               | FYI: when you wrote this comment, you had posted 5% of
               | all the comments on this thread.
        
               | kemayo wrote:
               | If you have a point which you feel rebuts a common
               | argument, it seems reasonable to leave that comment in
               | places you see that argument. The alternative is
               | "minority positions should be drowned out", no?
        
               | rootusrootus wrote:
               | It can get a little frustrating to hear so much
               | inaccurate FUD being spread around which detracts from a
               | reasoned discussion on the merits.
        
         | mikepurvis wrote:
         | At least part of the concern is that a hash collision is
         | basically "cause" for Apple to then dump and begin manually
         | (like, with humans) reviewing the contents of your device, all
         | of which will be happening behind the closed doors of a private
         | corporation, outside of any of the usual oversight or innocent-
         | presumption mechanisms that come from it happening through the
         | courts.
         | 
         | That, combined with a (pretty reasonable) expectation that the
         | pool of sus hashes will greatly expand as law enforcement and
         | others begin to understand what a convenient side-step this is
         | for due process.
        
           | shuckles wrote:
           | That is literally the status quo with every cloud service.
           | Apple, unlike the others, has said that they will evaluate
           | you on the basis of what's included in the associated data of
           | your safety voucher, and you can inspect those contents
           | because they're shipped in the client. Facebook, for all I
           | know, might be calculating a child predator likelihood score
           | on my account based on how often I look up my middle school
           | ex-girlfriend on Instagram.
           | 
           | In addition, "pretty reasonable" is an opinion not fact.
           | Where is the evidence that PhotoDNA hashes have been
           | compromised in this way in the fifteen years they've been
           | used?
        
             | stjohnswarts wrote:
              | I don't understand techie people on HN being okay with
              | Apple breaching the spirit of the 4th Amendment and
              | becoming the FBI agent in your phone. Scanning the stuff
              | in the cloud is one thing, but this is crossing a line. I
              | am shedding all my Apple hardware over it. If you want to
              | trust them, fine, but one day it will bite you on the ass.
        
               | shuckles wrote:
               | For ideological consistency, are you dumping every
               | service provider that scans the contents of your account
               | and reports offending data to law enforcement?
        
             | foerbert wrote:
             | I don't think we can just appeal to the status quo here and
              | assume it's acceptable. There are a couple of reasons.
             | 
             | First, how many people really understood this previously?
             | Did society at large actually knowingly accept the current
             | state of things, or did it just happen without most people
              | realizing it? Even here on HN, where we'd expect people
              | to be far more knowledgeable about this than average, I'm
              | not sure how well known it was what was actually
              | happening, though I'd assume most would be aware it was
              | possible.
             | 
             | Secondly, there's a significant difference between your own
             | device or Apple's server doing this. On the technical side
             | of things, right now, it might not matter that much since
             | it currently is limited to things you upload to iCloud. But
             | more philosophically, it's your own device being turned
             | against you to check you for criminal behavior. That's very
             | different from somebody else checking up on you after you
             | willingly interact with them.
        
               | nonbirithm wrote:
               | If the problem is a lack of understanding of the status
               | quo, then it isn't fair to criticize Apple alone. People
               | ought to demand answers about the state of server-side
               | scanning from Facebook and Microsoft and everyone else
               | that employs PhotoDNA as well. The most popular article
               | submitted to HN with "PhotoDNA" in the title garnered
               | hardly any interest at all, even though someone there
               | implied that a hash collision might be possible five
               | years in advance.
               | 
               | https://news.ycombinator.com/item?id=11636502
        
               | endisneigh wrote:
               | > But more philosophically, it's your own device being
               | turned against you to check you for criminal behavior.
               | That's very different from somebody else checking up on
               | you after you willingly interact with them.
               | 
                | This literally only works once you willingly send
                | photos to iCloud.
        
               | ClumsyPilot wrote:
                | You can't buy a car in the EU that doesn't have a SIM
                | card. All tractors have a computer that locks out the
                | machine if it doesn't like something and phones home.
                | Almost every TV on sale is 'smart' and spies on what
                | you are saying. Coffee machines, lights and toasters
                | are now internet connected, and all of them send data
                | to a server that will be scanning for the 'wrong'
                | material. In 10 years there will be nowhere to hide.
        
               | simondotau wrote:
                | > _You can't buy a car in the EU that doesn't have a
                | SIM card._
               | 
               | Wait, what?
        
               | stjohnswarts wrote:
               | For now. There is no technical hurdle preventing them
               | from scanning everything locally and reporting back.
        
               | shuckles wrote:
               | There wasn't such a hurdle before or in the
               | counterfactual where they built infrastructure to scan
               | iCloud while also keeping iCloud Backups for every
               | device.
        
               | foerbert wrote:
               | Right. I mentioned that. It's still your own device doing
               | it.
               | 
               | It's like announcing to your family member you're going
               | to tell your neighbor you committed a crime and your
               | family member turns you in first. Yeah, you could expect
               | your neighbor to do the same, but are you really not
               | going to feel any differently about the fact it was your
               | family that turned you in?
        
           | Tagbert wrote:
            | What they would be reviewing would be scaled versions of
            | the specific photos that triggered the hash alert. It's not
           | broad fishing expedition. There is no mechanism to start
           | browsing the photos on your phone.
        
           | [deleted]
        
         | trhway wrote:
         | > Do people honestly believe a single CSAM flag from an
         | "innocent" image is going to result in someone going to prison
         | in America?
         | 
          | being on a government-maintained secret list of the CSAM-
          | flagged would probably have a lot of consequences in your
          | life going forward without you knowing about it. It may be
          | just one innocent image which accidentally matched the hash,
          | yet without any procedure to get you off that list or even
          | just to learn whether you're on that list ...
         | 
         | >If some corrupt government wants to get you they don't need
         | this.
         | 
          | it isn't a corrupt government which is the threat here. It
          | is a well-intended, well-functioning, relatively uncorrupt
          | government which spends a lot of effort to protect its
          | society from terrorism, child porn, sex trafficking, drugs,
          | etc. In the case of a corrupt government the situation is
          | kind of easier - you can always learn and correct the
          | necessary things through the corrupt government officials by
          | paying a corresponding "fee".
        
       | oconnore wrote:
       | If it's so easy to modify NeuralHashes, won't "CSAM networks"
       | just rotate the hashes of their "collections"?
       | 
       | If you can make an innocent picture collide with a CSAM picture,
       | presumably you can also edit a CSAM picture to have a random hash
       | not in the database?
        
         | mrtranscendence wrote:
         | Through chatting and reading threads I get the impression that
         | many folks think that pedophiles are all smart, extremely tech
         | savvy, operating in networks that share information and
         | collaborate to fool law enforcement. There may be some of that,
         | but there are also plenty of Joe Schmoes who download or trade
          | CSAM without being especially clever about it. I
         | remember when I was a teenager downloading porn I found more
         | than a few probably-illegal photos just by accident*. If you
         | want to find such images you can, it doesn't take an organized
         | "network".
         | 
         | * edit - this was 25 years ago, admittedly, and much of what I
         | downloaded came from randos on AOL. Things might be different
         | now.
        
       | tartoran wrote:
       | I am not exactly sure if this is how it works, but it appears
       | to me that all the hashes of your photos get uploaded to a
       | server. I was wondering if it is possible to reverse the hashes
       | to deduce what's in each hashed photo.
        
         | ilogik wrote:
         | no and no
        
       | tgsovlerkhgsel wrote:
       | > Apple now has over 1.5 billion users so we are talking about a
       | large pool of users at stake which increases the likelihood of
       | even a low probability event manifesting
       | 
       | This is an extremely good point. If the whole system, end to end,
       | after all safeguards (e.g. human reviewers which can also make
       | mistakes) has a one-in-a-billion chance to ruin a user's life,
       | then statistically, we can expect 1-2 users to have their lives
       | ruined.
       | 
       | What's even worse, when those individuals are facing whatever
       | they're facing, they'll have to argue against the one-in-a-
       | billion odds. If there are jurisdictions where defense lawyers
       | don't get access to the images their client is accused of, and
       | prosecutors and judges don't look at the images but only at the
       | surrounding evidence (which says this person is guilty, with
       | 99.9999999% certainty), Apple may have built a system that's
       | statistically expected to send an innocent person to prison.
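       | 
       | Back-of-the-envelope (the per-user probability and user count
       | here are illustrative assumptions, not figures Apple has
       | published):
       | 
       |     # expected number of users harmed = N * p
       |     N = 1_500_000_000  # assumed ~1.5 billion users
       |     p = 1e-9           # assumed end-to-end failure odds
       |     print(N * p)       # -> 1.5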
        
         | Terretta wrote:
         | They didn't say one in a billion, they said 2 in 2 trillion.
         | 
         | A billion users with a thousand photos each would only be one
         | trillion photos.
         | 
         | So a chance someone inadvertently has to look at a photo and
         | ... not prison, but, "nah, not this photo".
         | 
          | The probability seems rather higher that this global,
          | suspicionless, warrantless search will generate positive PR
          | results and systemic downstream negative privacy impacts
          | than that anyone suffers that easily-avoided adverse result.
         | 
         | Odds are decent this all turns out seeming to normals like a
         | "no downsides" implementation. If anyone wants to screech,
         | better screech before it all proves perfectly splendid.
        
         | nomel wrote:
         | I think you misunderstand the reporting process. If the
         | threshold is passed, Apple reports the images to NCMEC for
         | review. NCMEC reports it to authorities. So, this would require
         | the failure of three organizations.
        
           | spideymans wrote:
           | It would also require the failure of both the prosecutor and
           | the judiciary to recognize the images as non-CSAM.
        
         | mrtranscendence wrote:
         | > This is an extremely good point. If the whole system, end to
         | end, after all safeguards (e.g. human reviewers which can also
         | make mistakes) has a one-in-a-billion chance to ruin a user's
         | life, then statistically, we can expect 1-2 users to have their
         | lives ruined.
         | 
         | No one will have their lives ruined. If there's a false
         | collision, someone at Apple has to look at the images as a
         | final safeguard. If it's not actually CSAM then it won't ever
         | make it to law enforcement.
        
           | tgsovlerkhgsel wrote:
           | What probability do you ascribe to that reviewer clicking the
           | wrong button, be it out of habit/zoning out (because the
           | system usually shows them true positives), cheating (always
           | clicking "yes" because it's usually correct and allows them
           | to get paid without having to look at horrible images all
           | day), mistake, wrong instructions (e.g. thinking that all
           | images of children, or all porn including adult porn, should
           | be flagged), confusion, or malice?
           | 
           | 1 in 10? 1 in 100? 1 in 1000? Pick a number, you now have an
           | estimate of how much room for error there is in the whole
           | system.
           | 
            | If you consider 0.15 lives ruined on average per year
            | acceptable, and the reviewers have a 1-in-1000 error rate,
            | then the rest of the system has to produce fewer than 150
            | false flags per year - about 1-in-10-million mistakes per
            | user per year across ~1.5 billion users. And I'm pretty
            | sure 1-in-1000 for the reviewers is very, very optimistic,
            | even if you do quality checks, control for fatigue, etc.
           | 
           | On the other hand, hopefully there are some safeguards after
           | the reviewers (e.g. human prosecutors who don't blindly
           | rubber stamp). But my point is: The room for error in
           | anything done at scale that has severe consequences is
           | _extremely_ small.
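            | 
            | Concretely (every input below is an assumption carried
            | over from the argument above, not a measured rate):
            | 
            |     users      = 1_500_000_000  # assumed user base
            |     acceptable = 0.15           # lives ruined per year
            |     rev_err    = 1e-3           # 1-in-1000 reviewer errors
            | 
            |     # false flags per year the reviewers may receive:
            |     max_flags = acceptable / rev_err  # -> 150.0
            |     # upstream error budget, per user per year:
            |     print(max_flags / users)          # -> 1e-7
            | 
            | That 1e-7 is the 1-in-10-million figure: the room for
            | error upstream of the reviewers is tiny.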
        
             | robertoandred wrote:
             | Even if Apple's manual review fails, there's still the
             | NCMEC's review. There are several layers before anything
             | goes to law enforcement.
        
               | dhosek wrote:
               | And presumably, at some point even if it makes it to a
               | trial, the accused would be able to point to the flagged
               | images to show that they aren't actually child porn.
               | 
                | And a country in which this isn't possible is likely
                | one that is going to ruin people's lives just fine
                | without Apple.
        
       | tbenst wrote:
       | I'm much more worried about adversarial attacks rather than
       | accidental false positives.
       | 
       | For example, a year ago I was trying to track down how some
       | unscrupulous photos ended up in my photo library. I finally
       | realized that Telegram had, without my knowledge or permission,
       | been adding all photos I had received in a crypto discussion
       | group to my Apple photo library.
       | 
       | There seem many ripe opportunities for targeting.
        
       | nanis wrote:
       | It sounds like there is code that enables anyone to compute the
       | perceptual hash for an image. Is this code published somewhere?
        
         | yeldarb wrote:
         | The instructions are here:
         | https://github.com/AsuharietYgvar/AppleNeuralHash2ONNX
         | 
         | This was discussed on HN yesterday here (and a few other front-
         | page stories): https://news.ycombinator.com/item?id=28218391
        
           | nanis wrote:
           | Got it ... Here's the part I was missing:
           | 
           | > You will need 4 files from a recent macOS or iOS build:
           | neuralhash_128x96_seed1.dat
           | NeuralHashv3b-current.espresso.net
           | NeuralHashv3b-current.espresso.shape
           | NeuralHashv3b-current.espresso.weights
           | 
           | So, for clarity, Apple did not publish the model/code for
           | others to be able to use/test etc. Someone found a clever way
           | to convert the stored model into an open format and therefore
           | others with access to those files are able to experiment.
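            | 
            | Once you have those files, inference looks roughly like
            | this (a sketch following the repo's example script; the
            | file names are placeholders, and the 360x360 input size,
            | the [-1, 1] normalization, and the seed-file layout are
            | details taken from that script that may not match every
            | OS build):
            | 
            |     import numpy as np
            |     import onnxruntime
            |     from PIL import Image
            | 
            |     session = onnxruntime.InferenceSession('model.onnx')
            | 
            |     # 96x128 projection matrix from the seed file
            |     # (first 128 bytes are a header, per the repo).
            |     seed = np.frombuffer(
            |         open('neuralhash_128x96_seed1.dat', 'rb')
            |         .read()[128:], dtype=np.float32).reshape(96, 128)
            | 
            |     img = Image.open('photo.jpg').convert('RGB')
            |     arr = np.asarray(img.resize((360, 360)), np.float32)
            |     arr = (arr / 255.0) * 2.0 - 1.0     # scale to [-1, 1]
            |     arr = arr.transpose(2, 0, 1)[None]  # HWC -> NCHW
            | 
            |     name = session.get_inputs()[0].name
            |     out = session.run(None, {name: arr})[0].flatten()
            | 
            |     # project the 128-dim embedding, keep the sign bits
            |     bits = ''.join('1' if v >= 0 else '0'
            |                    for v in seed.dot(out))
            |     print('%0*x' % (len(bits) // 4, int(bits, 2)))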
        
       | aaaaaaaaaaab wrote:
       | >By taking advantage of the birthday paradox, and a collision
       | search algorithm that let me search in n(log n) time instead of
       | the naive n^2
       | 
       | n(log n)? I thought the birthday paradox lets you expect
       | collisions after sqrt(n) tries...
        
         | yeldarb wrote:
          | Different things; the birthday paradox governs how soon
          | collisions appear when testing them against each other;
          | turning it into a sorting problem is what gave the n(log n)
          | search.
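          | 
          | A sketch of the sort-based trick (hash computation is
          | elided; assume each entry is a (hash, image_id) pair):
          | 
          |     # All-pairs exact-collision search in O(n log n):
          |     # after sorting, equal hashes are adjacent, so a
          |     # single linear scan finds every colliding pair.
          |     def find_collisions(entries):
          |         ordered = sorted(entries)
          |         return [(a, b)
          |                 for (h1, a), (h2, b)
          |                 in zip(ordered, ordered[1:])
          |                 if h1 == h2]
          | 
          | And the birthday part: n images give about n^2/2 pairs. For
          | an ideal random 96-bit hash that's only ~1e-17 expected
          | collisions at n ~ 1.4M, so finding real pairs at all
          | reflects the perceptual, non-random structure of the hash.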
        
       | Grakel wrote:
       | Maybe this is a shot in the arm for anonymous devices. Somebody
       | buy a bunch of GSM devices and resell for cash.
        
       | jeffbee wrote:
       | Following the topic because it's interesting, but I'm bothered by
       | the chain of reasoning necessary to reach these conclusions. How
       | sure are we that the means of dumping the model from iOS and
       | converting it to ONNX results in a faithful representation of
       | what NeuralHash actually does?
        
         | skygazer wrote:
         | FWIW, Apple says that's not even the NN hash function they'll
         | be using. This one has been buried in iOS since 2019, so it may
          | have been a prototype. Dumping the NN image seems
          | straightforward, though, so this seems to be a faithful copy
          | of that older hash function. (There's been a suggestion that
          | it's sensitive
         | to floating point processor implementations, so small hash
         | differences may occur between different CPU architectures.)
        
         | tyingq wrote:
         | I agree in theory, but the burden of proof shouldn't be on
         | outsiders that have no other choice than to extrapolate.
         | Proposing something like this running on an end-user's
         | personally owned device should have a really high bar.
        
       | stickfigure wrote:
       | > it's not obvious how we can trust that a rogue actor (like a
       | foreign government) couldn't add non-CSAM hashes to the list to
       | root out human rights advocates or political rivals. Apple has
       | tried to mitigate this by requiring two countries to agree to add
       | a file to the list, but the process for this seems opaque and
       | ripe for abuse.
       | 
       | If the CCP says "put these hashes in your database or we will
       | halt all iPhone sales in China", what do you think Apple is going
       | to do? Is anyone so naive that they believe the CCP wouldn't
       | deliver such an ultimatum? Apple's position seems to completely
       | ignore recent Chinese history.
        
         | vineyardmike wrote:
          | The only way this matters [today] though is if Apple turns
          | it on for all pictures, not just iCloud ones. Presumably
          | "Chinese iCloud" already scans uploaded photos cloud-side.
          | 
          | Unless the goal is to simplify the effort/expense of
          | scanning by making it a client process.
        
           | CheezeIt wrote:
           | > The only way this matters though is if apple turns it on
           | for all pictures,
           | 
           | And there's no way the CCP would order Apple to do that.
        
         | breuleux wrote:
         | Presumably Apple would be afraid that, say, the EU becomes
         | suspicious, issues a court order to obtain the hashes, notices
         | they cannot audit the CCP hashes, pointedly asks "what is
         | this", becomes absolutely livid that their citizens are spied
         | on by a country that is _not_ them, fines Apple out the wazoo,
         | then extradites whoever is responsible and puts them in prison.
          | I mean, China's not the only player in this. Putting extra
         | hashes to surveil Chinese citizens, yeah, they might do that,
         | but it'd be suicide to put them anywhere else.
        
           | Bjartr wrote:
           | > citizens are spied on by a country that is not them
           | 
            | I thought countries often have under-the-table agreements
            | with one another to explicitly spy on each other's
            | citizens, since it's illegal for a country to spy on its
            | own citizens. It's illegal for the other country too, but
            | it's a lot easier to turn a blind eye to it.
        
             | ChemSpider wrote:
              | The EU is not spying on any of its citizens. If you
              | think otherwise, please link to some sources. Otherwise
              | these are baseless rumors.
              | 
              | These "everyone is doing it" statements are nonsense.
              | Not everyone is doing it.
        
               | TheSpiceIsLife wrote:
               | You're probably right.
               | 
               | The USA is probably doing it for them.
               | 
               | NSA tapped German Chancellery for decades, WikiLeaks
               | claims
               | 
               | NSA spying row: Denmark accused of helping US spy on
               | European officials
               | 
               | US caught spying on EU 'allies' AGAIN...not like the
               | Europeans will do anything about it
        
               | int_19h wrote:
               | Just one example:
               | 
               | https://en.wikipedia.org/wiki/Federal_Office_for_the_Prot
               | ect...
               | 
               | https://www.spiegel.de/international/germany/infiltrating
               | -th...
        
           | dont__panic wrote:
           | > extradites whoever is responsible and puts them in prison
           | 
           | If we lived in a world where the people who make these kind
           | of decisions for companies were actually accountable in this
           | way, life might be better in a lot of different ways. But
           | sadly we do not.
        
           | riffraff wrote:
            | Seems pretty trivial to have a different server side per
            | country, and put a different db there.
            | 
            | EU, China, Iran, US: everyone gets to spy on their own
            | children and forbid whatever they want.
        
             | breuleux wrote:
             | The db is encrypted and uploaded to user devices. If each
             | country gets a different db, the payload will be different
             | in each country, which does not make sense if it's all
             | supposed to be CSAM. So Apple would likely just say "these
             | were mandated by the US government for US citizens,"
              | putting the ball in their court, unless they are forbidden
             | to say so, in which case they'll say nothing, but we all
             | know what it means. That's when you know you should change
             | phones and stop using all cloud services, because obviously
             | all cloud services scan for the same thing.
             | 
             | On the flip side, though, at least Apple will have given us
             | a canary. And that's why I don't think Apple will be asked
             | to add these hashes: if the governments don't want their
             | citizens to know what's being scanned server side, pushing
             | the equivalent data to clients would tip their hand. They
             | might just write Apple off as a loss and rely on Google,
             | Facebook, etc.
        
               | dhosek wrote:
               | >That's when you know you should change phones
               | 
               | You'd have to switch to a dumb phone. Assuming you can
               | find one that works on contemporary networks.
        
               | sillysaurusx wrote:
               | This isn't true. The db is blinded. We have no way of
               | knowing what's in it.
               | 
                | It would be trivial to have the same payload on each
                | device, and extract different answers using the
                | matching server-side db, which varies by country.
                | 
                | Perhaps not trivial, but just short of it.
        
               | azinman2 wrote:
               | What do you mean blinded? It's already been promised
               | there will be a way to verify the hash db on your own
               | device.
        
               | sillysaurusx wrote:
               | It's blinded in the cryptographic sense. It's a specific
               | term. I would go into detail, but <reasons>.
               | 
               | Suffice to say, unless you provide proof, I am reasonably
               | confident there's no way to verify the hash db doesn't
               | contain extra hashes other than the CSAM hashes provided
               | by the US government. But I've been wrong many times
               | before.
        
               | azinman2 wrote:
                | Well first of all, it's not provided by the US
                | government. It's a non-profit, and Apple has already
                | said they're going to look for another db from another
                | nation and only include hashes that are in the
                | intersection of the two, to prevent exactly this kind
                | of attack.
               | 
               | If what you mean by blinded is that you don't know what
               | the source image is for the hash, that's true. Otherwise
               | Apple would just be putting a database of child porn on
               | everyone's phones. You gotta find some kind of balance
               | here.
               | 
               | What do you mean you can't verify it doesn't contain
               | extra hashes? Meaning that Apple will say here are the
               | hashes in your phone, but secretly will have extra hashes
               | they're not telling you about? Not only is this the kind
               | of thing that security researchers will quickly find,
                | you're assuming a very sinister posture from Apple
                | where they'll only tell you half the story. If that
               | were the case, then why offer the hashes at all? It's an
               | extremely cynical take.
               | 
                | The reality is all of the complaints about this system
                | started with this specific implementation, and then,
                | as details got revealed, it's now all about future
                | hypothetical situations. I'm personally concerned about
               | future regulations, but those regulations could/would
               | exist independently of this specific system. Further,
               | Dropbox, Facebook, Microsoft, Google, etc all have user
               | data unencrypted on their servers and are also just as
               | vulnerable to said legislation. If the argument is this
               | is searching your device, well the current implementation
               | is its only searching what would be uploaded to a server
               | instead. If you suggest that could change to anything on
               | your device due to legislation, wouldn't that happen
               | anyway? And then what is Google going to do... not follow
               | the same laws? Both companies would have to implement new
               | architectures and systems for complying.
               | 
               | I'm generally concerned about the future of privacy, but
               | I think people (including myself initially) have gone too
               | far in losing their minds.
        
               | int_19h wrote:
               | It's a non-profit that is effectively maintained by the
               | US government (funding, special legal status, interaction
               | with government agencies etc).
        
               | conradev wrote:
               | I would just read the document explaining how this works
               | (see "Matching-Database Setup"):
               | https://www.apple.com/child-
               | safety/pdf/CSAM_Detection_Techni...
               | 
               | At no point is anyone besides Apple able to view any
               | NeuralHash hashes from the CSAM database. You can verify
               | the database is the same on all iPhones, but you are not
               | able to look at any of the hashes.
        
               | azinman2 wrote:
               | Right, but perhaps I'm not understanding what the
               | complaint is here.
               | 
               | Is the issue that you want to take a given photo that you
               | believe the CCP or whomever is scanning for, compute a
               | NeuralHash, and then see if that's in the db? Or are you
                | wanting to see if your db is different from other
                | phones' dbs? Because I think the latter is the one
                | that most people are concerned about.
        
               | azinman2 wrote:
                | Having just read the CSAM summary pointed to by a
                | child comment here, I now have a better understanding
                | of what you meant by blinded. But I don't think that
                | changes any of my points.
        
               | IncRnd wrote:
               | There are many functions to which cryptographic blinding
               | is applied, but they each rely upon multiple parties to
               | compute the function in question. In that way, the input
               | and output are blinded to a single party.
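                | 
                | For instance, a toy DH-style blinding (a generic
                | construction, NOT Apple's actual PSI protocol, and
                | with insecure toy parameters):
                | 
                |     import hashlib, math, secrets
                | 
                |     p = 2**127 - 1  # prime; toy group Z_p*
                | 
                |     def to_group(x: bytes) -> int:
                |         # hash an item into the group
                |         h = hashlib.sha256(x).digest()
                |         return int.from_bytes(h, 'big') % p or 1
                | 
                |     k = secrets.randbelow(p - 2) + 1  # server secret
                | 
                |     # Published blinded set: holding it alone
                |     # reveals nothing about its members.
                |     db = [b'hash-1', b'hash-2']
                |     blinded = {pow(to_group(s), k, p) for s in db}
                | 
                |     # Client blinds its query with its own secret r
                |     # (r must be invertible mod group order p - 1).
                |     while True:
                |         r = secrets.randbelow(p - 2) + 1
                |         if math.gcd(r, p - 1) == 1:
                |             break
                |     query = pow(to_group(b'hash-2'), r, p)
                | 
                |     # Server applies k to the blinded query...
                |     reply = pow(query, k, p)
                | 
                |     # ...client strips r, recovering H(item)^k.
                |     r_inv = pow(r, -1, p - 1)
                |     print(pow(reply, r_inv, p) in blinded)  # True
                | 
                | Neither the blinded set nor the blinded query is
                | readable on its own; in Apple's construction the roles
                | are arranged so the client never learns the match
                | result at all.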
        
               | breuleux wrote:
               | I feel that it's the kind of scheme that requires too
               | much cooperation from too many people and organizations
               | with conflicting incentives. It's possible some countries
               | would not want the hashes from certain other governments
               | in the db _at all_. And then what? I may be wrong, but I
               | also believe we can know how many hashes are in the db,
               | which means that if it contains extra hashes from dozens
               | of governments, it would become suspiciously large
               | relative to how many CSAM images we know exist.
               | Furthermore, in this scenario the db cannot be auditable,
               | so the scheme falls apart as soon as some rogue judge
               | decides to order an audit.
               | 
               | I honestly don't think Apple wants to deal with any of
               | that crap and that they would rather silently can the
               | system and do exactly what everybody else does than place
               | themselves in the line of fire when their own unique
               | trademark system is being abused.
        
               | sillysaurusx wrote:
               | Would they rather deal with the CCP shutting off iPhone
               | sales in China? History has shown that the CCP is willing
               | to do that if it comes down to it. (I remind you that at
               | one time, Google was a primary search engine in China.)
        
             | FabHK wrote:
             | It would be good, I think, if people read Apple's threat
             | assessment before calling it "pretty trivial":
             | 
             | > * Database update transparency: it must not be possible
             | to surreptitiously change the encrypted CSAM database
             | that's used by the process.
             | 
             | > * Database and software universality: it must not be
             | possible to target specific accounts with a different
             | encrypted CSAM database, or with different software
             | performing the blinded matching.
             | 
             | I mean, you can argue that Apple's safeguards are
             | insufficient etc., but at least acknowledge that Apple has
             | thought about this, outlined some solutions, and considers
             | it a manageable threat.
             | 
             | ETA:
             | 
             | > Since no remote updates of the database are possible, and
             | since Apple distributes the same signed operating system
             | image to all users worldwide, it is not possible -
             | inadvertently or through coercion - for Apple to provide
             | targeted users with a different CSAM database. This meets
             | our database update transparency and database universality
             | requirements.
             | 
             | > Apple will publish a Knowledge Base article containing a
             | root hash of the encrypted CSAM hash database included with
             | each version of every Apple operating system that supports
             | the feature. Additionally, users will be able to inspect
             | the root hash of the encrypted database present on their
             | device, and compare it to the expected root hash in the
             | Knowledge Base article. That the calculation of the root
             | hash shown to the user in Settings is accurate is subject
             | to code inspection by security researchers like all other
             | iOS device-side security claims.
             | 
             | > This approach enables third-party technical audits: an
             | auditor can confirm that for any given root hash of the
             | encrypted CSAM database in the Knowledge Base article or on
             | a device, the database was generated only from an
             | intersection of hashes from participating child safety
             | organizations, with no additions, removals, or changes.
             | Facilitating the audit does not require the child safety
             | organization to provide any sensitive information like raw
             | hashes or the source images used to generate the hashes -
             | they must provide only a non-sensitive attestation of the
             | full database that they sent to Apple.
             | 
             | [1] https://www.apple.com/child-
             | safety/pdf/Security_Threat_Model...
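              | 
              | For what a user-checkable root hash could look like,
              | here is a minimal Merkle-root sketch (SHA-256 and the
              | pairing scheme are assumptions for illustration; Apple
              | hasn't published the exact format):
              | 
              |     import hashlib
              | 
              |     def sha(b):
              |         return hashlib.sha256(b).digest()
              | 
              |     def merkle_root(leaves):
              |         level = [sha(x) for x in leaves]
              |         while len(level) > 1:
              |             if len(level) % 2:  # odd: repeat last
              |                 level.append(level[-1])
              |             level = [sha(a + b) for a, b in
              |                      zip(level[::2], level[1::2])]
              |         return level[0].hex()
              | 
              |     # Recompute the root from the shipped database,
              |     # then compare against the published KB value.
              |     print(merkle_root([b'blinded-hash-1',
              |                        b'blinded-hash-2']))
              | 
              | Changing any single entry changes the root, which is
              | what makes the published value auditable.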
        
             | knodi123 wrote:
             | Yeah, but if they tell us they're doing that, then it's
             | pretty obvious what they're up to. And if they don't tell
             | us they're doing that, but do it anyway - then they have to
             | perpetually pay every developer involved in that upgrade
             | enough money to keep their mouth shut indefinitely -
             | knowing that the developers know that APPLE knows how much
             | they'd lose in fines if they got caught. Which is an
             | unreasonably large liability, IMO.
        
               | vineyardmike wrote:
               | > if they tell us they're doing that, then it's pretty
               | obvious what they're up to.
               | 
                | Didn't they already say the DB would be nation-dependent?
        
           | c3534l wrote:
            | I thought it was already common knowledge that China puts
            | different hardware backdoors into computers destined for
            | different countries. I remember a while back a news story
            | where China accidentally shipped a box of phones
            | backdoored for China into the US.
        
           | dgellow wrote:
            | I think that you overestimate the EU reaction. Every few
            | years we learn that our European leaders and some citizens
            | have again been spied on by foreign powers, such as the
            | US, and absolutely nothing ever happens.
        
             | azinman2 wrote:
             | A friend in the military told me years ago France was the
             | number one hacker of the US gov. It goes both ways.
             | 
             | This may have shifted over time as China, Russia, NK, Iran
             | increase their attacks, but it doesn't diminish the fact
             | that the EU is also hacking the US without repercussions.
        
             | FabHK wrote:
             | > I think that you overestimate the EU reaction.
             | 
             | Great (depressing) Twitter account: Is EU Concerned?
             | @ISEUConcerned Very, deeply, strongly, seriously, gravely,
             | extremely, unprecedentedly
             | 
             | https://twitter.com/ISEUConcerned
        
             | breuleux wrote:
             | The US is an ally and it is somewhat harder to punish a
             | nation state than a company. Why would Apple take the risk?
             | China can't exactly reveal that they are banning an
             | American company for not spying on American citizens, and
             | it's not clear what convincing pretext they could provide
             | instead, so I don't think they would actually go through
             | with a ban and Apple would probably just call their bluff.
        
         | tyingq wrote:
         | >(like a foreign government)
         | 
          | It seems like it wouldn't take that. If you can generate a
          | colliding pair of images, you could probably create a pair
          | where one of the images might get attention in child porn
          | groups and thus be shared around enough to end up in the
          | CSAM database, and where the other was innocuous.
        
         | true_religion wrote:
          | Of course Apple would do it. They'd look like fools for
          | saying they want to stop CP, then refusing to listen to the
          | government of over a billion people when it says "Please ban
          | this newly produced material".
         | 
         | At best, they would look biased. At worst, they would be
         | sending a signal that they don't care about Chinese children.
         | 
         | You wouldn't even need government strong arming, mobs of
         | netizens would happily tear Apple down.
        
         | bsdetector wrote:
         | > If the CCP says "put these hashes in your database or we will
         | halt all iPhone sales in China", what do you think Apple is
         | going to do?
         | 
         | Or maybe China already said "put in this CSAM check or you
         | can't make or sell phones in China".
         | 
         | Since Apple's position is contrary to their previous privacy
         | policy and doesn't seem to make a lot of sense, it's quite
         | possible extortion already happened (and not necessarily by
         | China).
        
           | ballenf wrote:
           | I'd honestly believe such pressure came from US domestic
           | intelligence or law enforcement agencies just as easily as
           | from China.
        
             | adventured wrote:
             | It wouldn't specifically be from domestic intelligence, it
             | would be from a powerful member of Congress with a
             | relationship to Apple (specifically the board/management),
             | acting as a go-between that would try to politically lay
             | out the situation for them.
             | 
             | Hey Apple, we can either turn up the anti-trust heat by a
             | lot, or we can turn it down, which is it going to be?
             | Except it would be couched in intellectually dishonest
             | language meant to preserve a veneer that the US Government
             | isn't a violent, quasi-psychotic bully ready to bash your
             | face in if you don't do what they're asking.
             | 
             | The interactions with intelligence about the new program
             | would begin after they acquiesce to going along.
             | 
             | It's too easy. There's an extraordinary amount of wilful
             | naivety in the US about the nature of the government and
             | its frequent power abuses (what it's willing to do),
             | despite the rather comically massive demonstration of said
             | abuses spanning the entire post WW2 era. Every time it
             | happens the wilfully naive crowd feigns surprise.
        
         | FabHK wrote:
         | Look, if the government tells Apple to do something, then Apple
         | can push back, but then has to do it or pull out of the
         | country. That's the way it was, and is.
         | 
          | Now, what has _actually_ changed? The two compelling push-
          | backs against a country's demands for more surveillance etc.
          | are:
         | 
         | a) it is not technically feasible (eg, wiretapping E2EE chats),
         | and
         | 
         | b) it is making the device insecure vis-a-vis hackers and
         | criminals (eg, putting in a backdoor)
         | 
         | The interesting question is: Have these defences against
         | government demands been weakened by this new technology? Maybe
         | they have, that would be my gut feeling. But it is not enough
         | to assert it, one must show it.
        
         | Xamayon wrote:
         | At this point, with all the easily producible collisions, the
         | Gov't could just modify some CSAM images to match the hash of
         | various leaked documents/etc they want to track. Then they
          | don't even have to go through special channels. Just submit
          | the modified image for inclusion normally! (Not quite that
          | simple,
         | as they would still need to find out about the matches, but
         | maybe that's where various NSA intercepts could help...)
        
           | Natsu wrote:
            | Not quite: a CSAM hash match triggers a second,
            | independent hash check within Apple to avoid false
            | positives, and then a human review. It
           | wouldn't be trivial for them to extract matches out of that,
           | and they'd only be able to track files they already know the
           | contents for.
           | 
           | I would think they could more easily just make your phone
           | carrier install a malware update on your phone, rather than
           | jumping through all of these hoops to get them access they
           | already have.
           | 
           | Plenty of data is leaking out of people's phones already as
           | can be seen from, e.g. the Parler hack.
        
             | Xamayon wrote:
             | I tried to address the issue with finding out about the
             | match at the end of my comment. I agree it's not exactly
             | practical without other serious work to intercept the
             | alerts, have 'spies' in the apple review process, etc. Much
             | easier ways would exist at that point, but it's somewhat
             | amusing (in a horrifying way) that some bad actor could in
              | theory use modified CSAM as a way to detect the likely
              | presence of non-CSAM content using generated collisions.
        
           | nodamage wrote:
           | All of the major tech companies already scan images uploaded
           | to their services so isn't this already theoretically
            | possible now? How is the situation changed by Apple using
            | on-device scanning instead of cloud scanning (considering
            | these images were going to be uploaded to iCloud anyway)?
        
         | godelski wrote:
         | Or why not "Hey Vietnam, Pakistan, Russia, etc put these hashes
         | into your database please and thanks." I mean the CCP has
         | allies that are also authoritarian. Why would they have to
         | threaten Apple directly? This is also how you get past the
         | Apple human verification. Just pay those Apple workers to click
         | confirm.
        
           | adventured wrote:
           | > Why would they have to threaten Apple directly?
           | 
           | They'd do it directly because it's expedient and useful. If
           | you're operating such a sprawling authoritarian regime, it's
           | important to occasionally make a show of your power and
           | control, lest anyone forget. The CCP isn't afraid of Apple,
           | Apple is afraid of the CCP. Lately the CCP has been on a
           | rather showy demonstration of its total control. If you're
           | them it's useful to remind Apple from time to time that
           | they're basically a guest in China and can be removed at any
           | time. You don't want them to forget, you want to be
           | confrontational with Apple at times, you want to see their
           | acknowledged subservience; you're not looking to avoid that
           | power confrontation, the confrontation is part of the point.
           | 
           | And the threat generally isn't made, it's understood. The CCP
           | doesn't have to threaten in most cases; Apple will understand
           | ahead of time. What gets made initially is a dictate (do
           | this), not the threat. If something unusual happens, such as
           | with Didi's listing on the NYSE against Beijing's wishes
           | (whereas ByteDance did the opposite and kowtowed, pulling
           | their plans to IPO), then, given that Didi obviously
           | understood the confrontation risk ahead of time and tested
           | you anyway, you punish them. If that still isn't enough, you
           | take them apart (or in the case of Apple, remove them from
           | the country).
        
             | godelski wrote:
             | I'm just saying that there is another avenue. To be clear,
             | this isn't a "vs" situation. It means that they have
             | multiple avenues.
             | 
             | To also clarify, the avenue of extortion isn't open to
             | every country. But the avenue I presented is open as long
             | as that country has an ally. I'm not aware of any country
             | without an ally, so I presume this avenue is pretty much
             | open to any country.
        
         | batch12 wrote:
         | From the post yesterday discussing collisions, it doesn't seem
         | outside the realm of possibility to take an image of CSAM and
         | modify it until its hash matches some other targeted image.
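         | 
         | Roughly, the search might look like this (a toy sketch in
         | PyTorch; the differentiable "hash" here is a crude stand-in
         | made up for illustration, not the real NeuralHash network):
         | 
         |     import torch
         |     import torch.nn.functional as F
         | 
         |     def toy_hash_logits(img):
         |         # img: (1, 3, 64, 64) tensor in [0, 1]. Grayscale,
         |         # pool to 8x8, center; sign() of these 64 values
         |         # plays the role of a 64-bit perceptual hash.
         |         gray = img.mean(dim=1, keepdim=True)
         |         small = F.avg_pool2d(gray, kernel_size=8)
         |         return (small - small.mean()).flatten()
         | 
         |     def collide(source, target_bits, steps=500, lr=0.01):
         |         # Nudge `source` until its hash bits match target_bits.
         |         x = source.clone().requires_grad_(True)
         |         opt = torch.optim.Adam([x], lr=lr)
         |         for _ in range(steps):
         |             # Hinge loss pushes each logit past zero on the
         |             # side the target bit requires.
         |             loss = F.relu(0.05 - target_bits *
         |                           toy_hash_logits(x)).sum()
         |             opt.zero_grad(); loss.backward(); opt.step()
         |             with torch.no_grad():
         |                 x.clamp_(0, 1)  # stay a valid image
         |         return x.detach()
         | 
         |     src, tgt = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
         |     bits = torch.sign(toy_hash_logits(tgt)).detach()
         |     adv = collide(src, bits)
         |     print(torch.equal(torch.sign(toy_hash_logits(adv)), bits))
         | 
         | Against the real network the same idea applies, just with a
         | perturbation budget so the result still looks like the
         | original to a human reviewer.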
        
         | JohnJamesRambo wrote:
         | The Tank Man image from Tiananmen Square comes to mind.
        
         | fossuser wrote:
         | The CCP already runs iCloud themselves in-country, so this is
         | a bit irrelevant. (Though I think this kind of capitulation to
         | authoritarian countries is wrong, personally:
         | https://zalberico.com/essay/2020/06/13/zoom-in-china.html)
         | 
         | This policy really needs to be compared to the status quo
         | (unencrypted on-cloud image scanning). When you compare in-
         | transit, client-side hash checks that allow for on-cloud
         | encryption and only occur when using the cloud, it's hard to
         | see an argument against it that makes sense given the current
         | setup.
         | 
         | The abuse risk is no higher than the status quo and enabling
         | encryption is a net win.
        
           | clnhlzmn wrote:
           | Saying this is no worse than the status quo isn't a good
           | argument. The status quo is the problem.
        
         | pbhjpbhj wrote:
         | I think one of the other stories on this talked about
         | "watermarking" in order to create a hash collision. So it need
         | not be a non-CSAM image: a TLA could just alter an image to
         | make it collide with a file they want to track, other countries
         | would agree that file's hash should be in the hash list, and
         | bingo: Apple presumably provides the TLA with a list of devices
         | holding that file?
        
           | duskwuff wrote:
           | Except that there's a threshold involved. A single matching
           | file doesn't trigger an investigation; it takes multiple
           | (10+, maybe more?) matches to do that.
        
             | kemayo wrote:
             | In the interview Craig Federighi gave on Friday, he said
             | 30.[1] I have no idea if he was just throwing a number out
             | or if that's the actual threshold.
             | 
             | [1]: https://tidbits.com/2021/08/13/new-csam-detection-
             | details-em...
        
               | duskwuff wrote:
               | Either way, it's high enough that adding a single file to
               | the set wouldn't be a useful way of finding people who
               | have that file. One attack I can imagine would be to add
               | a whole set of closely related photos (e.g. images posted
               | online by a known dissident) to identify the person who
               | took them, and even that would be a stretch.
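                | 
                | For scale, a back-of-envelope check (my own illustrative
                | numbers, not Apple's): even if each photo somehow had a
                | one-in-a-million chance of a false match, a 100,000-photo
                | library would essentially never cross a 30-match
                | threshold:
                | 
                |     from scipy.stats import binom
                | 
                |     p_photo = 1e-6      # assumed per-photo false-match rate
                |     n_photos = 100_000  # assumed library size
                |     # P(30 or more matches) -- astronomically small
                |     print(binom.sf(29, n_photos, p_photo))
                | 
                | A targeted attack therefore needs many planted hashes,
                | not one, which is exactly why a set of related photos is
                | the more plausible vector.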
        
         | slg wrote:
         | I would expect Apple to have said the same thing if the CCP had
         | proposed a system of scanning devices last month. I fail to see
         | how this system changes the calculus for how Apple will deal
         | with authoritarian governments.
         | 
         | If Apple could stand up to them before this system, why can't
         | they stand up to them with this system?
        
           | foerbert wrote:
           | The difference is the ease with which they can demur. Before,
           | it would be a whole heck of a lot of new, additional work.
           | They would also have the problem of actually introducing it
           | without being noticed, or having to come up with some cover
           | for the new behavior.
           | 
           | Now? Well now it's real simple. It will even conveniently not
           | expose the actual images it's checking for. Apple now has
           | significantly less ability to rationally reject the request
           | based on the effort and difficulty of the matter.
           | 
           | Even Apple's own reasons to reject the request have
           | decreased. It would have legitimately cost them more to
           | fulfill this request before, even if China did want to play
           | hardball. Now, they have greater incentive to go along with
           | it.
        
             | mrtranscendence wrote:
             | > The difference is the ease with which they can demur.
             | 
             | If Apple can be cowed by China into adding fake CSAM hashes
             | by threat of banning iPhone sales, they could be cowed to
             | surveil Chinese citizens in the search for subversive
             | material. It's no skin off China's back if it's harder for
             | Apple -- they'll either make the demand or they won't. This
             | changes basically nothing.
        
               | foerbert wrote:
                | I think that's too simplistic a view.
               | 
               | It's kinda true, but ignores how humans really work.
               | Apple will be pushed around to a degree, but there will
               | be limits. The harder the ask now the less China can ask
               | later. And the more Apple can protest about the
               | difficulty and impossibility and other consequences they
               | will face, the more likely China is to back off.
               | 
               | Both sides want to have their cake and eat it too, and
               | will compromise to make it basically work. But if China
               | makes demands so excessive they get Apple to cut ties,
               | China loses. Apple has the money, demand, customer
               | loyalty, and clout to make things real uncomfortable.
               | Apple would have to pay a hefty price, but if any company
               | can do it... it's them.
               | 
               | So I don't think it's fair to say that no matter what
               | China will just demand whatever whims strike it each day
               | and everybody will play ball or gtfo. That just isn't how
               | shit works.
        
               | mthoms wrote:
                | Apple does have _some_ degree of leverage over the CCP
                | too. I realize it's not possible today... but in 3-5
                | years, Apple may be in a position to move some/all of
                | their manufacturing elsewhere.
               | 
               | The direct job losses are one obvious problem for the CCP
               | but a company like Apple saying "We're moving production
               | to Taiwan/Vietnam/US because of security risks in China"
               | would be catastrophic for the (tech) manufacturing
               | industry as a whole in China. No sane Western based CEO
               | will want to be seen taking that security gamble.
               | 
               | Do I think Apple would do that and forgo the massive
               | Chinese smartphone market? That's another story.
        
             | slg wrote:
             | As far as I'm aware, this system is not new. It is only
             | moving from the cloud to the local device. If the cloud was
             | already compromised, which it seems like it would be in
             | your logic since all the same reasoning applies, I don't
             | understand the complaints about it moving locally.
             | 
             | In my mind there are two possible ways to view this.
             | 
             | We could trust Apple last month and we can trust them
             | today.
             | 
             | We couldn't trust Apple last month and we can't trust them
             | today.
             | 
             | I don't understand the mindset that we could trust Apple
             | last month and we can't trust them today.
        
               | colpabar wrote:
               | > _It is only moving from the cloud to the local device._
               | 
               | But isn't that exactly why this is such a big deal? It
               | sets a precedent that it's ok that devices are scanning
               | your local device for digital contraband. Sure, right now
               | it's only for photos that are going to be uploaded to
               | iCloud anyway. But how long before it scans everything,
               | and there's no way to opt out?
               | 
                | I don't see this so much as a question of Apple's
                | trustworthiness; I see it as a major milestone in the
                | losing battle for digital privacy. I don't think it will
                | be long before these systems go from "protecting children
                | from abuse" to total government surveillance, and it's
                | particularly egregious that it's being done by Apple,
                | given their previous "commitment to privacy".
        
               | slg wrote:
               | >But how long before it scans everything, and there's no
               | way to opt out?
               | 
               | Do we think this is detectable? If yes, then why worry
               | about it if we will know when this switch is made? If
               | not, why did we trust Apple that this wasn't happening
               | already?
               | 
                | That is the primary thing I don't understand: this fear
                | rests on an assumption that Apple is a combination of
                | both honest and corrupted. If they are honest, we have no
                | reason to distrust what they are saying about how this
                | system functions or will function in the future. If they
                | are corrupted, why tell us about this at all?
        
               | stjohnswarts wrote:
               | You're viewing trust in apple as a binary choice. It is
               | not. Trust is a spectrum like most things. You need to
               | get away from that digital thinking. It's the whole
               | reason we have to challenge government and be suspicious
               | of it. It's the same with companies.
        
               | slg wrote:
               | I view trust more as a collection of binary choices than
                | one single spectrum. Do I trust Apple to do X? There are
               | only two possible answers to that (or I guess three if we
               | include "I don't know"). If the answer isn't binary, then
               | X is too big.
               | 
               | In this instance the specific question is "Do I trust
               | Apple to be honest about when they scan our files?". I
               | don't know why this news would change the answer to that
               | question.
        
               | sigmar wrote:
               | Are we going to need to reverse engineer every single
                | Apple update to make sure the feature hasn't crept into
               | non-iCloud uses? Is the inevitable Samsung version of
               | this system going to be as privacy-preserving? How are we
               | sure the hash list isn't tainted? All of these problems
               | are solved by one principle: Don't put the backdoor code
               | in the OS to begin with.
        
               | slg wrote:
               | >Are we going to need to reverse engineer every single
               | Apple update to make sure the feature hasn't creeped into
               | non-iCloud uses?
               | 
               | Did we do this already? If not, why did we assume this
               | didn't exist previously?
               | 
               | >Is the inevitable Samsung version of this system going
               | to be as privacy-preserving?
               | 
               | I don't see how this is Apple's fault. Be angry at
               | Samsung for their system. I don't blame Snap if I have a
               | problem with Instagram Stories.
               | 
               | >How are we sure the hash list isn't tainted?
               | 
               | How did we know the hash list wasn't tainted when doing
               | this comparison on Apple's servers?
               | 
               | >All of these problems are solved by one principle: Don't
               | put the backdoor code in the OS to begin with.
               | 
               | Apple controls the hardware, software, and cloud
               | services. The backdoor didn't need to exist in the OS for
               | there to be a backdoor.
        
               | sharken wrote:
               | At this point it will be the OS without on-device
               | scanning that wins for me.
               | 
               | It's amazing that Apple is still thinking about moving
               | this idea forward, but they must think they are
               | untouchable and can do what they want.
               | 
               | And it adds insult to injury that two very different
               | images can generate the same NeuralHash value.
        
               | colpabar wrote:
               | It feels like you're viewing this as a purely
               | hypothetical question and ignoring reality. No company is
               | 100% good or bad, and it doesn't make any sense to force
               | all possible interpretations into good/bad.
               | 
               | > _If not, why did we trust Apple that this wasn 't
               | happening already?_
               | 
               | I do not trust Apple. I don't really trust any major tech
               | company, because they put profit first, and everything
               | else comes second. I believe that a company as large as
               | apple is already colluding with government(s) to surveil
               | people, because if a money-making organization is offered
               | a government contract that involves handing over already
               | collected data, and it was kept secret by everyone
               | involved, why would they refuse? I know that's very
               | cynical, but I can't see it any other way.
               | 
               | But that's beside the point, which is that what apple is
               | doing is paving the way for normalization of mass
               | government surveillance on devices that we're supposed to
               | own.
               | 
               | > _If they were corrupted, why tell us about this at
               | all?_
               | 
               | So that we can all get used to it, and not make a big
               | fuss when google announces android will do the same
               | thing. It's much easier to do things without needing to
               | keep them a secret. This is in no way only about apple,
               | they're just breaking the ice so to speak.
        
               | slg wrote:
               | >I do not trust Apple. I don't really trust any major
               | tech company, because they put profit first, and
               | everything else comes second.
               | 
                | Then you should never have been using a closed system
                | like Apple's, in which they control every aspect. That is
                | my fundamental point. I'm not saying you should trust
                | Apple. I am saying this shouldn't have changed your
                | opinion of Apple.
               | 
               | >So that we can all get used to it, and not make a big
               | fuss when google announces android will do the same
               | thing. It's much easier to do things without needing to
               | keep them a secret. This is in no way only about apple,
               | they're just breaking the ice so to speak.
               | 
               | I just need more evidence before I believe a global
               | conspiracy that requires the coordination between both
               | adversarial governments and direct business competitors.
        
               | kook_throwaway wrote:
               | >It is only moving from the cloud to the local device.
               | 
               | That's the point. Yesterday someone posted a fantastic
               | TSA metaphor where they are doing the same scans and
               | patdowns but with agents permanently stationed in the
               | privacy of your home where they pinkie promise it will
               | only be before a trip to the airport and only checking
               | the bags you will be flying with.
        
               | kevin_thibedeau wrote:
               | They are crossing a property boundary:
               | 
               | You know food poisoning is dangerous and you'll be safer
               | with a food taster to make sure nothing you eat is
               | spoiled. I'll just help myself to your domicile and eat
               | your food to make sure it's all safe. I already made a
               | copy of your keys to let myself in. It's for your own
               | good.
        
           | vorpalhex wrote:
           | Err, Apple allows the CCP to scan iCloud already, and did so
           | willingly and actively.
        
           | tines wrote:
           | Tin-foil hat time: Who's to say that they could stand up to
           | them before this system? The system itself could have been
           | proposed by the CCP in the first place. I'll take my hat off
           | now.
        
             | [deleted]
        
           | cmelbye wrote:
           | Apple doesn't stand up to authoritarian governments. They
           | give the US and Chinese governments direct access to their
           | systems already.
        
             | 99mans wrote:
             | This. We've known Apple is backdoored since at least 2013
             | with Snowden revelations. Why are people still debating
             | this nearly a decade later? Apple products ARE NOT PRIVATE.
        
         | orhmeh09 wrote:
         | Could you provide specific evidence that China has done and
         | would do this? I have a hard time recalling any specific cases.
         | Maybe nation-states do this kind of thing, but I'm only aware
         | of the countless times the United States has done this. What's
         | the recent history?
        
           | [deleted]
        
           | waz0wski wrote:
           | https://www.reuters.com/article/us-china-apple-icloud-
           | insigh...
           | 
           | https://support.apple.com/en-us/HT208351
        
             | orhmeh09 wrote:
             | Where do you think your iCloud keys are stored?
             | Switzerland?
        
             | dotcommand wrote:
             | > https://www.reuters.com/article/us-china-apple-icloud-
             | insigh...
             | 
             | Apple has to store data of Chinese citizens in China? And
             | Apple has to adhere to Chinese laws in China? How insane.
             | 
             | It's crazy how deluded the "CCP crowd" is. Apparently, the
             | "CCP crowd" thinks companies are allowed to do business in
             | another country and not abide by its laws.
             | 
             | Are you going to go insane since the EU requires tech
             | companies to store EU citizens' data within the EU?
             | 
             | https://blogs.microsoft.com/eupolicy/2021/05/06/eu-data-
             | boun...
             | 
             | You would think most countries would demand their citizens
             | data be stored locally.
        
           | willis936 wrote:
           | Do you have access to this webpage?
           | 
           | https://en.wikipedia.org/wiki/1989_Tiananmen_Square_protests
        
             | orhmeh09 wrote:
             | What does this have to do with forcing corporations' hands?
        
               | willis936 wrote:
               | If you have access: scroll to the "Contemporary Issues"
               | section. Further reading linked in that section:
               | 
               | https://en.wikipedia.org/wiki/Censorship_in_China
               | 
               | https://en.wikipedia.org/wiki/Overseas_censorship_of_Chin
               | ese...
               | 
               | https://en.wikipedia.org/wiki/Internet_censorship_in_the_
               | Peo...
        
           | tyingq wrote:
           | That China would use heavy handed tactics to coerce a US tech
           | company?
           | 
           | Google for "Operation Aurora"
           | 
           | Also see: https://www.reuters.com/article/us-china-apple-
           | icloud-insigh... for a very similar situation to the one
           | described up-thread.
        
             | orhmeh09 wrote:
             | Assuming you are American -- where do you think your iCloud
             | keys are stored? You do know Apple cooperates with US LE
             | and intelligence? This is a nothing hamburger.
        
               | tyingq wrote:
               | I concede that there are overlapping issues there. But if
               | you're saying there aren't places China goes with this
               | sort of info that's different from the US, I don't think
               | any debate would change your mind.
        
               | orhmeh09 wrote:
                | So far I have no specific reason to think China goes
                | places as deeply consequential and chilling as the US
                | does. What's Assange up to these days?
        
               | voltaireodactyl wrote:
               | > So far I have no specific reason to think China goes
               | places that are as deeply consequential and chilling than
               | the US.
               | 
                | I would be intrigued to hear the Uyghurs' thoughts on
                | the matter, as well as those of Assange.
        
         | tablespoon wrote:
         | >> it's not obvious how we can trust that a rogue actor (like a
         | foreign government) couldn't add non-CSAM hashes to the list to
         | root out human rights advocates or political rivals. Apple has
         | tried to mitigate this by requiring two countries to agree to
         | add a file to the list, but the process for this seems opaque
         | and ripe for abuse.
         | 
         | > If the CCP says "put these hashes in your database or we will
         | halt all iPhone sales in China", what do you think Apple is
         | going to do? Is anyone so naive that they believe the CCP
         | wouldn't deliver such an ultimatum? Apple's position seems to
         | completely ignore recent Chinese history.
         | 
         | Apple policy << Local laws and regulations. It's very hard to
         | believe their policy is anything less than a deliberate PR
         | smokescreen meant to disarm critics, because it has so many
         | holes.
         | 
         | Edit: just thought of another way Apple's policy could be
         | _easily_ circumvented and therefore cannot be regarded as a
         | serious proposal: get two countries to collaborate to add
         | politically sensitive hashes to the list (e.g. China and North
         | Korea, or China and Cambodia). That doesn't even require Apple
         | to be coerced.
        
         | pharrington wrote:
         | Apple's CSAM detection has nothing to do with this. The vector
         | for an authoritarian government getting blacklists into tech is
         | that government telling the tech vendor "ban this content. We
         | don't care how."
        
         | yongjik wrote:
         | These scenarios sound rather like "the wrong side of airlock"
         | stories[1]. Why would China go through an elaborate scheme with
         | fake child-porn hashes, when it can already arrest these people
         | on made-up charges, and simply tell Apple to provide the
         | private key for their phones, so that they can read and insert
         | whatever real/fake evidence they want?
         | 
         | [1] I'm stealing the expression from this excellent article:
         | https://devblogs.microsoft.com/oldnewthing/20060508-22/?p=31...
        
           | thereddaikon wrote:
           | They wouldn't. They would force Apple to add hashes for
           | things that the CCP doesn't like, such as Winnie the Pooh
           | memes, and turn Apple's reporting system into yet another
           | tool to locate dissidents. How would Apple know any
           | different? "Here are some hashes, they're CSAM, trust us."
           | They built a framework where they will call the cops on you
           | for matching a hash value. Once governments start adding
           | values to the database, Apple has no reasonable way of
           | knowing what images those actually relate to. Apple
           | themselves said they designed it so you couldn't derive the
           | original image from the hash. They are setting themselves up
           | to be an accessory to killing political dissidents.
        
           | stickfigure wrote:
           | Who said anything about fake porn hashes? China can just say
           | to Apple: _We want everyone whose phones contain XYZ
           | subversive content._
           | 
           | Don't like it? Don't sell phones. Apple will cave.
           | 
           | Note that you don't even have to arrest everyone. The fear is
           | enough to prevent thoughtcrime.
        
             | mrtranscendence wrote:
             | Agreed. If China can force Apple to do almost anything by
             | threatening to ban iPhone sales, why bother with fake CSAM
             | hashes? That just adds an extra step. It's not like the
             | Chinese government needs to take pains to trick anyone
             | about their attitude toward "subversive" material.
        
               | tshaddox wrote:
               | Exactly. Apple can already ship literally any conceivable
               | software to iPhones. Do people really think their plan
               | was to sneak functionality into this update and then
               | update the CSAM database later, and they would have
               | gotten away with it if it weren't for the brilliant
               | privacy advocates pointing out that this CSAM database
               | could be changed over time? That's pretty ludicrous. If
               | the Chinese government wanted to (and thought it had
               | sufficient leverage over Apple), they could literally
               | just tell Apple to issue a software update that streams
               | all desired private data to Chinese government servers.
        
               | giantrobot wrote:
               | > they could literally just tell Apple to issue a
               | software update that streams all desired private data to
               | Chinese government servers.
               | 
               | Uh...this already happened. [0][1]
               | 
               | [0] https://www.macrumors.com/2021/05/17/apple-security-
               | compromi...
               | 
               | [1] https://support.apple.com/en-us/HT208351
        
               | tshaddox wrote:
               | Not quite. Those are still ostensibly servers located in
               | China but not directly controlled by the government
               | _(edit: apparently the hosting company is owned by
                | Guizhou provincial government)_. But yes, this is
                | precisely my point. Any slippery slope argument about
                | this feature is equivalent to any conceivable slippery
                | slope argument about Apple software on iPhones. If
                | you're making one of these arguments, you're actually
                | just arguing against Apple having the ability to issue
                | software updates to iPhones (and by all means, make that
                | argument!).
        
               | giantrobot wrote:
               | China's laws are such that there's no need for them to
               | obtain a warrant for data housed on servers of Chinese
               | companies. Not only do they not need a warrant but
               | companies are _required_ to facilitate their access.
                | While the servers aren't _controlled_ by the Chinese
               | government, government law enforcement and intelligence
               | agencies have essentially free access to that data.
        
               | AussieWog93 wrote:
               | > ostensibly servers located in China but not directly
               | controlled by the government
               | 
               | "ostensibly" is the key word there. If the datacenter is
               | physically located in China, then there's a CCP official
               | on the board of the company that controls it.
        
               | noptd wrote:
                | So your argument boils down to this: since Apple can
                | already install software without us knowing, we shouldn't
                | worry about a new client-side system that makes it
                | substantially easier for nation states to abuse? I don't
                | find that argument the least bit compelling.
        
               | tshaddox wrote:
               | I'm not saying that we shouldn't be concerned with Apple
               | actually launching things that are bad. I'm saying we
               | shouldn't make arguments of the form "this isn't bad yet,
               | but they could change this later to make it bad." Because
               | obviously they can change anything later to be bad. If
               | the system as currently described is a violation of
               | privacy, or can be abused by governments, etc. then just
               | make that argument.
        
               | sangnoir wrote:
               | > why bother with fake CSAM hashes?
               | 
               | Because Apple has already built that functionality, and
               | it exists? What alternative dragnet currently exists to
               | identify iOS users who possess certain images? This would
               | be code reuse.
        
               | nodamage wrote:
               | This situation reminds me a lot of the controversy around
               | Google's changes to code signing for Play Store apps.
               | (https://news.ycombinator.com/item?id=27176690)
               | 
               | In both cases people are stretching to come up with
               | hypothetical scenarios about how these systems could be
               | abused by a government ("they could force Apple to insert
               | non-CSAM hashes into their database" or "they could force
               | Google to insert a backdoor into your app") while
               | completely ignoring the elephant in the room: if a
               | government wanted to do these things, _they already have
               | the capability to do so_.
               | 
               | If your concern is that a government might force Apple or
               | Google to do X or pull product sales in their country,
               | whether Apple performs on-device CSAM scanning vs
               | scanning it on their servers, or whether Google signs
               | your app vs you signing it doesn't materially change
               | anything about that concern.
        
           | ixfo wrote:
           | Just so you know, the original source of the expression would
           | be The Hitchhiker's Guide to the Galaxy.
           | 
           | VOGON GUARD: I'll mention what you said to my aunt.
           | 
           | [Airlock door closes and locks]
           | 
           | FORD: Potentially bright lad, I thought.
           | 
           | ARTHUR: We're trapped now, aren't we?
           | 
           | FORD: Er... Yes, we're trapped.
           | 
           | ARTHUR: Well didn't you think of anything?
           | 
           | FORD: Oh Yes.
           | 
           | ARTHUR: Yes?
           | 
           | FORD: But, unfortunately, it rather involved being on the
           | other side of the airtight hatchway they've just sealed
           | behind us.
        
             | IncRnd wrote:
             | Maybe, but it probably stretches farther back than that,
             | maybe even to before sliced bread or cool beans. Ten years
             | before The Hitchhiker's Guide there was a computer, HAL,
             | who wouldn't open the airlock for a particular astronaut.
        
           | giantrobot wrote:
           | China or any government adding collisions would be to use
           | Apple's system as a dragnet to find users possessing the
           | offending images.
           | 
           | The way it would work is the government in question would
           | submit legitimate CSAM but modified to produce a collision
           | with a government target image. Looking at the raw image (or
           | a derivative) a reviewer at Apple or ICMEC would see a CSAM
           | image. The algorithm would see the anti-government image. So
           | Apple scans Chinese (or whoever) citizens libraries, finds
           | "CSAM" and reports them to ICMEC which then reports them to
           | the government in question.
           | 
           | Every repressive government and some notionally liberal
           | governments _will_ eventually do this. It is likely already
           | happening with existing PhotoDNA systems. The difference is
           | that PhotoDNA is used by explicit sharing services, whereas
           | Apple's new system will search any photo in a user's library
           | regardless of whether it is explicitly "shared".
        
             | vmladenov wrote:
             | > So Apple scans Chinese (or whoever) citizens libraries,
             | finds "CSAM" and reports them to ICMEC which then reports
             | them to the government in question.
             | 
             | If Apple finds that a particular hash is notorious for
             | false positives, they can reject it / ask for a better one.
             | And they're not scanning your library; it's a filter on
             | upload to iCloud. The FUD surrounding this is getting
             | ridiculous.
        
               | runawaybottle wrote:
               | Look, I said it in another post, it is not Apple's job to
               | act as an arm of law enforcement. The same way it is not
               | either of our jobs to be vigilante sheriffs and police
               | the streets.
               | 
               | We're talking about a company that makes phones and
               | computers, and sells music and tv shows via the internet.
               | Does that matter at all?
               | 
                | How about this: all car manufacturers must now
                | wirelessly report, in real time, whenever the driver of
                | the car is speeding. How about that?
               | 
               | Let's just go all out and embed law enforcement into all
               | private companies.
               | 
               | This is fascism, the merging of corporations and the
               | government.
        
               | vmladenov wrote:
               | It becomes their job when you ask them to host your
               | images on their property. If you don't, then there's
               | nothing happening.
        
           | axelf4 wrote:
           | That is not a good comparison. The extra hashes would help
           | China identify citizens of interest whom it otherwise would
           | not have found.
        
             | stjohnswarts wrote:
             | Have we established that a US NGO is accepting "CSAM"
             | hashes from China, or that they are cooperating with them
             | at all? That seems unlikely, and Apple hasn't yet announced
             | plans for how they're going to scan phones in China. I
             | mean, wouldn't China just demand outright to have full
             | scanning capabilities over anything on the phone, since you
             | don't have any protection at all from that in China?
        
               | noptd wrote:
               | Sure, but Apple receives far less backlash if the system
               | is applied to all phones and under the guise of "save the
               | children". This would allow Apple to accommodate any
               | nation state's image scanning requirements, which
               | guarantees their continued operation in said markets.
        
               | tablespoon wrote:
               | > Have we established that a US NGO is accepting "CSAM"
               | hashes from China or that they are cooperating with them
               | at all?
               | 
               | I believe Apple's intention is to accept hashes from all
               | governments, not just one US organization. One of their
               | ineffectual concessions to the criticism was to require
                | that two governments provide the same hash before they'd
               | using it.
        
               | avereveard wrote:
               | China can definitely find a state government requiring
               | some cash injection to help push the hash of a certain
                | uninteresting square where nothing happened into the db.
        
               | giantrobot wrote:
               | The main announcement was Apple was getting hashes from
               | NCMEC but they also listed ICMEC and have said "and other
               | groups". Much like the source database for the image
               | hashes the list of sources is opaque and covered by vague
               | statements.
        
           | burkaman wrote:
           | Because they don't know who to arrest yet. The idea isn't to
           | fabricate a charge, it's to locate people sharing politically
           | sensitive images that the government hasn't already
           | identified.
        
             | tablespoon wrote:
             | > Because they don't know who to arrest yet. The idea isn't
             | to fabricate a charge, it's to locate people sharing
             | politically sensitive images that the government hasn't
             | already identified.
             | 
             | And maybe even identify avenues for sharing that they
             | haven't already identified and monitored/controlled (e.g.
             | some encrypted chat app they haven't blocked yet).
        
               | nicce wrote:
               | China does not really need Apple to do much. They already
               | make installation of some apps mandatory by law. Also,
               | some communication must be done with WeChat and so on.
                | They have a pretty good grip already.
        
               | tablespoon wrote:
               | > They already make installation of some apps mandatory
               | by law. Also, some communication must be done with WeChat
               | and so on.
               | 
               | Can you give some examples of this on the iPhone?
               | 
               | Also, it seems like on the spying front [1] (at least
               | publicly, with Apple), they've preferred more "backdoor"
               | ways to get access over more overt ones, so this scanning
               | feature might encourage asks that they wouldn't have
               | otherwise made.
               | 
               | [1] this contrasts with the censorship front, where
               | they've been very overt
        
         | btilly wrote:
         | _If the CCP says "put these hashes in your database or we will
         | halt all iPhone sales in China", what do you think Apple is
         | going to do?_
         | 
         | I think that the real risk is, "put these hashes in your
         | database or we will stop all iPhone related manufacturing in
         | China".
        
           | vineyardmike wrote:
           | Both would be bad for Apple. Chinese sales are already
           | significant enough to their bottom line that they'd comply.
        
         | Y_Y wrote:
         | What about giving a censored version of the appropriate image?
         | Like put a big black rectangle covering whatever awful thing is
         | the subject, and just (e.g.) show some feet and hands and a
         | background.
         | 
         | Then you could provide a proof that an image which is the same
         | as the "censored" one, except for the masked part, has the
         | specified perceptual hash. I don't know if this is technically
         | feasible (but I'd be happy for someone knowledgeable to opine).
         | I also admit that there are secondary concerns, like the
         | possibility of recognising the background image, and this being
         | used by someone to identify the location, or tipping off a
         | criminal.
         | 
         | Probably it would only be appropriate to do this in the case of
         | someone being accused, and maybe then in a way where they
         | couldn't relay the information, since apparently they don't
         | want to make the hash database public.
         | 
         | Also, for the record, I'm spitballing here about infosec. This
         | isn't me volunteering to draw black boxes or be called by
         | anyone's defense.
        
         | jrockway wrote:
         | Why does the CCP even need to talk to Apple? They have a
         | database of CSAM, and can modify an image in this set to
         | collide with a special image they're looking for on people's
         | phones. They then share their new modified cache of CSAM with
         | other countries ("hey, China is helping us! that's great!") and
         | it gets added to Apple's database for the next iOS release,
         | because it looks to humans like CSAM. Only the CCP knows that
         | it collides with something special they're looking for.
         | 
         | Now that we know that collisions are not just easy -- but
         | happen by themselves with no human input (as evidenced by the
         | ImageNet collisions), we know this system can't work. Apple has
         | two remaining safeguards: a second set of hashes (they say),
         | and human reviewers. The human reviewers are likely trained to
         | prefer false positives when unsure, and so while a thorough
         | human review would clearly indicate "not CSAM" for the images
         | the malicious collisions match, it doesn't feel like much of a
         | safeguard to me. Plus, I assume the people reviewing CSAM for
         | the CCP will be in China, so they can be in on the whole
         | scheme. (In the US, we have slightly better checks and
         | balances. Eventually the image will be in front of a court and
         | a jury of your peers, and it's not illegal to have photos that
         | embarrass the CCP, so you'll personally be fine modulo the
         | stress of a criminal investigation. But that dissident in
         | China, probably not going to get a fair trial -- despite the
         | image not being CSAM, it's still illegal. And Apple handed it
         | directly to the government for you.)
         | 
         | I don't know, I just find this thing exceedingly worrying. When
         | faced with a government-level adversary, this doesn't sound
         | like a very good system. I think if we're okay with this, we
         | might as well mandate CCTV in everyone's home, so we can catch
         | real child abusers in the act.
        
         | captn3m0 wrote:
         | Related, the Indian Government (Telecom Department) bullied
         | Apple into building an iOS feature for reporting phone calls
         | and SMS by threatening to stop iPhone sales in India.
         | 
         | Apple complied.
         | 
         | https://indianexpress.com/article/technology/mobile-tabs/app...
        
           | Peaches4Rent wrote:
           | Misinformation. That's for reporting spam.
           | 
           | It's a feature that could be done by anyone using SMS anyway.
           | 
           | I guess Apple had to add an easy way.
        
           | c7DJTLrn wrote:
           | I'm so done. I'm sorry to dump a pointless rant like this on
           | HN but... what the hell is going on these days? Nobody
           | seriously seems to care about legitimate privacy concerns
           | anymore. If I were in a position of power, like being CEO,
           | CTO, or even just an engineer on the team at Apple that
           | implemented this, I'd do EVERYTHING to make sure that my
           | power is in check and that I'm not pushing a fundamentally
           | harmful technology.
           | 
           | I just feel so lost and powerless these days, I don't know
           | how much longer I can go on when every piece of technology I
           | own is working against me - tools designed to serve a ruling
           | class instead of the consumer. I don't like it one bit.
        
             | nescioquid wrote:
             | > I don't know how much longer I can go on when every piece
             | of technology I own is working against me - tools designed
             | to serve a ruling class instead of the consumer.
             | 
             | This made me think of the Butlerian Jihad in Dune:
             | 
             | "Once men turned their thinking over to machines in the
             | hope that this would set them free. But that only permitted
             | other men with machines to enslave them."
             | 
             | I'd say the typical remedy that societies have adopted for
             | these sorts of things is legislation, though regulatory
             | capture[1] is an issue that blocks the way.
             | 
             | [1] https://en.wikipedia.org/wiki/Regulatory_capture
        
             | bitL wrote:
             | Buy a Sony Xperia 10 II, then buy an official SailfishOS
             | package from shop.jolla.com and flash it over Android 11 -
             | enjoy a polished mobile OS without snitches, which you
             | fully control, a real pocket computer instead of a future
             | pocket policeman.
        
             | rsj_hn wrote:
             | What is going on is that reality is slapping some techno-
             | utopians in the face and they are shocked, shocked, that
             | governments are more powerful than businesses.
             | 
             | That's not at all what the lefty geeks learned by reading
             | Chomsky or what the righty geeks learned by reading
             | Heinlein.
             | 
             | All along these people thought algorithms and protocols
             | (e.g. bitcoin and TCP/IP) would somehow be a powerful force
             | that would cause governments to fall on their knees and let
             | people evade government control. After all, it's
             | distributed! You can't stop it!
             | 
             | Well, that was all very foolish, because they mistook
             | government _uninterest_ in something for the equivalent of
             | government being powerless to control it, and when
             | governments did start taking an interest in something, it
             | turns out that protocols and algorithms are no defense
             | against the realities of political power. It is to the
             | field of politics, and not the field of technology, that
             | one must turn in order to increase collective freedoms.
             | Individual freedom can be increased by obtaining money or
             | making lots of friends, but collective freedom cannot be
             | increased this way, it can only be increased by organizing
             | and influencing government.
        
               | JunkDNA wrote:
               | Bingo. I wish I could upvote this comment more. All the
               | geeks get distracted by words like "cloud" or "virtual"
               | and forget that all this stuff we depend on has a
               | physical presence at some point in the real world. That
               | physical presence necessitates humans interacting with
               | other humans. Humans interacting with humans falls
               | squarely in the "things governments poke their noses
               | into" bucket. It's like the early days of Napster when
               | people were all hot for "peer to peer", as if that tech
               | was some magic that was going to make record labels and
               | governments throw up their hands over copyrights.
        
               | hnechochamber wrote:
                | Don't worry, I'll fix all this by creating a unique
                | JavaScript framework that will change the world.
        
               | danudey wrote:
               | Maybe we could make this framework future-proof by using
               | blockchains? Somehow? Maybe it can use blockchains, or it
               | can be stored on a blockchain, or maybe both at the same
               | time. Surely that will help society in some nonspecific,
               | ambiguous manner.
        
               | dweekly wrote:
                | (Gestures at the community of people who smugly use the
                | word "fiat" as synonymous with "obsolete".)
               | 
               | Not sure everyone has gotten the memo yet.
        
               | AnthonyMouse wrote:
               | > All along these people thought algorithms and protocols
               | (e.g. bitcoin and TCP/IP) would somehow be a powerful
               | force that would cause governments to fall on their knees
               | and let people evade government control. After all, it's
               | distributed! You can't stop it!
               | 
                | But that's the underlying problem here. Apple _isn't_ a
               | standardized protocol or a distributed system. It's a
               | monolithic chokepoint.
               | 
               | You can't do this with a PC. Dell and HP don't retain the
               | ability to push software to hardware they don't own after
               | they've already sold it and against the will of the
               | person who does own it.
               | 
               | People pointed out that this would happen. Now it's
               | happening. Que sorpresa.
        
               | danudey wrote:
               | Dell ships laptops with tons of Dell software, as well as
               | tons of third-party software. Do you really think that,
               | if they wanted to, they couldn't just update one of those
               | pieces of software to enable remote installs?
               | 
               | Hell, Dell has shipped more than one bug that allowed
               | attackers administrator-level access or worse, I wouldn't
               | put it past them to come up with some kind of asinine
               | feature that not only lets them push new
               | software/drivers/whatever to the machine, but lets
               | attackers do so as well.
        
               | danudey wrote:
               | > All along these people thought algorithms and protocols
               | (e.g. bitcoin and TCP/IP) would somehow be a powerful
               | force that would cause governments to fall on their knees
               | and let people evade government control. After all, it's
               | distributed! You can't stop it!
               | 
               | The internet and its design and associated protocols were
               | designed to work around external forces - a nuclear
               | attack or natural disaster. It was never designed to be
               | government-proof. People who thought that would be the
               | case were being idealistic and naive.
               | 
               | If you want real change in the world, as you said, you
               | have to affect the political world, which is an option
               | available to any citizen or corporation who can spend
               | millions on lobbyists.
        
             | andrei_says_ wrote:
             | My only conclusion is that Apple is not arguing in good
             | faith. That CSAM prevention is an excuse.
             | 
             | This makes me very sad.
        
             | gsibble wrote:
             | The feeling is mutual. The devices we own are now being
             | used to actively police us.
        
               | wobbly_bush wrote:
               | The parent comment might have misunderstood what the
               | government asked for. It asked for a feature to "report
               | spam" by end users.
        
               | c7DJTLrn wrote:
               | Maybe I did. But what difference does it make? There's
               | plenty of other instances where Apple has reportedly been
               | bullied into action or inaction (being dissuaded from
               | implementing E2EE for iCloud is one example). I've really
               | just reached a breaking point and I'm sorry if logic does
               | not apply.
        
             | domador wrote:
             | There are many of us that DO care! Unfortunately, even
             | though we are many, we are still a small minority among the
             | general population, or probably even among software
             | developers.
             | 
             | Convenience and fashion tend to trump security and
             | principles for most people. (Oftentimes, I'm one of those
             | people as well, though I try not to be. It's exhausting to
             | be an activist 100% of the time. But let's keep at it!)
        
             | t0mas88 wrote:
             | I'm as surprised as you are that a giant like Apple doesn't
             | just tell them "go ahead, ban iPhones, see how popular
             | they'll become" to someone as powerless as the government
             | of India. It would be a huge free publicity campaign for
             | them in the rest of the world while the public in India
             | would either put pressure on their government or buy
             | iPhones via import websites.
             | 
             | For additional fun, strike a deal with the #2 non
             | government owned carrier in whichever country you do this
             | to. Offer the iPhone at a special rate for a few months.
             | They would kill the government telco while selling record
             | numbers of phones with free publicity. And at the same time
             | scare any other government into not trying this kind of
             | stunt with Apple ever again.
        
               | FridayoLeary wrote:
               | Too political. It would just scare consumers away.
        
               | FabHK wrote:
               | How much fun would that be for customers if India then
               | decided to confiscate every iPhone it encounters within
               | India (maybe excepting tourists, but maybe not)?
        
               | gremloni wrote:
               | "Powerless" is not how I would refer to the Indian
               | government.
        
             | [deleted]
        
           | markdown wrote:
           | > an iOS feature for reporting phone calls and SMS
           | 
           | Why doesn't the US govt follow India's govt? I've read that
           | Americans can receive up to 4 unsolicited calls a day.
        
             | bsenftner wrote:
              | I get about 10 every single fucking day, super annoying,
              | and they are spoofing numbers too: I get calls that appear
              | to come from hotels and restaurants in my address book,
              | yet aren't actually coming from them; I hear a series of
              | clicks and then someone asks me about my auto insurance...
              | The moment I hear clicks now, I just hang up, if I answer
              | at all. I am ready to simply give up phones entirely. It's
              | a complete fucking failure by the telecoms; their entire
              | industry is a consumer failure.
        
           | runawaybottle wrote:
           | I think few people are making the appropriate parallel. What
           | we're looking at is not necessarily government overreach, but
           | fascism.
           | 
           | When the hell did it become Apple's job to do this? Apple is
           | not a branch of law enforcement. The government needs
           | warrants for stuff like this. We are merging corporate and
           | government interests here. Repeat after me, Apple is not
           | supposed to be a branch of law enforcement.
           | 
           | It also says a lot about us, that we are beholden to a
           | product. We have to ditch these products.
        
             | mLuby wrote:
             | > When the hell did it become Apple's job to do this?
             | 
             | Apple provided a pathway, however unintentionally, to
             | greater power. And those in power used their existing
             | authority to gather even more for themselves, as they
             | _always_ do.
             | 
             | Like drops flow into streams into rivers into oceans, power
             | aggregates at the top until regime change spills it back to
             | the ground.
        
           | kburman wrote:
            | I'm amazed at how much misinformation is being spread. The
            | feature we are talking about here is for reporting spam
            | numbers, and reports are initiated by the user, not sent
            | automatically. This is already widely available on Android.
            | 
            | Correct me if I'm wrong, but this feature requires
            | installing an app from the App Store.
        
           | danudey wrote:
           | They added a feature which is off by default and allows a
           | user to select a supported installed app to use as a spam
           | reporting app.
           | 
           | IMHO this is great, I wish more countries would enable this
           | feature. Something like 95% of my phone calls are spam, to
           | the point where I just don't answer the phone anymore unless
            | they're in my contacts list. Users being able to actually
            | report them as spam might finally result in this BS
            | stopping.
        
           | [deleted]
        
         | tshaddox wrote:
         | If the CCP says "put this arbitrary software into your next
         | iPhone software update or we will halt all iPhone sales in
         | China," what do you think Apple is going to do? Isn't the
         | answer to both questions the same?
        
           | fsflover wrote:
            | The answers are probably the same. What's different is how
            | hard it is to discover.
        
             | tshaddox wrote:
             | It apparently wasn't hard to "discover" the fact that this
             | CSAM database can and will change over time. In fact, Apple
             | explained this in detail as well as how they are attempting
             | to avoid the problem of governments abusing the system. Are
             | you suggesting that a different software update might be
             | even easier to discover?
        
               | [deleted]
        
           | noptd wrote:
           | So we should simply accept systems with a high potential for
           | abuse because of the possibility that something bad is
           | already being done?
        
             | tshaddox wrote:
             | What has more potential for abuse than the fact that Apple
             | can push any software they want to iPhones at any time?
        
               | drdeca wrote:
                | Can they force-install updates? I think it's been over
                | a year since I updated my iPhone's OS.
        
               | tshaddox wrote:
                | Well, no, I don't think they have any way of force-
                | installing updates, but that would also apply to this
                | update to add CSAM hashing.
        
               | bsenftner wrote:
                | If your wifi is enabled and the phone is attached to
                | power, Apple updates it without asking. I've been trying
                | not to update my iPhone, but recently I left the wifi on
                | while charging and it updated.
        
               | tshaddox wrote:
               | Yeah, it's not a practical way to use your iPhone. For
               | all practical purposes, Apple has unrestricted ability to
               | push updates to iPhones.
        
           | numbsafari wrote:
           | What makes us think that that isn't exactly what this is
           | already?
        
             | dkdk8283 wrote:
             | This is already happening to phones. The baseband blobs are
             | proprietary and most devices permit DMA.
             | 
             | Nobody really knows what the blobs do. They likely have
             | paved the way for Stingrays and other devices.
        
               | t0mas88 wrote:
                | On Android, yes, but Apple has an advantage here in that
                | they control the whole device.
               | 
               | If Apple really wanted to they could secure the iPhone.
               | Just offer a 2x higher bug bounty than any government and
               | things like stingray would not work.
        
           | [deleted]
        
           | biztos wrote:
           | It's a fair question, but I think the answer is no: the
           | questions are not the same.
           | 
           | As much as Apple wants access to the Chinese market, it would
           | (presumably) draw a line at some point where it would
           | (presumably) have to choose between that market and the US
           | market, if only because the latter is both its legal domicile
           | and the source of most of its talent.
           | 
           | Version A: CCP wants to exploit the hash database, there are
           | lots of ways to do that, bullying Apple is one, any other way
           | gives Apple a "we are looking into it" excuse. "We must
           | comply with local laws, but we will not change our software
           | bla bla."
           | 
           | Version B: CCP wants to exploit iOS, only way to do it is to
           | bully Apple, this forces Apple's hand and very possibly Apple
           | moves production (not just sales) out of China because they
           | no longer trust they will be offered "plausible deniability."
           | 
           | I'm sure there are lots of reasons for that absurd cash
           | reserve, but my best guess is it's to cover the eventuality
           | of B. above; Apple talking about that publicly would be
           | tricky.
           | 
           | https://www.cnbc.com/2020/07/30/apple-q3-cash-hoard-heres-
           | ho...
        
             | FabHK wrote:
             | > very possibly Apple moves production (not just sales) out
             | of China
             | 
             | As a matter of fact, I am not sure that would be possible:
             | It might well be that no other country has the capacity
             | (machines and labour) to churn out that many iPhones. Would
             | be interesting to hear if anyone has insight on that. (Tim
             | Cook presumably knows...)
        
               | biztos wrote:
               | This has been Apple's line for quite a while, but over
               | the last ten years I can't believe Mr Cook has not come
               | up with a Plan B, given that the volatility in US-China
               | relations is much more likely to affect iPhones than most
               | other Walmart goods.
               | 
               | I admit it's total speculation, but I think the massive
               | cash reserves are for that: to weather a disruption in
               | production facilities and move production to a more US-
               | friendly location.
        
           | jonplackett wrote:
           | If they do one of those, it will be obvious it happened,
           | because people can at least reverse engineer iOS and see that
           | it's different.
           | 
            | If they add some new hashes, I presume that would be harder
            | to spot and isn't going to be advertised.
        
             | TheSpiceIsLife wrote:
             | Software vulnerabilities can go undetected for years, why
             | would this necessarily be any different?
        
             | ilogik wrote:
             | the hashes are also shipped with every device and can be
             | inspected.
             | 
             | A few bytes changed in a binary blob that would enable a
             | backdoor would be almost impossible to detect.
        
         | ljm wrote:
          | > two countries
          | 
          |       1. USA
          |       2. UK, Canada, Australia, New Zealand
          | 
          | Pick one from the first list, and one from the second list.
        
         | programzeta wrote:
          | Apple is trying to mitigate this by holding itself to a more
          | internationally focused standard: matches against at least
          | two separate sources of CSAM hashes, if my understanding of
          | their announcements/follow-ups is right.
          | 
          | Separately, the US has even greater pressure available on
          | Apple should it want to unilaterally add database images,
          | considering it has an actual chance and the means to jail
          | (and run through the legal wringer) whoever tells it 'no'.
          | And that's just the overt pressure available; I think this is
          | the more likely potential for a trust violation here, even if
          | both could come to pass.
        
         | beepbooptheory wrote:
         | The sinophobia of this website is getting to be more than
         | tiring and really just becoming worrisome.
         | 
          | Would never have bet that such smart and thoughtful people in
          | general could be so stuck within certain political talking
         | points, to an almost fanatical degree. If people on HN are
         | generally like this, I have little hope for the rest of our
         | world and its future.
         | 
         | At the end of the day all the bad guys will still win, even if
         | we did give a lot of attention to a subset of them for a while
         | I guess.
        
         | dwighttk wrote:
         | Apple doesn't control the database
        
       | pizzaknife wrote:
        | The world of forensics is something I'm not familiar with; a
        | couple of questions:
        | 
        | - When a hash matches (correctly or incorrectly), how is said
        | image reported? Is the matching image passed on to a human to
        | verify?
        | 
        | - What is the survey size / who is subject to this CSAM net?
        
         | rootusrootus wrote:
          | After ~30 matches, only the matching images are passed on to
          | a human for visual verification. Only images uploaded to
          | iCloud are subject to matching.
        
           | Dah00n wrote:
            | If they pass CSAM verified by hash on to human verification
            | inside Apple, they break the law. Not even the FBI is
            | allowed to do that. Only NCMEC is an allowed recipient
            | under US federal law.
        
             | nonbirithm wrote:
             | > Not even the FBI are allowed to do that.
             | 
             | The FBI was given clearance to take over a website serving
             | CSAM in order to catch more users of the site. As such, the
             | FBI has technically distributed CSAM in the past.
             | 
             | https://www.dallasnews.com/news/crime/2017/01/17/the-fbi-
             | ran...
        
             | rootusrootus wrote:
             | Seems to be a misunderstanding between what the law appears
             | to say and what the actual practice is. Law enforcement's
             | interest is not served by trying to prosecute moderators or
             | companies acting in good faith because they have CSAM in
             | their possession.
        
               | nybble41 wrote:
               | There is a difference between moderators manually
               | identifying illegal content in a stream of mostly-legal
               | material and a process where content which has already
               | been matched against the database and classified as
               | almost-certainly-illegal is subjected to further internal
               | review.
        
               | rootusrootus wrote:
               | AFAIK moderators at other organizations are also only
               | reviewing content that has already been flagged somehow.
               | I don't think it makes a difference. It comes down to
               | good faith. If the company follows the recommendations of
               | NCMEC on handling the material (and NCMEC absolutely does
               | provide those recommendations), I doubt they're in any
               | danger at all.
               | 
               | Obviously you could not make the same argument yourself
               | unless you were also a reputable megacorp. There are
               | upsides to being king. In this case, NCMEC wants the
               | actual perps in jail so they're not going to take shots
               | at Apple or its employees on technicalities.
        
               | mrtranscendence wrote:
               | The chance of a match being CSAM is not almost certain,
               | though. Further, Apple only gets a low-resolution version
               | of the image. In any case, presumably such issues have
               | been addressed, as neither the FBI nor NCMEC have raised
               | a stink about it.
        
               | nybble41 wrote:
               | > The chance of a match being CSAM is not almost certain,
               | though.
               | 
               | Not according to Apple. They're publicly claiming a one-
               | in-a-trillion false positive rate from the automated
               | matching. Either that's blatant false advertising or
               | they're putting known (as in: more likely than not) CSAM
               | in front of their human reviewers. Can't have it both
               | ways.
               | 
               | > Further, Apple only gets a low-resolution version of
               | the image.
               | 
               | Which makes zero difference with regard to the content
               | being illegal. Do you think they would overlook _you_
               | possessing an equally low-resolution version of the same
               | photo?
               | 
               | > In any case, presumably such issues have been
               | addressed, as neither the FBI nor NCMEC have raised a
               | stink about it.
               | 
               | Selective enforcement; what else is new? It's still a
                | huge risk for Apple to take when the ethically superior
                | ( _and cheaper and simpler_ ) solution would be to
                | encrypt the files on the customer's device _without_
                | scanning them first, and to store the backups with
                | proper E2E encryption such that Apple has no access to
                | or knowledge of any of the content.
        
       | ctur wrote:
       | The more interesting point about hash collisions is probably less
       | about accidental clashes and more about _intentional_ clashes. If
       | the hashes in the CSAM database were known publicly, and people
       | began generating intentional clashes with innocuous images, and
       | those images were on many phones, it could basically DOS whatever
       | manual process Apple creates.
       | 
       | Basically it could become an arms race where, say, free speech
       | advocates convince people to just keep and share some images and
       | overwhelm the process. Then Apple adapts to new hashes,
       | blocklists known false positives, and the cycle repeats.
        
       | kebman wrote:
       | Oh boy... I'm just here from watching WEF videos on YouTube. I'm
       | gonna go hide in a cave now.
        
       | dan-robertson wrote:
       | It didn't seem to take long for the weights for Apple's network
       | to be discovered. And I suppose they must send the banned hashes
       | to the client for checking too. So I expect that list will be
       | discovered and published soon too (unless they have some way to
       | keep them secret?) I think one important question is: how
       | reversible is Apple's perceptual hash?
       | 
       | For example, my understanding of Microsoft's PhotoDNA is that
       | their perceptual hash has been reverse-engineered and that one
       | could go backwards from a hash to a blurry image. But also it is
       | very hard to get the list of PhotoDNA hashes for the NCMEC
       | database. In other words, are Apple unintentionally releasing
       | enough information to reconstruct a bunch of blurry CSAM?
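        | 
        | For intuition, here is a minimal sketch of a classical
        | perceptual hash (an aHash-style average hash, NOT Apple's
        | NeuralHash): each bit records whether one cell of a downscaled
        | grayscale thumbnail is brighter than the mean, which is exactly
        | why hashes of this family retain a coarse, blurry outline of
        | the original image.
        | 
        |     # aHash-style perceptual hash sketch (assumes Pillow is
        |     # installed; illustrative only, not Apple's NeuralHash)
        |     from PIL import Image
        | 
        |     def ahash(path, size=8):
        |         img = Image.open(path).convert("L").resize((size, size))
        |         pixels = list(img.getdata())
        |         mean = sum(pixels) / len(pixels)
        |         # one bit per cell: brighter than the mean or not
        |         return sum(1 << i for i, p in enumerate(pixels) if p > mean)
        | 
        |     def hamming(a, b):  # small distance = perceptually similar
        |         return bin(a ^ b).count("1")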
        
         | yeldarb wrote:
         | It doesn't seem like it; check out the adversarially
         | constructed images here. They don't look anything like the
         | original despite perfectly matching the NeuralHash:
         | https://github.com/AsuharietYgvar/AppleNeuralHash2ONNX/issue...
        
           | dan-robertson wrote:
            | Right, obviously the hashing isn't going to be injective and
           | therefore there are lots of silly images that hash to a given
           | value. The question is whether it is possible to efficiently
           | find plausible images with a given hash.
           | 
           | Think more like a "deep dream" than these adversarial
           | attacks.
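            | 
            | A hedged sketch of what such a search could look like,
            | where `model` is an assumed differentiable stand-in for
            | the hash network and `target` its pre-threshold output for
            | the hash in question (both, like the 360x360 input size,
            | hypothetical placeholders):
            | 
            |     # "Deep dream"-style preimage search by gradient
            |     # descent. A natural-image prior would be needed on
            |     # top of this to get plausible, non-noisy results.
            |     import torch
            | 
            |     x = torch.rand(1, 3, 360, 360, requires_grad=True)
            |     opt = torch.optim.Adam([x], lr=0.01)
            |     for _ in range(1000):
            |         opt.zero_grad()
            |         loss = torch.nn.functional.mse_loss(model(x), target)
            |         loss.backward()
            |         opt.step()
            |         x.data.clamp_(0, 1)  # keep pixels in a valid range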
        
         | fortenforge wrote:
         | The hashes sent to the client are cryptographically blinded,
         | making it impossible for the client to determine the original
         | CSAM hashes.
        
           | dan-robertson wrote:
           | This is the information I was missing, thanks.
        
         | kccqzy wrote:
         | > And I suppose they must send the banned hashes to the client
         | for checking too.
         | 
         | They absolutely do not. They use Private Set Intersection to
         | achieve that.
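          | 
          | For intuition, a toy Diffie-Hellman-style PSI sketch (not
          | Apple's exact protocol, and with toy parameters; in Apple's
          | design it is the server that learns matches, but the
          | commutative-blinding idea is the same):
          | 
          |     # Both sides exponentiate hashed items with secret keys;
          |     # H(x)^(a*b) matches iff the items match, yet neither
          |     # side ever sees the other's raw hashes. Toy modulus,
          |     # not production-safe.
          |     import hashlib, secrets
          | 
          |     P = 2**127 - 1
          | 
          |     def h2g(item):  # hash an item into the group
          |         h = hashlib.sha256(item).digest()
          |         return int.from_bytes(h, "big") % P
          | 
          |     a = secrets.randbelow(P - 3) + 2  # server's secret
          |     b = secrets.randbelow(P - 3) + 2  # client's secret
          | 
          |     db = [b"hash1", b"hash2"]  # stand-in for the hash list
          |     blinded_db = {pow(h2g(x), a, P) for x in db}
          | 
          |     item = b"holiday.jpg"  # stand-in for one photo's hash
          |     step1 = pow(h2g(item), b, P)  # client blinds, sends over
          |     step2 = pow(step1, a, P)  # server adds its exponent
          | 
          |     # membership test without exposing raw database hashes
          |     match = step2 in {pow(y, b, P) for y in blinded_db}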
        
       | solidgol wrote:
       | I think the headline is misleading.
       | 
       | The point of the article is that the algorithm that Apple is
       | using works as well as Apple says it does, and the headline
       | implies that it doesn't.
        
       | smithza wrote:
        | > This is a false-positive rate of 2 in 2 trillion image pairs
        | (1,431,168^2). Assuming the NCMEC database has more than 20,000
        | images, this represents a slightly higher rate than Apple had
        | previously reported. But, assuming there are less than a million
        | images in the dataset, it's probably in the right ballpark.
        | 
        | If the author was comparing 2 trillion pictures of people, or
        | children specifically, I think this false-positive rate would be
        | different and arguably much higher. The reasons are obvious:
        | humans are similar in dimensions to each other and are much more
        | likely to match in the same way the hatchet and nematode matched.
        | 
        | I do not presume such a sample set of photos is easy to come by,
        | but I wish the author had included details on the sample set.
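        | 
        | A quick back-of-the-envelope on that scaling, using the
        | article's figures and assumed database sizes:
        | 
        |     # implied per-pair collision rate from the article's numbers
        |     n = 1_431_168        # ImageNet images tested
        |     rate = 2 / n**2      # 2 collisions, ~9.8e-13 per pair
        |     for db in (20_000, 1_000_000):  # assumed NCMEC sizes
        |         # expected naturally occurring hits: ~0.03 and ~1.4
        |         print(db, rate * n * db)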
        
         | yeldarb wrote:
         | The sample set is ImageNet, which is a well-known dataset in
         | Computer Vision and is available for download here:
         | https://www.kaggle.com/c/imagenet-object-localization-challe...
         | 
         | I'd love to see this work extended; if you find additional
         | collisions in the wild please submit a PR to the repo (please
         | do not submit artificially generated adversarial images):
         | https://github.com/roboflow-ai/neuralhash-collisions
         | 
         | For what it's worth, Apple claimed to find a _lower_ incidence
         | of false-positives when it used pornographic images in its
          | test[1] (which makes sense; images containing humans are
          | probably more aligned with what the model was trained on than
          | nematodes are)
         | 
         | [1] https://tidbits.com/2021/08/13/new-csam-detection-details-
         | em...
         | 
         | > In Apple's tests against 100 million non-CSAM images, it
         | encountered 3 false positives when compared against NCMEC's
         | database. In a separate test of 500,000 adult pornography
         | images matched against NCMEC's database, it found no false
         | positives.
        
           | tgsovlerkhgsel wrote:
           | 0 out of 0.5 million vs. 3 out of 100 million does not imply
           | with any reasonable confidence that the incidence in porn is
           | lower.
           | 
           | You'd expect the same result even if the incidence in porn
           | was 10x higher than in typical images (30 in 100 million =
           | 0.15 in 0.5 million).
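            | 
            | A Poisson sanity check of that point, using Apple's
            | reported figures:
            | 
            |     # P(zero collisions in 500k images) under assumed rates
            |     from math import exp
            | 
            |     base = 3 / 100e6  # Apple's 3-in-100M rate
            |     for rate in (base, 10 * base):
            |         lam = 500_000 * rate  # expected hits: 0.015, 0.15
            |         print(exp(-lam))      # P(zero): ~0.985 and ~0.861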
        
           | contravariant wrote:
            | Well, to know for sure if that's lower we'd need to know
            | the size of the NCMEC database. The rates are the same if
            | the NCMEC database contains around 10,000 images.
            | 
            | Though knowing that roughly 1 in 30 million images generates
            | a false positive is the most important figure, I suppose.
            | Assuming 100 million iPhones, each with 1,000 pictures, that
            | would generate some 3,000 phones with one or more false
            | positives [1] and a roughly 5% chance that some phone has
            | at least 2 false positives [2].
           | 
           | [1]: https://www.wolframalpha.com/input/?i=%28100+million%29+
           | e%5E... [2]: https://www.wolframalpha.com/input/?i=+%281+-+e%
           | 5E-l+%28e%5E...
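            | 
            | The same estimate in a few lines of Python, under the
            | assumptions above (1-in-30M rate, 100 million phones,
            | 1,000 photos each):
            | 
            |     from math import exp
            | 
            |     lam = 1000 / 30e6               # per-phone expectation
            |     phones = 100e6
            |     ge1 = 1 - exp(-lam)             # Poisson P(>= 1 match)
            |     ge2 = ge1 - lam * exp(-lam)     # Poisson P(>= 2 matches)
            |     print(phones * ge1)             # ~3,333 flagged phones
            |     print(1 - (1 - ge2) ** phones)  # ~5%: some phone hits 2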
        
           | smithza wrote:
            | > For what it's worth, Apple claimed to find a _lower_
            | incidence of false-positives when it used pornographic
            | images in its test[1] (which makes sense; images containing
            | humans are probably more aligned with what the model was
            | trained on than nematodes)
            | 
            | This is an important note. Is it the case that this
            | algorithm is trained on humans or not? The 1-in-a-trillion
            | false-positive rate might imply it is trained on a broader
            | set.
           | 
           | Thank you for those helpful tidbits.
        
           | fallingknife wrote:
            | 500K is not a large enough dataset to determine that. The
            | collision rate could plausibly be 1 in 500K (or even a bit
            | higher) and still produce no collisions in the sample.
        
       | rm999 wrote:
       | tldr: this is expected. The article addresses everything I'm
       | about to say, but I think the lede is buried:
       | 
       | >Apple's NeuralHash perceptual hash function performs its job
       | better than I expected
       | 
       | When you are just looking for any two collisions in 100M x 100M
        | comparisons, of course you'll find a small number of positives;
        | Apple said as much. The number of expected collisions will scale
       | linearly with the number of 'bad' images, which is not 100M.
       | Assuming it's 100k, we'd expect 1000x fewer collisions in
       | ImageNet, or ~0.002 collisions, which is effectively 0. It's the
       | artificial images that will potentially sink all this, not a very
       | low rate of naturally occurring hash collisions.
        
       | saithound wrote:
       | Keep in mind that Apple's claimed false positive rate (one in a
       | trillion chance of an account being flagged innocently), and the
       | collision rate determined by Dwyer in the article, are both
        | derived without any adversarial assumptions. Given that
        | NeuralHash colliders and similar tools already exist, the false-
        | positive rate is expected to be much, much higher.
       | 
       | Imagine that you play a game of craps against an online casino.
       | The casino throws a virtual six-sided die, secretly generated
       | using Microsoft Excel's random number generator. Your job is to
       | predict the result. If you manage to predict the result 100 times
       | in a row, you win and the casino will pay you $1000000000000 (one
       | trillion dollars). If you ever fail to predict the result of a
       | throw, the game is over, you lose and you pay the casino $1 (one
       | dollar).
       | 
       | In an ordinary, non-adversarial context, the probability that you
       | win the game is much less than one in one trillion, so this game
       | is very safe for the casino. But this number is very misleading:
       | it's based on naive assumptions that are completely meaningless
       | in an adversarial context. If your adversary has a decent
       | knowledge of mathematics at the high school level, the serial
       | correlation in Excel's generator comes into play, and the
       | relevant probability is no longer one in one trillion. The
       | relevant number is 1/216 instead! When faced with a class of
       | adversarial math majors, a casino that offers this game will
       | promptly go bankrupt. With Apple's CSAM detection, you get to be
       | that casino.
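        | 
        | To make the gap concrete (assuming, as the 1/216 figure
        | implies, that the generator's state can be recovered after
        | three correctly guessed throws, after which every later throw
        | is predictable):
        | 
        |     # naive vs. adversarial odds in the dice game above
        |     naive = (1 / 6) ** 100      # ~1.5e-78, "safer" than 1e-12
        |     adversarial = (1 / 6) ** 3  # 1/216: guess three throws,
        |                                 # then predict all the rest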
        
         | mrtranscendence wrote:
         | Why would anyone bother with such an attack? The end result is
         | that some peon at Apple has to look at the images and mark them
         | as not CSAM. You've cost someone a bit of privacy, but that's
         | it.
        
           | [deleted]
        
           | cheald wrote:
           | Why would anyone bother calling the cops and telling them
           | that someone they don't like is an imminent threat? The end
           | result is that some officer just has to stop by and see that
           | they aren't actually building bombs. You've cost someone a
           | bit of time, but that's it.
        
           | csmpltn wrote:
           | > "The end result is that some peon at Apple has to look at
           | the images and mark them as not CSAM. You've cost someone a
           | bit of privacy, but that's it."
           | 
            | This can be abused to spam Apple's manual review process,
            | grinding it to a halt. You've cost Apple time and money by
            | making them review each such fake report.
        
             | incrudible wrote:
             | > You've cost Apple time and money by making them review
             | each such fake report.
             | 
             | Ok, but... how do I profit? If I wanted to waste Apple
             | employee time, I could surely find a way to do it, but why
             | would I? The functioning of society relies on the fact that
              | people generally have better things to do than waste each
              | other's time.
        
           | [deleted]
        
           | nomel wrote:
           | Why is this question being downvoted? I too would like to
           | know what this attack achieves.
           | 
            | From what I see, the end result of false flagging is either
            | someone has CSAM in iCloud and you push them over the
            | threshold that results in reporting and prosecution, or
            | there is no CSAM, so the reviewer sees all of the hash-
            | collision images, including those that are natural.
            | 
            | Is the problem that an attacker can force natural hash-
            | collision images to be viewed by a reviewer, violating that
            | person's privacy? Do we know if this process is different
            | from how Google, Facebook, Snapchat, Dropbox, Microsoft,
            | and others have implemented these necessarily fuzzy matches
            | for their CSAM scans of cloud-hosted content?
            | 
            | Or am I missing something that the downvoters saw?
        
             | stefan_ wrote:
              | You are one wrong click, by an underpaid random guy in
              | India who looks at CSAM all day, away from a raid on your
              | home and the end of your life as you know it.
        
               | Notanothertoo wrote:
                | This. These charges are damning once they are made.
                | Plus there are the countless legal dollars you'll have
                | to front and the hours spent proving your innocence,
                | and that's assuming the justice system actually works.
                | Try explaining this to your employer while you start
                | missing deadlines due to court dates. The police could
                | also easily leverage this to warrant-hop, as they have
                | been found doing in the past. The bike rider who had
                | nothing to do with a crime and got accused because he
                | was the only one inside a broad geofence warrant is all
                | the precedent you need that this will be abused.
        
             | foota wrote:
             | The idea I've heard is that images could be generated that
             | are sexual in nature but that have been altered to match a
             | CSAM hash, making a tricky situation.
        
           | vineyardmike wrote:
           | > The end result is that some peon at Apple has to look at
           | the images and mark them as not CSAM.
           | 
            | As others said, if the non-CSAM looks sexual at all, it'll
            | probably get flagged for post-Apple review.
            | 
            | Beyond that, it doesn't seem to be in Apple's interest to
            | be conservative in flagging. An employee reviewer's best
            | interest is to minimize false negatives, not false
            | positives.
            | 
            | As many mentioned, even an investigation can have horrible
            | effects on some (innocent) person's life. I would not be
            | shocked to learn of some crafty individual working at a
            | "meme factory" creating intentional collisions in widely
            | distributed images just for "fun" - and politically
            | motivated attacks seem plausible (e.g. making liberal
            | political memes flag as CSAM).
            | 
            | Then there are targeted motives for an attack. Have a
            | journalist you want to attack or need a pretext for a
            | warrant against? Find them on a dating app and send them
            | nudes with CSAM collisions. Or any number of other targeted
            | attacks against them.
        
           | 0x0 wrote:
           | Seeing how "well" app review works, I would not be surprised
           | if the "peon" sometimes clicks the wrong button while
           | reviewing, bringing down a world of hurt on some innocent
            | Apple user, all triggered by the embedded snitchware running
           | on their local device.
        
           | e40 wrote:
           | _> The end result is that some peon at Apple has to look at
           | the images and mark them as not CSAM_
           | 
           | Btw, this reminded me of a podcast about FB's group to do
           | just this. Because it negatively impacted the mental health
            | of those FB employees, they farmed it out to contractors in
            | other countries. There were interviews with women in the
           | Philippines, and it was having the same impact there.
        
             | fay59 wrote:
              | It's quite possible that you implied this, but I think
              | that it's the true positives that took the mental-health
              | toll.
        
               | e40 wrote:
               | That's correct. It wasn't just CSAM. The described images
               | were sickening.
        
           | saithound wrote:
           | Okay, let's play peon. Here are three perfectly legal and
           | work-safe thumbnails of a famous singer:
           | https://imgur.com/a/j40fMex. The singer is underage in
           | precisely one of the three photos. Can you decide which one?
           | 
           | If your account has a large number of safety vouchers that
           | trigger a CSAM match, then Apple will gather enough fragments
           | to reassemble a secret key X (unique to your device) which
           | they can use to decrypt the "visual derivatives" (very low
           | resolution thumbnails) stored in all your matched safety
           | vouchers.
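            | 
            | The threshold mechanism is essentially Shamir-style secret
            | sharing; a minimal sketch with toy parameters (not Apple's
            | actual code, and the threshold/field values are assumed):
            | 
            |     # t-of-n secret sharing over GF(p): any t shares
            |     # rebuild the key, fewer reveal nothing about it
            |     import random
            | 
            |     P = 2**127 - 1  # toy prime field modulus
            | 
            |     def make_shares(secret, t, n):
            |         cs = [secret] + [random.randrange(P) for _ in range(t - 1)]
            |         def f(x):  # evaluate the degree t-1 polynomial
            |             return sum(c * pow(x, i, P) for i, c in enumerate(cs)) % P
            |         return [(x, f(x)) for x in range(1, n + 1)]
            | 
            |     def reconstruct(shares):  # Lagrange interpolation at x=0
            |         total = 0
            |         for xi, yi in shares:
            |             num = den = 1
            |             for xj, _ in shares:
            |                 if xj != xi:
            |                     num = num * -xj % P
            |                     den = den * (xi - xj) % P
            |             total = (total + yi * num * pow(den, P - 2, P)) % P
            |         return total
            | 
            |     shares = make_shares(secret=12345, t=30, n=1000)
            |     assert reconstruct(random.sample(shares, 30)) == 12345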
           | 
           | An Apple employee looks at the thumbnails derived from your
           | photos. The only judgment call this employee gets to make is
           | whether it can be ruled out (based on the way the thumbnail
           | looks) that your uploaded photo is CSAM-related. As long as
           | the thumbnail contains a person, or something that looks like
           | the depiction of a person (especially in a vaguely violent or
           | vaguely sexual context, e.g. with nude skin or skin with
           | injuries) they will not be able to rule out this possibility
           | based on the thumbnail alone. And they will not have access
           | to anything else.
           | 
           | Given the ability to produce hash collisions, an adversary
           | can easily generate photos that fail this visual inspection
           | as well. This can be accomplished straightforwardly by using
           | perfectly legal violent or sexual material to produce the
           | collision (e.g. most people would not suspect foul play if
           | they got a photo of genitals from their Tinder date). But
           | much more sophisticated attacks [2] are also possible: since
           | the computation of the visual derivative happens on the
           | client, an adversary will be able to reverse engineer the
           | precise algorithm.
           | 
           | While 30 matching hashes are probably not sufficient to
           | convict somebody, they're more than sufficient to make
           | somebody a suspect. Reasonable suspicion is enough to get a
           | warrant, which means search and seizure, computer equipment
           | hauled away and subjected to forensic analysis, etc. If a
           | victim works with children, they'll be fired for sure. And if
           | they do charge somebody, it will be in Apple's very best
           | interest not to assist the victim in any way: that would
           | require admitting to faults in a high profile algorithm whose
           | mere existence was responsible for significant negative
           | publicity. In an absurdly unlucky case, the jury may even
           | interpret "1 in 1 trillion chance of false positive" as "way
           | beyond reasonable doubt".
           | 
           | Chances are the FBI won't have the time to go after every
           | report. But an attack may have consequences even if it never
           | gets to the "warrant/charge/conviction" stage. E.g. if a
           | victim ever gets a job where they need to obtain a security
           | clearance, the Background Investigation Process will reveal
           | their "digital footprint", almost certainly including the
           | fact that the FBI got a CyberTipline Report about them. That
           | will prevent them from being granted interim determination,
           | and will probably lead to them being denied a security
           | clearance.
           | 
           | (See also my FAQ from the last thread [1], and an explanation
           | of the algorithm [3])
           | 
           | [1] https://news.ycombinator.com/item?id=28232625
           | 
           | [2] https://graphicdesign.stackexchange.com/questions/106260/
           | ima...
           | 
           | [3] https://news.ycombinator.com/item?id=28231218
        
             | robertoandred wrote:
             | Apple can only ever see the visual derivatives in vouchers
             | of images that match CSAM hashes, not vouchers of all your
             | images.
        
               | saithound wrote:
               | Yep. I'm aware of this, and it doesn't affect the point I
               | was making, but it's worth pointing out. I made an edit
               | to the text to make this explicit.
        
             | ChrisKnott wrote:
             | You're adding quite a lot of technobabble gloss to an
             | "attack vector" that boils down to "people can send you
             | images that are visually indistinguishable from known
             | CSAM".
             | 
             | Guess what, they can already do this but worse by just
             | sending you actual illegal images of 17.9 year olds.
             | 
             | While it would be bad to be subjected to such an attack,
             | and there is a small chance it would lead to some kind of
             | interaction with law enforcement, the outcomes you present
             | are just scaremongering and not reasonable.
        
               | saithound wrote:
               | I suggest you reread the comment, because "people can
               | send you images that are visually indistinguishable from
               | known CSAM" is not what is being said at all. Where did
               | you even get that from?
               | 
               | The point is precisely that people can become victims of
               | various new attacks, without ever touching photos that
               | are actual "known CSAM". For Christ's sake, half the
               | comments here are about how adversaries can create and
               | spread political memes that trigger automated CSAM
               | filters on people's phones just to "pwn the libz".
               | 
               | > Guess what, they can already do this but worse by just
               | sending you actual illegal images of 17.9 year olds.
               | 
               | No, this misses the point completely. You cannot easily
               | trigger any automated systems merely by taking photos of
               | 17.9 year olds and sending them to people. E.g. your own
               | photos are not in the NCMEC databases, and you'd have to
               | reveal your own illegal activities to get them in there.
               | You (or malicious political organizations) especially
               | cannot attack and expose "wrongthinking" groups of people
               | by sending them photos of 17.9 year olds.
        
               | ChrisKnott wrote:
               | Can you explain how these theoretical political memes
               | hash-match to an image in the NCMEC database, and then
               | also pass the visual check?
               | 
               | > _" No, this misses the point completely. You cannot
               | easily trigger any automated systems merely by taking
               | photos of 17.9 year olds and sending them to people."_
               | 
               | Did I say "taking"? I am talking about sending
               | (theoretical) actual images from the NCMEC database. This
               | is functionally identical to the "attack" you describe.
        
               | saithound wrote:
               | Yes, I can. This is just one possible strategy: there are
               | many others, where different things are done, and where
               | things are done in a different order.
               | 
               | You use the collider [1] and one of the many scaling
               | attacks ([2] [3] [4], just the ones linked in this
               | thread) to create an image that matches the hash of a
               | reasonably fresh CSAM image currently circulating on the
               | Internet, and resizes to some legal sexual or violent
               | image. Note that knowing such a hash and having such an
               | image are both perfectly legal. Moreover, since the
               | resizing (the creation of the visual derivative) is done
               | on the client, you can tailor your scaling attack to the
               | specific resampling algorithm.
               | 
               | Eventually, someone will make a CyberTipline report about
               | the actual CSAM image whose hash you used, and the image
               | (being a genuine CSAM image) will make its way into the
               | NCMEC hash database. You will even be able to tell
               | precisely when this happens, since you have the client-
                | side half of the PSI database, and you can execute the
               | NeuralHash algorithm.
               | 
               | You can start circulating the meme before or after this
               | step. Repeat until you have circulated enough photos to
               | make sure that many people in the targeted group have
               | exceeded the threshold.
               | 
               | Note that the memes will trigger automated CSAM matches,
               | and pass the Apple employee's visual inspection: due to
               | the safety voucher system, Apple will not inspect the
               | full-size images at all, and they will have no way of
               | telling that the NeuralHash is a false positive.
               | 
               | [1] https://github.com/anishathalye/neural-hash-collider
               | 
               | [2] https://embracethered.com/blog/posts/2020/husky-ai-
               | image-res...
               | 
               | [3] https://bdtechtalks.com/2020/08/03/machine-learning-
               | adversar...
               | 
               | [4] https://graphicdesign.stackexchange.com/questions/106
               | 260/ima...
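                | 
                | To illustrate the scaling half of that: with nearest-
                | neighbor downscaling only a sparse grid of source
                | pixels survives, so an attacker can overwrite just
                | those pixels while the full-size image looks untouched.
                | A hedged sketch (assumes Pillow; both filenames are
                | hypothetical, and the surviving-pixel offset of 8
                | assumes a center-sampling NEAREST resampler):
                | 
                |     # plant a 64x64 payload on the pixel grid that
                |     # survives a 1024 -> 64 NEAREST downscale
                |     from PIL import Image
                | 
                |     big = Image.open("innocuous_1024.png").convert("RGB")
                |     hidden = Image.open("hidden_64.png").convert("RGB")
                |     px, hpx = big.load(), hidden.load()
                |     for y in range(64):
                |         for x in range(64):
                |             px[16 * x + 8, 16 * y + 8] = hpx[x, y]
                |     thumb = big.resize((64, 64), Image.NEAREST)
                |     # `thumb` now shows the hidden payload while `big`
                |     # still looks like the innocuous full-size photo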
        
               | incrudible wrote:
               | This leaves open the question of how the image gets on
               | the device of the victim. You would have to craft a very
               | specific image that the victim is likely to save, and the
               | existence of such a specially crafted file would
               | completely exonerate them.
        
               | mLuby wrote:
               | I'd think the attack would be done in reverse:
               | 
               | 1. Get a photo that the target _already_ has.
               | 
               | 2. Generate an objectionable image with the same hash as
               | the target's photo. (This is obviously illegal.)
               | 
               | 3. Submit the objectionable image to the government
               | database.
               | 
               | Now the target's photo will be flagged until manually
               | reviewed.
               | 
               | This doesn't sound impossible as a targeted attack, and
               | if done on a handful of images that millions of people
               | might have saved (popular memes?) it might even grind the
               | manual reviews to a halt. But maybe I'm not understanding
               | something in this (very bad idea) system.
        
               | ChrisKnott wrote:
                | Ok yeah, I do agree this scaling attack potentially
                | makes this feasible, if it essentially allows you to
                | present to the reviewer a completely different image
                | than the one the user sees. Has anyone done this yet?
                | I.e. an image that
               | NeuralHashes to a target hash, and also scale-attacks to
               | a target image, but looks completely different.
               | 
               | (Perhaps I misunderstood your original post, but this
               | seems to be a completely different scenario to the one
               | you originally described with reference to the three
               | thumbnails)
        
               | saithound wrote:
               | Okay, perhaps the three thumbnails was unclear. I didn't
               | mean to illustrate any specific attack with it, just to
               | convey the feeling of why it's difficult to tell apart
               | legal and potentially illegal content based on thumbnails
               | (i.e. why a reviewer would have to click "possible CSAM"
               | even if the thumbnail looks like "vanilla" sexual or
               | violent content that probably depicts adults). I'd splice
               | in a sentence to clarify this, but I can't edit that
               | particular comment anymore.
        
               | gzer0 wrote:
               | Your explanations are brilliant. Thank you
        
             | mrtranscendence wrote:
             | Fair enough. I suppose it's true that you could create a
             | colliding sexually explicit image where age is
             | indeterminate, and the reviewer may not realize it isn't a
             | match.
             | 
             | > Given the ability to produce hash collisions, an
             | adversary can easily generate photos that fail this visual
             | inspection as well.
             | 
             | Apple could easily fix this by also showing a low-res
             | version of the CSAM image that was collided with, but I'll
             | grant that they may not be able to do that legally (and
             | reviewers probably don't want to look at actual CSAM).
        
               | vlovich123 wrote:
               | The problem is that it is a scaled low-res version. There
               | are well publicized attacks[1] showing you can completely
               | change the contents of the image post scaling. There's
               | also the added problem that if the scaled down image is
               | small, even without the attack, it's impossible to make a
               | reasonable human judgement call (as OP points out).
               | 
               | The problem isn't CSAM scanning in principle. The problem
               | is that the shift to the client & the various privacy-
               | preserving steps Apple is attempting to make is actually
               | making the actions taken in response to a match different
               | in a concerning way. One big problem isn't the cases
               | where the authorities should investigate*, but that a
               | malicious actor can act surreptitiously and leave behind
                | almost no footprint of the attack. Given SWATting is a
                | real thing, imagine how it plays out when the accusation
                | is child pornography. From the authorities' perspective
                | SWATting is low-incidence & not that big a deal. Very
                | different perspective on the victim side, though.
               | 
               | [1] https://embracethered.com/blog/posts/2020/husky-ai-
               | image-res...
               | 
               | * One could argue about the civil liberties aspect & the
               | fact that having CSAM images is not the same as actually
               | abusing children. However, among the general population
               | that line of reasoning just gets you dismissed as
               | supporting child abuse & is only starting to become
               | acknowledged in the psychiatry community.
        
               | throwaway672000 wrote:
                | Won't enough images be real matches that they'll be
                | looking at actual CSAM (in low res) for most of their
                | work day?
        
           | giantrobot wrote:
           | It's entirely possible to alter an image such that its raw
           | form looks different from its scaled form [0]. A government
            | or just a well-resourced group can take a legitimate CSAM
            | image and modify it such that, when scaled for use in the
            | perceptual algorithm(s), it changes into some politically
            | sensitive image. Upon review it'll look like CSAM, so off
            | it goes to the reporting agencies.
           | 
            | Because the perceptual hash algorithms are presented as
            | black boxes, the image they _perceive_ isn't audited or
            | reviewed.
           | There's zero recognition of this weakness by Apple or NCMEC
           | (and their equivalents). For the system to even begin to be
            | trustworthy, all content would need to be reviewed both raw
            | and scaled-as-fed-into-the-algorithm.
           | 
           | [0] https://bdtechtalks.com/2020/08/03/machine-learning-
           | adversar...
        
       ___________________________________________________________________
       (page generated 2021-08-19 23:00 UTC)