[HN Gopher] Proposed illegal image detectors on devices are 'easily fooled'
       ___________________________________________________________________
        
       Proposed illegal image detectors on devices are 'easily fooled'
        
       Author : agomez314
       Score  : 78 points
       Date   : 2021-11-10 18:11 UTC (4 hours ago)
        
 (HTM) web link (www.imperial.ac.uk)
 (TXT) w3m dump (www.imperial.ac.uk)
        
       | everyone wrote:
       | "Proposed illegal image detectors"
       | 
       | Meaning the image detectors would be illegal in most countries if
       | actually implemented?
        
         | Ensorceled wrote:
          | In English that can be parsed either way. If you are sincere in
          | your misunderstanding of this grammar, I'm going to assume you
          | are a non-English speaker or might have a slight developmental
          | disorder related to communication.
        
           | jhgb wrote:
           | This "misunderstanding" may very well be accidentally
           | correct...at least in the EU.
        
       | commandlinefan wrote:
       | And they can easily generate false positives! The worst of both
       | worlds.
        
       | hellojesus wrote:
        | And exactly zero people are surprised that minuscule, random
       | tweaks to images are imperceptible to humans but obviously trash
       | the hash.
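        | 
        | For the curious, a minimal average-hash (aHash) sketch in
        | Python, assuming Pillow is installed (illustrative only, not
        | any vendor's actual algorithm). Nudging a few pixels that sit
        | near the mean brightness is enough to flip hash bits:
        | 
        |     from PIL import Image
        | 
        |     def average_hash(path, size=8):
        |         # Shrink to 8x8 grayscale, then set one bit per
        |         # pixel: is it brighter than the mean?
        |         img = Image.open(path).convert("L")
        |         img = img.resize((size, size))
        |         px = list(img.getdata())
        |         mean = sum(px) / len(px)
        |         return sum(1 << i for i, p in enumerate(px) if p > mean)
        | 
        |     def hamming(a, b):
        |         # Bits that differ between two 64-bit hashes.
        |         return bin(a ^ b).count("1")
        | 
        | If hamming(original, tweaked) clears the matcher's distance
        | threshold, the "same" picture no longer matches.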
        
         | LocalH wrote:
         | But machine learning! Neural nets! Artificial intelligence!
         | 
          | All words whose meaning gets diluted in public perception,
          | turning these systems into a "black box", where we don't
          | really know how the models actually do what they do.
        
           | amelius wrote:
            | Most machine learning models get it right in only 80% to 90%
            | of cases. So if you have millions of users, each with
            | thousands of pictures, you can see how often you run into
            | problems. Even at 99% accuracy, the number of false
            | positives is enormous.
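            | 
            | Back-of-the-envelope, with assumed (made-up) numbers:
            | 
            |     users = 1_000_000           # assumed user count
            |     photos_per_user = 2_000     # assumed library size
            |     false_positive_rate = 0.01  # i.e. "99% accuracy"
            | 
            |     # Expected wrongly flagged photos across all users:
            |     flags = users * photos_per_user * false_positive_rate
            |     print(f"{flags:,.0f}")  # 20,000,000
            | 
            | Twenty million false flags, and every one of them needs
            | handling.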
        
         | uoaei wrote:
          | Lots of people would be surprised, because they aren't
          | educated about how these things work.
         | 
         | I'd wager that project managers working on these teams also
         | don't have the understanding (let alone intuition) to judge
         | whether these risks are present, and so will continue fighting
         | for it. The wager is based on the presumption that someone who
         | did understand these risks would pivot or otherwise not allow
         | this initiative to continue.
         | 
         | PMs fight so hard because from their position their job is at
         | risk if the project takes a nosedive. But they don't know what
         | they're advocating for a lot of the time.
         | 
         | I know quite a few folks like this in FAANG and adjacent
         | spaces.
        
           | smcl wrote:
            | Hell, even the author of the linked article doesn't seem to
            | understand what hashing is. Underneath a collection of
            | original and modified images, they've put:
           | 
           | "These images have been hashed, so that they look different
           | to detection algorithms but nearly identical to us"
        
       | joe_the_user wrote:
       | This seems very obvious. Neural networks are the "best" image
       | detectors we have. It's documented that they can be easily
       | fooled.
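        | 
        | The standard trick is gradient-based. A rough FGSM-style
        | sketch in Python/PyTorch (illustrative; assumes you have a
        | differentiable model and a loss that pushes its output away
        | from the target hash):
        | 
        |     import torch
        | 
        |     def perturb(image, model, loss_fn, eps=0.01):
        |         # One FGSM step: nudge every pixel slightly in the
        |         # direction that most changes the model's output.
        |         image = image.clone().requires_grad_(True)
        |         loss_fn(model(image)).backward()
        |         step = eps * image.grad.sign()
        |         return (image + step).clamp(0, 1).detach()
        | 
        | An eps of ~1/100 of the pixel range is typically invisible to
        | a human but can move the output hash substantially.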
       | 
        | The insidious thing is that this can be used as a pretext to
        | make the filters flag more images, since that would seem to
        | make it "harder to hide illegal images", until Apple just
        | personally scans everything.
        
       | IshKebab wrote:
       | > in its current form, so-called perceptual hashing based client-
       | side scanning (PH-CSS) algorithms will not be a 'magic bullet'
       | for detecting illegal content like CSAM on personal devices
       | 
       | Whoever said it would be a magic bullet?
       | 
       | > The researchers say this highlights just how easily people with
       | illegal material could fool the surveillance.
       | 
        | Ha, yes, all you need to do is go to a university computing
       | department and ask them to research algorithms to fool the
       | scanner, and then turn it into an easy-to-use app.
       | 
       | Then you chance it with some real CP images. It might work! Or
       | not.
       | 
        | Interesting, but I don't think anyone at Apple will be shocked
       | this.
        
         | aidenn0 wrote:
          | Plus, my opinion has been for some time that the point of
          | Apple's image-scanning (for Apple) isn't to detect harmful
          | material in iCloud; it's to project an image of not wanting
          | harmful material stored in iCloud.
         | 
         | Apple has a vested interest in preventing iCloud from becoming
         | "That image storage place for pedophiles."
         | 
          | Also, I'm sure the FBI is pretty okay with "we only catch
          | the people who, even once, forget to filter their images
          | through this program that fools the filters", since that's
          | probably like 99% of all people.
        
         | marcellus23 wrote:
          | And even if it does work and flags enough images to get
          | over the threshold, a human will then review them and
          | immediately notice they're not CP.
        
           | na85 wrote:
           | Seems like it would be pretty easy to ruin someone's life by
           | texting them images of your cat, except they get flagged by
           | this algorithm.
        
             | marcellus23 wrote:
             | No, I think you don't know how the proposed CSAM scanning
             | works:
             | 
              | 1. It's only done on cloud photos. That person would have
              | to manually save each photo to their Photos library (with
              | iCloud Photos turned on) for the scanning to occur.
             | 
             | 2. After reaching a threshold # of photos automatically
             | flagged, a human reviewer has to confirm that the photos
             | are illegal before any action is taken.
             | 
             | So just texting someone even real CP won't do anything.
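              | 
              | In (very) simplified terms, the flagging logic is shaped
              | roughly like this. A hypothetical Python sketch, not
              | Apple's real API; the actual system hides matching
              | behind private set intersection and threshold secret
              | sharing:
              | 
              |     THRESHOLD = 30  # Apple's stated initial threshold
              | 
              |     class Account:
              |         def __init__(self):
              |             self.vouchers = 0
              | 
              |     def on_upload(photo_hash, known_hashes, account):
              |         # Each match adds a "safety voucher"; below the
              |         # threshold nothing is decryptable or reviewable.
              |         if photo_hash in known_hashes:
              |             account.vouchers += 1
              |         # Only past the threshold do flagged photos go
              |         # to a human reviewer.
              |         return account.vouchers >= THRESHOLD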
        
               | na85 wrote:
               | I was under the impression that iMessage does its own
               | scanning as well. I could be wrong; I don't use an
                | iPhone.
        
       | kristjansson wrote:
        | Notably, none of the algorithms tested in the cited study are
        | Apple's NeuralHash, or comparable algorithms. They look at
        | aHash, pHash (plus a variant thereof), dHash, and PDQ (used at
        | Facebook for similar applications, apparently). The first four
        | date from between 2004 and 2010; the last is more recent, but
        | conceptually similar - the citation [0] for PDQ puts it in the
        | same bucket of 'shallower, stricter, cheaper, faster'
        | algorithms as the first four.
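        | 
        | For a sense of how shallow these are, dHash fits in a few
        | lines of Python (assuming Pillow; adapted from the commonly
        | described recipe, not any paper's exact code):
        | 
        |     from PIL import Image
        | 
        |     def dhash(path, size=8):
        |         # Resize to 9x8 grayscale; one bit per adjacent
        |         # pixel pair: does brightness rise left-to-right?
        |         img = Image.open(path).convert("L")
        |         img = img.resize((size + 1, size))
        |         px = list(img.getdata())
        |         w = size + 1
        |         bits = [px[r * w + c] < px[r * w + c + 1]
        |                 for r in range(size) for c in range(size)]
        |         return sum(1 << i for i, b in enumerate(bits) if b)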
       | 
       | No one has proposed any of those as 'illegal image detectors'.
        | Apple's NeuralHash may or may not be robust to the same or
        | different perturbations, but the cited study provides basically
        | no new information to inform the conversation its press release
        | wants to be a part of.
       | 
       | [0]:
       | https://github.com/facebook/ThreatExchange/blob/main/hashing...
        
       ___________________________________________________________________
       (page generated 2021-11-10 23:00 UTC)