[HN Gopher] Coat makes wearers invisible to AI security cameras
       ___________________________________________________________________
        
       Coat makes wearers invisible to AI security cameras
        
       Author : mikece
       Score  : 112 points
       Date   : 2022-12-12 18:04 UTC (4 hours ago)
        
 (HTM) web link (petapixel.com)
 (TXT) w3m dump (petapixel.com)
        
       | ballenf wrote:
       | Next step would be programmable patches of IR emitters/absorbers
       | that change in real time during use.
        
         | jkepler wrote:
          | Or beanie caps that use an IR laser to target and blind
         | surveillance cameras like in Doctorow's "Pirate Cinema" novel.
        
         | jay_kyburz wrote:
         | The final step is we all travel around in the exact same black
          | cube on wheels. Can't get biometrics or your unique gait.
         | 
         | Or just not leave home.
         | 
         | I've always thought that surveillance footage gathered using
          | our tax money should be available to everybody. If everybody
          | could access that footage, they would understand how exposed
          | we all are as soon as we leave the house.
        
       | [deleted]
        
       | meindnoch wrote:
       | Cool! I'm gonna add it to the training set.
        
       | IndySun wrote:
       | I was surprised by this part of the story... "The coat won first
       | prize in a contest sponsored by Huawei Technologies". The same
        | Huawei that is between a rock and a hard place (on paper), i.e.,
        | much feared in the West, a success story for China. And now a
        | coat to evade the AI security of both?
        
         | jay_kyburz wrote:
         | If you were making tech to track people, how would you
         | incentivize people to think outside the box and come up with
          | ways to circumvent your tech?
         | 
         | You run a competition and test your tech against the entries.
         | 
          | When your tech fails, you know what you need to fix.
        
           | IndySun wrote:
           | >...run a competition and test your tech...
           | 
           | In a typical capitalistic, privately owned and run company,
           | yes.
        
       | superchroma wrote:
       | I am inclined to treat "research" into gimmicks that undermine
        | assumptions made by one particular trained AI as attention-
        | seeking spam at this point. There is no through-line from this
       | discovery to a greater outcome, and we just see this article over
       | and over from different groups.
       | 
        | Never mind the fact that authorities do not disclose what AI
       | implementations they use to detect people in footage.
        
         | dpflan wrote:
          | Pretty clearly this clothing needs to be digitized (electrified
          | fabrics, or screen-like or curvable/bendable screens somewhere)
          | to be actually useful and updatable. Using last year's design
          | could mean this year you get caught...
          | 
          | A digitized pendant or pin seems to accomplish the same idea
          | and would be easy to take with you.
        
           | nomel wrote:
            | > A digitized pendant or pin seems to accomplish the same
            | idea and would be easy to take with you.
           | 
           | Citation needed, for each neural network.
        
           | constantcrying wrote:
            | >A digitized pendant or pin seems to accomplish the same idea
            | and would be easy to take with you.
           | 
            | They wouldn't even resolve at the distances these cameras
            | detect people at, so what effect could they possibly have?
        
       | rqtwteye wrote:
       | This is a losing battle. Basically they are providing training
       | data for the surveillance companies. It's pretty scary that soon
       | we will have more powerful surveillance than Orwell imagined in
       | "1984".
        
         | acdha wrote:
         | The fact that research was published openly by Chinese students
         | and favorably recognized by Huawei strongly supports that
         | interpretation. That's more or less the polar opposite of what
         | you'd do if you were trying to subvert mass surveillance
         | systems rather than improve them.
        
       | i_like_apis wrote:
       | If you like this there is also https://adversarial-designs.shop/
       | 
       | They sell mugs that show up as dogs/birds/toasters/nothing ...
       | shirts that are stop signs, stickers that are toasters, etc.
       | 
       | Great secret santa gifts or stocking stuffers for ML nerds.
       | 
       | No affiliation, I just think it's cool :)
        
         | mxuribe wrote:
         | These seem a tad pricey, but very, very cool!
        
         | bogwog wrote:
          | If I wear that on the sidewalk, and confuse a self-driving
          | car enough that it causes an accident, does that make me an
         | asshole?
        
           | hoosieree wrote:
           | Chaotic neutral application of Postel's law.
        
           | tlavoie wrote:
           | Presumably the self-driving car would do a controlled stop
           | for the fake stop sign, so the asshole should be the human
           | driver who rear-ends them for stopping in the middle of the
           | block.
        
           | jeroenhd wrote:
            | Self-driving cars are still required to have a human at the
            | wheel, so I don't see much of a problem.
           | 
           | If you get hit yourself, though, don't expect sympathy from
           | the courts or your insurance company.
        
       | wallstprog wrote:
       | See also "Zero History" by William Gibson (2010) (although the
       | "invisible t-shirt" idea apparently came from Bruce Sterling).
        
         | jrd259 wrote:
         | See the scramble suit in Philip K. Dick's A Scanner Darkly
        
       | gnicholas wrote:
       | Seems like it wouldn't take much to update an algorithm so that
       | wearers of these jackets are again identified as people. The next
       | step, of course, would be to flag wearers of these jackets as
       | especially suspicious, since they are trying to evade detection.
        
         | gruez wrote:
         | >The next step, of course, would be to flag wearers of these
         | jackets as especially suspicious, since they are trying to
         | evade detection.
         | 
         | https://xkcd.com/1105/
        
           | omoikane wrote:
           | I have seen something like that:
           | https://photos.app.goo.gl/fMBpenHYzNPg4BsX9
        
           | gnicholas wrote:
           | I recently saw a car with all zeros (or O's?) except the 5th
           | digit, which was a U. It took me forever to be sure what I
           | was seeing.
        
             | tbyehl wrote:
             | I saw a news story about 'NO TAG' being a particularly bad
             | choice in vanity tags, so of course I went to the DMV and
             | got 'N0 TAG'. ~3.5 years later I haven't gotten anyone
             | else's tickets in the mail but I'm pretty sure it scored me
             | a warning on a thoroughly earned stop.
             | 
             | https://i.imgur.com/SAbCH5Z.png
        
         | jgalt212 wrote:
         | sort of like using the Tor Browser.
        
         | anonporridge wrote:
         | Probably, but perhaps at the expense of a huge increase in
         | false positives.
         | 
          | It's for a similar reason that we humans tend to get scared of
          | rustling in the night, even though the majority of the time
          | there's nothing dangerous out there. Because predators evolved
          | for stealth and camouflage, our brains overcorrect their
          | pattern matching to try to detect them, but most of our
          | "detections" are false positives.
        
           | coder543 wrote:
           | > Probably, but perhaps at the expense of a huge increase in
           | false positives.
           | 
           | "Perhaps" is a doing a lot of heavy lifting in your comment,
           | and I find the possible outcome you describe to be _very_
           | unlikely after looking at the pictures in the article.
           | 
           | Personally, I'm confident that these anti-AI patterns don't
           | work consistently across different person detection models,
           | even though the article doesn't even ask the question, let
           | alone dig into the answer.
           | 
           | The article doesn't present independent evidence that these
           | work at all, let alone against more than just a single toy
           | model built for PoC purposes.
           | 
           | It's an idea that gets clicks, and the oddly-specific "$71"
           | (just an unnecessarily specific conversion from 500 yuan)
           | also helps with attracting clicks. This article is basically
           | just clickbait, in my opinion, not anything substantial.
        
             | A4ET8a8uTh0 wrote:
             | As much as I dislike where this conversation is going (
             | edit: not this article; just our privacy expectations in
             | general ), I am inclined to agree. Beyond the obvious, as
             | the cat and mouse game continues, people who want to defeat
              | it will need to account for an almost inevitable increase
              | in the number of algo variants, and it is unlikely that:
              | 
              | 1. They are mutually exclusive
              | 
              | 2. They can't be run in close succession
              | 
              | 3. They are disclosed and known to the person trying to
              | avoid them
        
             | oofbey wrote:
             | tl;dr: These tricks work pretty reliably against budget AI
             | systems. But not good ones.
             | 
             | The surprising truth is that these camouflage anti-patterns
             | often work across many AI models. It's been a fairly
             | baffling result in many research papers that the same
             | trick-images work regardless of the model, but with an
             | important catch...
             | 
             | The models need to have been trained on the same dataset.
             | If the model was trained on COCO (super common for finding
             | objects in an image), then you can fool it. Since there are
             | a handful of academic datasets that underlie a ton of CV
             | models, these tricks will often work.
             | 
             | But if the AI company used their own dataset to train the
             | model, you can't fool it like this. (Unless you have an
             | insider steal the dataset for you.) So if a company is good
             | enough to come up with their own data, this doesn't work.
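              | 
              | If you want to sanity-check the same-dataset claim, a
              | minimal classifier-side sketch (assuming torch and
              | torchvision; the model pairing, epsilon, and image
              | path are just illustrative): craft a one-step FGSM
              | image against one ImageNet-trained model, then see
              | whether a second model trained on the same dataset is
              | also fooled.
              | 
              |     import torch
              |     import torch.nn.functional as F
              |     import torchvision.models as models
              |     import torchvision.transforms as T
              |     from PIL import Image
              | 
              |     # Two architectures, one shared training
              |     # set (ImageNet), mirroring the COCO case.
              |     src = models.resnet50(
              |         weights="IMAGENET1K_V1").eval()
              |     tgt = models.densenet121(
              |         weights="IMAGENET1K_V1").eval()
              | 
              |     norm = T.Normalize([0.485, 0.456, 0.406],
              |                        [0.229, 0.224, 0.225])
              |     prep = T.Compose([T.Resize(256),
              |                       T.CenterCrop(224),
              |                       T.ToTensor()])
              |     x = prep(Image.open("person.jpg"))
              |     x = x.unsqueeze(0)
              | 
              |     # One FGSM step against the source only.
              |     x.requires_grad_(True)
              |     y = src(norm(x)).argmax(1)
              |     F.cross_entropy(src(norm(x)), y).backward()
              |     adv = x + 8 / 255 * x.grad.sign()
              |     adv = adv.clamp(0, 1).detach()
              | 
              |     # If transfer holds, both often flip.
              |     print("src:", src(norm(adv)).argmax(1).item(),
              |           "tgt:", tgt(norm(adv)).argmax(1).item(),
              |           "orig:", y.item())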
        
             | falcolas wrote:
          | This is based on (or independently developed alongside) CV
             | Dazzle, which makes it harder for algorithms to identify
             | "human" features by obscuring the edges they rely upon.
             | 
             | The original Dazzle camouflage was effective against human
             | eyes too, so there's a pretty high bar for making an
             | algorithm dazzle-proof.
        
               | coder543 wrote:
               | This is not doing anything to the edges, and it does not
               | make it harder for me to see the person on the left, at
               | all: https://petapixel.com/assets/uploads/2022/12/cad42d4
               | d39741ca...
               | 
               | Additionally, most "cvdazzle" results on Google images
               | are trying to obscure the face, not the existence of a
               | person. This research is apparently focused on preventing
               | a person from being detected, not obscuring their face
               | with weird patterns. Even then, the "cvdazzle" stuff that
               | I'm seeing does not make it harder for me to tell there's
               | a person there. It has the same effect of obscuring
               | identity as a ski mask.
        
               | falcolas wrote:
               | I'm referring more to the jackets with the dark IR spots.
                | The one you've linked is definitely exploiting a
                | different weakness of the CV system in use.
        
               | coder543 wrote:
               | >>> The original Dazzle camouflage was effective against
               | human eyes too
               | 
               | > I'm referring more to the jackets with the dark IR
               | spots.
               | 
                | Can you cite your source? Googling for "cvdazzle jacket"
               | turns up nothing. "Dazzle jacket" just turns up a bunch
               | of fashion stuff.
               | 
               | Plus, nothing I've seen from the article -- including the
               | dark IR spots jacket -- is that difficult to identify as
               | a human, so the bar doesn't seem that high.
        
               | falcolas wrote:
               | https://cvdazzle.com/
               | 
               | https://en.wikipedia.org/wiki/Dazzle_camouflage
        
               | coder543 wrote:
               | Once again, cvdazzle seems focused on obscuring a human
               | face, not obscuring the existence of a human, and I don't
               | see how it is more effective than a ski mask.
               | 
               | A choice quote from your cvdazzle link:
               | 
               | > This face is unrecognizable to the Viola-Jones Haar
               | Cascade face detection algorithm. (It does not apply to
               | DCNN face detectors)
               | 
               | So... modern face detectors don't even have trouble with
               | cvdazzle. All four of the detectors in this sample
               | correctly identify the cvdazzled subject from the
               | cvdazzle link: https://huggingface.co/spaces/celebrate-
               | ai/face-detection-cn...
               | 
               | I'll also add a few choice quotes from Wikipedia:
               | 
               | > Unlike other forms of camouflage, the intention of
               | dazzle is not to conceal but to make it difficult to
               | estimate a target's range, speed, and heading.
               | 
               | > The result was that a profusion of dazzle schemes was
               | tried, and the evidence for their success was, at best,
               | mixed.
               | 
               | So, no, dazzle camo _does not_ seem to have a record of
               | being effective against either humans or cameras, so the
               | bar is low to start with, not  "pretty high" at all. But,
               | the goal here is also concealment, not obscuring range,
               | speed, or heading, which dazzle camo only had "mixed
               | success" for, and dazzle camo was never designed for
               | concealment at all.
               | 
               | In either case, I'm not talking about hiding a ship on
               | the horizon. I'm talking about the effectiveness of this
               | for hiding a human walking in front of a camera.
               | 
               | What was the goal here? Dazzle camo seems like it was
               | never proven to be that useful, according to wikipedia,
               | and cvdazzle is obsolete according to its own website and
               | a quick test that anyone can perform. As I said from the
               | beginning, the article OP linked appears to be nothing
               | more than clickbait. That $71 coat is not a general
               | solution to AI surveillance, and training a machine
               | learning model to detect it would not make that model
               | suddenly overwhelmed with false positives.
        
         | ignoramous wrote:
         | > _Seems like it wouldn 't take much to update an algorithm so
         | that wearers of these jackets are again identified as people._
         | 
         | Or worse, shell such coat wearers with automatic gunfire:
         | https://www.telegraph.co.uk/world-news/2022/09/26/israel-pil...
        
       | kortex wrote:
       | I experimented with these sorts of adversarial patterns a few
        | years back. It was straightforward to develop a dazzle pattern
        | that messed with your bog-standard ResNet, but it didn't
        | remotely generalize. Just using a different architecture, or
        | sometimes even a different pre-trained model, was enough to
        | thwart it.
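        | 
        | For anyone who wants to try that kind of experiment, here is
        | a rough sketch of the setup (assuming torch and torchvision;
        | the architectures, patch placement, and image path are all
        | illustrative): optimize a patch against a ResNet50, then
        | check whether it carries over to a different architecture
        | trained on the same data.
        | 
        |     import torch
        |     import torch.nn.functional as F
        |     import torchvision.models as models
        |     import torchvision.transforms as T
        |     from PIL import Image
        | 
        |     resnet = models.resnet50(
        |         weights="IMAGENET1K_V1").eval()
        |     other = models.vgg16(
        |         weights="IMAGENET1K_V1").eval()
        |     norm = T.Normalize([0.485, 0.456, 0.406],
        |                        [0.229, 0.224, 0.225])
        |     prep = T.Compose([T.Resize(256),
        |                       T.CenterCrop(224),
        |                       T.ToTensor()])
        |     img = prep(Image.open("person.jpg"))
        |     img = img.unsqueeze(0)
        |     y = resnet(norm(img)).argmax(1)
        | 
        |     # Optimize a 50x50 patch against the ResNet
        |     # only, pushing it away from the true class.
        |     patch = torch.rand(1, 3, 50, 50,
        |                        requires_grad=True)
        |     opt = torch.optim.Adam([patch], lr=0.05)
        |     for _ in range(200):
        |         x = img.clone()
        |         x[:, :, 80:130, 80:130] = patch.clamp(0, 1)
        |         loss = -F.cross_entropy(resnet(norm(x)), y)
        |         opt.zero_grad()
        |         loss.backward()
        |         opt.step()
        | 
        |     x = img.clone()
        |     x[:, :, 80:130, 80:130] = (patch.clamp(0, 1)
        |                                .detach())
        |     print("resnet fooled:",
        |           bool(resnet(norm(x)).argmax(1) != y))
        |     print("vgg16 fooled:",
        |           bool(other(norm(x)).argmax(1) != y))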
        
         | [deleted]
        
         | feet wrote:
         | That's the thing I don't get, why are people making such a big
         | deal about these adversarial patterns when they might not even
         | work on any models that are actually in use?
        
       | stuckinhell wrote:
       | Until the algorithm gets updated.
       | 
        | The AI is evolving faster every day. I do some AI-adjacent
        | work, and we have internal translation and drawing AI services
        | that are basically multiple AIs working together seamlessly.
        | Ideally they will automate the localization of one of our
        | products across 150 countries; even if it only boosts our
        | productivity 20%, it's going to be a huge win.
        
       | commieneko wrote:
       | Reminds me of the "ugly t-shirt" in William Gibson's _Zero
       | History_.
        
       | constantcrying wrote:
        | You can usually beat adversarial examples by training against
        | them, which turns the whole thing into a cat and mouse game
        | that, in the end, only strengthens the AI.
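        | 
        | A minimal sketch of what "training against them" looks like
        | (standard FGSM adversarial training; assumes PyTorch, and
        | the function names are only illustrative):
        | 
        |     import torch
        |     import torch.nn.functional as F
        | 
        |     def fgsm(model, x, y, eps=8 / 255):
        |         # Craft the adversarial view of a batch
        |         # against the model's current weights.
        |         x = x.clone().detach().requires_grad_(True)
        |         F.cross_entropy(model(x), y).backward()
        |         adv = x + eps * x.grad.sign()
        |         return adv.clamp(0, 1).detach()
        | 
        |     def adv_train_step(model, opt, x, y):
        |         # Train on clean and adversarial views of
        |         # the same batch, with the same labels.
        |         x_adv = fgsm(model, x, y)
        |         opt.zero_grad()
        |         loss = (F.cross_entropy(model(x), y)
        |                 + F.cross_entropy(model(x_adv), y))
        |         loss.backward()
        |         opt.step()
        |         return loss.item()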
        
       | brk wrote:
       | This wouldn't do anything useful in real-world scenarios. Source:
       | I've been heavily involved in AI security cameras for the last
       | decade+.
        
         | Steltek wrote:
         | So... how does this fail and what would be effective these
         | days?
        
           | bfeynman wrote:
            | These fail for a variety of reasons. The dead simple answer
            | is that the people selling these are selling a gimmick,
            | because truly doing this, as the commenter suggested, is
            | nearly impossible if you are trying to offer an actual
            | solution. Compression often adds noise that is hard to
            | account for, the patterns can stop working on model updates,
            | and vendors can easily update models to capture these
            | patterns. It relies on both a dumb model and a dumb consumer
            | to work.
        
           | brk wrote:
           | In short, there are too many different kinds of approaches to
           | AI in surveillance cameras to have a universally effective
           | camouflage.
           | 
            | In many cases there are simply not enough pixels on target
            | for these patterns to render in a way that makes them at all
            | distinguishable. You also have to account for the fact that
            | you may wind up viewing a person from any angle of a
            | 360-degree arc, so the pattern would need to wrap around the
            | entire jacket.
           | 
           | Most algorithms are looking for more of an overall target
           | size and proportions, plus things like target location
           | relative to an artificial horizon or ground plane.
           | 
           | In some cases, this might work to reduce the overall
           | classification confidence, but is unlikely to truly make the
           | person "invisible".
           | 
           | Also, thermal cameras are hardly used anymore. They have been
           | stuck at relatively low resolutions (D1 / 640x480), and
           | modern sensors have really good low-light imaging. Because
           | thermal cameras are still very costly, and because they never
           | produce a "good" image with any identifiable detail, they
            | have become quite rare overall. Even so, a few
           | patches on a jacket that show small regions of high thermal
           | contrast are unlikely to fool any systems.
           | 
            | I doubt that these researchers had access to current
            | state-of-the-art perimeter protection analytics products.
            | They most likely tested on lower-end, easily available
            | consumer products.
           | 
            | It is hard to say what would be both effective and practical
            | overall. Many systems ultimately fail on people crawling;
            | some will detect this, but often at the trade-off of many
           | false alarms, so it is usually not enabled. However, crawling
           | around is not really that practical.
           | 
           | Large groups of people moving very closely together are
           | harder to detect, particularly if they are all dressed very
           | similarly. But, I wouldn't call this a reliable evasion
           | technique.
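            | 
            | To make the size/proportion point concrete, here is a toy
            | gate of the kind I'm describing (my own illustration, not
            | any vendor's actual logic): it ignores texture entirely
            | and only asks whether a detection is person-shaped and
            | person-sized for where it sits relative to an assumed
            | ground plane.
            | 
            |     def plausible_person(box, frame_h,
            |                          horizon=0.4):
            |         x1, y1, x2, y2 = box
            |         w, h = x2 - x1, y2 - y1
            |         # Rough human aspect ratio check.
            |         if h <= 0 or not 0.2 < w / h < 0.7:
            |             return False
            |         feet = y2 / frame_h  # 0 = frame top
            |         if feet < horizon:
            |             return False  # feet above horizon
            |         # Farther up the frame = farther away,
            |         # so expected height shrinks linearly.
            |         exp = ((feet - horizon)
            |                / (1 - horizon) * frame_h)
            |         return 0.5 * exp < h < 1.5 * exp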
        
       | runemadsen wrote:
       | A much more thorough investigation of these ideas was done by
       | Adam Harvey 11 years ago: https://ahprojects.com/cvdazzle/
        
       | WheelsAtLarge wrote:
       | I've heard that an infrared light will make an object invisible
        | to a camera. Anyone know if it's true? That would surely make
        | it invisible to both the camera and the AI.
       | 
       | Note:
       | 
       | Got my answer: https://www.wikihow.com/Blind-a-Surveillance-
       | Camera
       | 
        | It's not very practical.
        
         | mindcrime wrote:
         | The thing is, the answer is really "sort of" or even "maybe".
         | Some of it depends on the camera in question. As I recall the
         | genesis of this idea was rooted in the fact that at a certain
         | point in time, many security cameras (especially cheaper ones)
         | had little or no IR filtering, and a bright IR light source
         | could cause a "flare" effect that would effectively mask other
         | parts of the field of view. But it's hard to know in advance if
         | this will or will not apply to any given camera.
         | 
         | Also, it might be able to, for example, hide your face or mask
         | your car's license plate... but it doesn't make you
         | _invisible_. In fact, just the opposite... using this technique
         | makes you acutely visible, but just (if all goes well)
         | unrecognizable. Or if you masked the entire field of vision,
         | you might be effectively  "invisible" but it would be obvious
         | to anyone watching the camera output that something weird is
         | happening. So you'd be making yourself conspicuous if there's a
         | live operator watching.
         | 
         | So yeah... it does kinda work, but definitely of questionable
         | (but probably non-zero) practicality.
        
           | brk wrote:
            | Actually, all security cameras tend to have an IR filter by
            | default; otherwise colors will be off.
           | 
           | Better surveillance cameras will have a "movable cut filter",
           | meaning a mechanism to remove the IR filter from the light
           | path to the sensor to allow for better low-light images. In
           | this mode, the camera reverts to black and white images so
           | you don't get the color shift from the ambient IR light.
           | 
           | Using some average 5mm IR LEDs in a flashlight setup during
           | daylight hours would do nothing most of the time. At night
           | you might be able to cause problems with some cheaper
            | cameras, but better units with good Wide Dynamic Range
           | specifications would be able to handle most of these kinds of
           | disruptor devices. You'd need some really powerful IR LEDs,
           | like an array of OSRAM IR LEDs (https://ams-
           | osram.com/products/leds/ir-leds) to create a strong IR
           | floodlight that would cause the camera to be blown out. Also
           | it is common these days for cameras to send alerts on
           | problems with massive image disruption, so you'd have to hope
           | you're trying to disrupt a very cheap system with nobody
           | receiving event notifications (which is admittedly still very
           | common).
        
       | pcurve wrote:
       | I wonder if there would be repercussion from the Chinese
        | government for wearing this. The fact that people would attempt
        | to create something like this and publicize the findings means
        | maybe there's a bit of hope for a freer China?
        
         | constantcrying wrote:
         | >I wonder if there would be repercussion from the Chinese
         | government for wearing this.
         | 
         | Only if they are really stupid. Adversarial attacks are easily
         | beatable and if anything this only improves the AI. These
         | attacks exploit the specific structure and training of a neural
         | network, they do not make you "invisible to AI".
        
           | coder543 wrote:
           | You're being downvoted, but I have no idea why. You're
           | completely correct.
           | 
           | Anyone who believes this $71 coat will make them "invisible
           | to AI" doesn't know how machine learning works.
        
           | 988747 wrote:
          | What you really want for avoiding security cameras is a more
           | high-tech solution (and I know I might be crossing into
           | science-fiction territory here):
           | 
          | A device that looks like a forehead flashlight, but actually
           | has a camera and some computer vision AI (or some other way
           | of detecting security cameras), and a laser beam that it can
           | use to blind those cameras.
           | 
           | Or you need something like this: https://futurism.com/the-
           | byte/watch-invisibility-cloak-milit...
        
       | randcraw wrote:
       | These coats should change the pattern they generate every second,
       | like Rorschach from Watchmen. That'd not only be cool, but should
       | guarantee anonymity since the current photo of you doesn't match
       | the last photo of you.
        
         | vorpalhex wrote:
         | Just need a giant soft e ink display.
        
       | jameshart wrote:
       | I'm curious whether $71 is supposed to be 'surprisingly cheap'
       | for a surveillance-defeating measure, or 'surprisingly expensive'
       | for a coat.
        
         | SEJeff wrote:
          | A really good North Face winter jacket (the kind you'll want
          | somewhere like Canada, Chicago, or the northern US) runs
          | $200-$350. A stylish women's (non-winter) coat can be $500+.
         | 
         | So no, $71 is not that expensive for a coat.
        
       | IvyMike wrote:
       | I feel like we need to give a hat tip to Philip K. Dick's
       | Scramble Suit from "A Scanner Darkly".
       | 
       | http://www.technovelgy.com/ct/content.asp?Bnum=997
        
         | falcolas wrote:
         | And a less metaphorical hat tip to CV Dazzle and the original
         | Dazzle camouflage as well.
        
       | wskish wrote:
       | Based on my experience with vision models (my previous company
       | has several thousand models in production), basic CNNs are great
       | at detecting camouflage patterns that they have seen before in
       | training. This type of strategy would have the exact opposite of
       | the intended effect once models were updated with these patterns
       | in the training set.
       | 
       | I guess that is an opportunity for subscription "camouflage as a
       | service".
        
         | chickenpotpie wrote:
         | I think this opens up an interesting cat and mouse game
          | however. Once the model is trained on these shirts, the shirts
          | could just be left around town and all of a sudden everything
          | is a human. Now they have to refine their algorithm even more,
          | and the people refine their methods even more, and so on and
          | so on until there are too many cases for the model to function
          | effectively.
        
           | coder543 wrote:
           | No, none of this is how ML models work either, unless you're
           | leaving humans in those shirts lying around town.
           | 
           | The model would not be trained against "just shirt" ==
           | "human". It would just be more samples from the surveillance
           | cameras of actual humans walking around being labeled
           | properly. (The _huge_ assumption here is that the shirt
           | actually worked in the first place, which would only happen
           | against a specific model, and the article doesn 't provide
           | any useful insights into anything.)
        
             | chickenpotpie wrote:
             | That's not necessarily true without knowing what model
             | they're using though. If they're doing dimensionality
              | reduction, the model could learn that if the shirt is
              | present, then the human part of the image really isn't
              | important, because the presence of the shirt is a 100%
              | accurate indicator of a person.
        
               | coder543 wrote:
               | Person detection models draw an outline around each
               | person in frame. What you describe would completely break
               | that, _regardless_ of the model.
               | 
               | The model _has_ to keep more of the context or else the
               | bounding rectangles would be all over the place.
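                | 
                | For concreteness, this is roughly what person
                | detection output looks like with a COCO-
                | pretrained torchvision model (the model choice
                | and file name are just an illustration): one
                | scored box per detected person.
                | 
                |     import torch
                |     import torchvision
                |     import torchvision.transforms.functional as TF
                |     from PIL import Image
                | 
                |     model = (torchvision.models.detection
                |              .fasterrcnn_resnet50_fpn(
                |                  weights="DEFAULT").eval())
                |     img = TF.to_tensor(Image.open("street.jpg"))
                |     with torch.no_grad():
                |         out = model([img])[0]
                |     for box, lbl, score in zip(out["boxes"],
                |                                out["labels"],
                |                                out["scores"]):
                |         # COCO class 1 == person; one box each.
                |         if lbl.item() == 1 and score.item() > 0.8:
                |             print([round(v) for v in box.tolist()],
                |                   round(score.item(), 2))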
        
               | chickenpotpie wrote:
               | The shirt gives a good idea of proportion and it could
               | figure out the size of the rectangle pretty accurately
                | from that.
        
               | coder543 wrote:
               | Not really.
               | 
               | If you want to link to some useful examples of
               | dimensionally reduced person detection models that
               | exhibit this behavior, then by all means, but none of
               | this is how any current models I've seen work. It also
               | wouldn't make sense to deploy such a model if it were so
               | easily confused by shirts lying around. That model would
               | be pretty terrible by any standard. If they're using
               | terrible technology, you probably don't need a special
               | shirt anyways.
               | 
               | Teaching a model to notice people walking through the
               | frame regardless of what shirt they're wearing is simply
               | not "a cat and mouse game", assuming they're not
               | intentionally using a terrible model or a terrible
               | dataset.
        
       ___________________________________________________________________
       (page generated 2022-12-12 23:01 UTC)