[HN Gopher] Adversarial image attacks are no joke
___________________________________________________________________
Adversarial image attacks are no joke
Author : Hard_Space
Score : 134 points
Date : 2021-11-29 14:55 UTC (8 hours ago)
(HTM) web link (www.unite.ai)
(TXT) w3m dump (www.unite.ai)
| draw_down wrote:
| The "adversarial" language around this feels like blame-shifting.
| If your algorithm thinks a flower is Barack Obama, it's not the
| flower's fault.
| laura_g wrote:
| The literature has pretty consistently shown that adversarial
| examples can be found with only black box access (even with
| truncated prediction vectors), robustness methods are primarily a
| cat-and-mouse game between attackers and defenders, and the
| existence of adversarial examples is likely inevitable
| (https://arxiv.org/pdf/1809.02104.pdf).
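|
| (As a toy illustration of the kind of black-box search meant
| here: query only the model's prediction scores and greedily keep
| random perturbations that lower confidence in the true class.
| The scikit-learn model, digits data, and noise scale below are
| made-up stand-ins, not anything from the paper.)
|
|     import numpy as np
|     from sklearn.datasets import load_digits
|     from sklearn.linear_model import LogisticRegression
|
|     # "deployed" model that we can only query, not inspect
|     X, y = load_digits(return_X_y=True)
|     model = LogisticRegression(max_iter=2000).fit(X, y)
|
|     x, true_label = X[0].copy(), y[0]
|     rng = np.random.default_rng(0)
|     best = x.copy()
|     best_conf = model.predict_proba([best])[0][true_label]
|     for _ in range(2000):
|         # propose a random nudge, keep it if confidence in the
|         # true class drops
|         cand = best + rng.normal(scale=0.5, size=x.shape)
|         cand = np.clip(cand, 0, 16)  # stay in valid pixel range
|         conf = model.predict_proba([cand])[0][true_label]
|         if conf < best_conf:
|             best, best_conf = cand, conf
|         if model.predict([best])[0] != true_label:
|             break
|     print(model.predict([best])[0], np.linalg.norm(best - x))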
|
| The big question that remains is: so what? There are exceedingly
| few use cases where the existence of adversarial examples causes
| a security threat. There's a lot of research value in
| understanding adversarial examples and what they tell us about
| how models learn, generalize, and retain information, but I am
| not convinced that these attacks pose a threat anywhere near
| proportional to the attention they receive.
| owlbite wrote:
| Self-driving cars seem like a dangerous target if an
| adversarial image can be deployed in such a way as to cause
| them to perform dangerous maneuvers on demand.
| Isinlor wrote:
| There are plenty of natural "adversarial examples" to worry
| about.
|
| Like a billboard with a stop sign on it.
|
| https://youtu.be/-OdOmU58zOw?t=149
| genewitch wrote:
| I'll be more inclined to start believing that self driving
| / autonomous vehicles are actually "coming soon" when the
| federal government decrees it is illegal to wear clothing
| with certain markings/colors. No red octagons, no
| reflective red and white parts, no yellow vertical stripes,
| etc.
|
| I don't think that "causing an AI to fail to stop" is the
| correct threat to address; I think "making an AI stop and
| therefore causing traffic" is.
|
| Wake me up when I can have any two arbitrary addresses as
| start and end points and a machine or computer can drive me
| between them, 24/7/365 - barring road closures or whatever.
| laura_g wrote:
| I completely agree, but that's a very big "if". I'm not
| terribly familiar with autonomous vehicle driving systems,
| but my passing understanding is that there are multiple
| components working together that help make predictions, and
| these systems do not rely on any single point of failure.
|
| The classic example of a sticker on a stop sign is, in my
| view, more of a dramatization than a real threat surface.
| Designing an adversarial perturbation on a sticker that can
| cause misclassifications from particular angles and lighting
| conditions is possible, but that alone won't cause a vehicle
| to ignore traffic situations, pedestrians, and other
| contextual information.
|
| Plus, if I wanted to trick a self driving vehicle into not
| stopping at an intersection, it would be much easier and
| cheaper for me to just take the stop sign down :)
| m3kw9 wrote:
| I can propose a non-trivial solution to these problems: have a
| data cleaner average over and ignore certain data, the way
| humans do it. Humans ignore everything but the face and maybe
| the body, and we don't examine someone's follicles either; we
| basically average.
| adolph wrote:
| "Adversarial" communication with a CV model inference process
| isn't necessarily an attack because it is unintended by the
| humans associated with the process. It is more akin to using the
| full range of an API that uses radiation instead of a network
| port. It could be used to stage a protest by stopping or slowing
| cars on a freeway or call attention to deteriorating
| infrastructure by inducing the car to go over potholes instead of
| avoiding them. Maybe a neighborhood could self-implement traffic
| calming measures that don't apply to emergency vehicles.
| voldacar wrote:
| If you are trying to make my car go over potholes without my
| consent, or in any way do something that I don't want it to,
| that is adversarial behavior. You are my adversary.
| JohnFen wrote:
| I take a large measure of hope from this. I see facial
| recognition as a large societal threat, and it's nice to know
| that a defense is possible.
| SavantIdiot wrote:
| There is a fundamental disconnect between what deep vision models
| can do and what is expected of them. On the one hand, there is a
| very good reason why mean-average-precision is used to assess
| detection-classification models: because even people make
| mistakes. On the other hand, we need to apply these forever-
| imperfect models with care, context, and redundancy. This
| is why engineers add a dozen other input types to ADAS systems in
| addition to vision (sonar, lidar, mesh computing, etc.). This is
| why regulation is needed: to prevent less rigorous products from
| making their way into situations where they can be easily
| compromised or, worse, deadly.
| p2p_astroturf wrote:
| Ever since I had the misfortune of learning about hacker kids
| wanting self-driving cars, I've been saying you can literally put
| a poster on the side of the road and every car that comes by it
| will crash. Seems like I'm on the right track. Software has edge
| cases. Every software engineer knows this.
|
| >The second-most frequent complaint is that the adversarial image
| attack is 'white box', meaning that you would need direct access
| to the training environment or data.
|
| The training data will be leaked. Companies are very bad at
| classifying what is and isn't private information that they need
| to keep secret. But anyway you probably don't even need the
| training data.
| igorkraw wrote:
| I work in this field, I have a project specifically on
| adversarial examples and I have a strong opinion on this. I
| personally think worrying about adversarial examples in real life
| production systems is like worrying about getting the vanilla
| linux kernel to perform RT critical tasks. It is _fundamentally_
| not a burden you should put on that one component alone and is a
| problem you can _only_ solve with a system approach. And if you
| do that, it is for all practical purposes already solved: apply
| multiple random perturbations to the input, project your
| perturbed version onto a known, safe image space, and establish
| consensus. [1] is a work from my university which I like to point
| towards. Yes, this lowers accuracy; yes, you won't be able to do
| critical things with this anymore, but that's the price you pay
| for safety. Not getting hyped about CNNs and adopting a fail-safe
| approach that is only augmented with NNs is (in my humble
| opinion) why Waymo has 30k miles between disengagements [2] now,
| while Tesla is either going to make me eat this post (not
| impossible given Andrej Karpathy is much smarter than me) or is
| trying to hide the fact that they will never have anything
| resembling FSD by avoiding reporting numbers.
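|
| To make the consensus part concrete, here is a minimal sketch in
| the spirit of randomized smoothing (not the exact method of [1];
| the noise scale and agreement threshold are placeholders, and
| "model" is any classifier with a predict() method):
|
|     import numpy as np
|
|     def consensus_predict(model, x, n_samples=50,
|                           noise_scale=0.1, min_agreement=0.8):
|         # classify many randomly perturbed copies of the input
|         # and act only on a strong consensus; otherwise abstain
|         # and fall back to the fail-safe path
|         rng = np.random.default_rng()
|         votes = []
|         for _ in range(n_samples):
|             noisy = x + rng.normal(scale=noise_scale, size=x.shape)
|             votes.append(model.predict([noisy])[0])
|         labels, counts = np.unique(votes, return_counts=True)
|         top = counts.argmax()
|         if counts[top] / n_samples >= min_agreement:
|             return labels[top]
|         return None  # abstain
|
| The projection onto a known, safe image space from [1] would slot
| in before the predict() call; this only shows the perturb-and-
| vote half.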
|
| [3] is another paper I recommend for anyone who wants to USE CNNs
| for applications and calmly assess the risk associated with
| adversarial examples.
|
| Now, from a _research_ perspective they are fascinating: they
| highlight weaknesses in our ability to train models, are a
| valuable tool to train robust CV models in the low-data regime,
| and have paved the way towards understanding the types of
| features learned in CNNs (our neighbours just released this [4],
| which in my eyes debunked a previously held assumption that CNNs
| have a bias towards high-frequency features, which is a
| fascinating result).
|
| But for anyone wanting to use the models, you shouldn't worry
| about them, because you shouldn't be using the models for anything
| critical in a place where an attack can happen _anyway_. The same
| way that "what is the best way to encrypt our users' passwords so
| they cannot be stolen" is the wrong way to approach passwords,
| "how can we make the deep neural network in the application-
| critical path robust against targeted attacks" is (for now) the
| wrong way to approach CV.
|
| [1] https://arxiv.org/abs/1802.06806
|
| [2]
| https://www.forbes.com/sites/bradtempleton/2021/02/09/califo...
|
| [3] https://arxiv.org/abs/1807.06732
|
| [4]
| https://proceedings.neurips.cc/paper/2020/hash/1ea97de85eb63...
| OldHand2018 wrote:
| I see a completely different attack vector here.
|
| Lawyers.
|
| If you are _selling_ a product or service that has been trained
| on a dataset that contains copyrighted photos you don't have
| permission to use and I can "prove it" enough to get you into
| court and into the discovery phase, you are screwed. I'll get an
| injunction that shuts you down while we talk about how much money
| you have to pay me. And lol, if any of those photos of faces was
| taken in Illinois, we're going to get the class-action lawyers
| involved, or bury you with a ton of individual suits from
| thousands of people.
|
| That link at the bottom about a "safe harbor" you get from using
| old datasets from the Wild West is not going to fly when you
| start _selling_.
| KingMachiavelli wrote:
| IIRC simply training on copyrighted material is completely fine
| or at least you can claim fair use. As long as the market of
| the copyrighted material is not 'AI data training set' then it
| should be OK. Essentially scraping images from the internet is
| OK but using a pirated copyrighted commercial AI data training
| set is not. (Fair use doesn't necessarily exclude use for a
| commercial/sold product.)
|
| But if the AI model just spits out copyrighted material
| verbatim then that is still owned by the actual copyright
| holder.
| NoGravitas wrote:
| I dunno, Microsoft seem to think they can get away with
| training autocomplete on copyrighted source code that they
| don't have permission to use.
| laura_g wrote:
| This would be a membership inference attack:
| https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7958568...
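|
| The simplest variant doesn't even need the shadow models from
| that paper: overfit models tend to be more confident on training
| members than on unseen points, so a bare confidence threshold
| already leaks membership. A toy sketch (the threshold is a
| placeholder and "model" is any classifier with predict_proba):
|
|     def membership_scores(model, X):
|         # higher score = more likely the example was in the
|         # training set, to the extent the model overfits
|         return model.predict_proba(X).max(axis=1)
|
|     def infer_membership(model, X, threshold=0.99):
|         return membership_scores(model, X) >= threshold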
| OldHand2018 wrote:
| Oh excellent. But of course the key addition is handing off
| this information to lawyers who use it to shut you down
| and/or extract money from you.
|
| If you are using some torrent of a dataset, nobody is
| indemnifying you, and once you get to the discovery phase of
| a lawsuit, they are going to know that you intentionally
| grabbed a dataset you knew you shouldn't have had access to.
| Treble damages!
| mgraczyk wrote:
| As somebody who works on computer vision, my general take on
| these things is that adversarial examples are like poison.
|
| It would be fairly easy to add poison to a water supply or the
| air intake of a large building and kill a large number of people.
| This rarely happens though.
|
| It's ok that water sources, buildings, and people aren't
| completely immune to poison. The safety requirement isn't that
| poison can't hurt. Instead, we rely on weaker protections. We try
| to make known poisons hard to make, we try to track people who
| could make them, and we try to make it hard to deliver poison.
|
| I believe the same will be true of adversarial examples for
| vision (and language) models. We can try to make them hard to
| make, hard to possess anonymously, and hard to deliver. I think
| this will be much easier with computer vision than with poison,
| so I'm not worried about it.
|
| For example, consider the case of pasting a sticker on a speed
| limit sign that causes Teslas to swerve off the road. Governments
| should protect people from this in multiple ways, similarly to
| how they protect us from poison:
|
| 1. People who post these stickers should go to prison.
|
| 2. People who create and distribute these stickers knowing their
| purpose should go to prison.
|
| 3. Tesla should be civilly liable for cases where preventing such
| an incident was possible with known technology.
|
| 4. Roads should be modified over time to make it more difficult
| to do this attack.
|
| I think some combination of the above would be enough to make
| society as comfortable with adversarial example risk as we are
| with poison risk.
| pixelgeek wrote:
| > For example, consider the case of pasting a sticker on a
| speed limit sign that causes Teslas to swerve off the road.
|
| If your vision system can be caused to swerve off a road by a
| sticker then maybe it shouldn't be used?
| mgraczyk wrote:
| I bet I could cause a significant fraction of human vision
| systems to get in a crash with a well placed sticker. I'd
| replace "<- One Way" with "Detour ->".
| jancsika wrote:
| I see this kind of argument with blockchain bros as well,
| and it drives me nuts.
|
| If I write a crappy paint program and all I can claim is,
| "It's no worse than the time/effort of drawing by hand,"
| what exactly have I achieved in your opinion?
|
| And if the posts on HN wrt blockchain and ML constantly
| feature these "no-worse-than-what-we-are-replacing"
| arguments while posts about, say, paint programs don't,
| what does that say about the buzz around blockchain and ML?
|
| Edit: clarification
| mannykannot wrote:
| These attempts to imply a broad equivalence between current
| machine vision and human capabilities do not hold up under
| a modicum of scrutiny.
|
| Humans have well-developed models of how things should be,
| can detect when things seem wrong, and come up with ways to
| address the apparent anomaly (including taking steps to
| investigate and evaluate the situation.)
|
| Humans do not always use these capabilities well, but they
| have them, while similar capabilities are at best
| rudimentary and fragile in current AI. The premise of this
| article is that these capabilities will not come easy.
| pixelgeek wrote:
| But that is not a vision issue. That is providing people
| with incorrect information.
| b3morales wrote:
| The specific trick doesn't really matter; the point is
| that it's possible to maliciously create a situation that
| makes human pilots act dangerously. We accept that the
| possibility can't be made nil, and we have post facto
| rules to deal with it. The same principle applies to
| traps for machines.
| ClumsyPilot wrote:
| Nope, if 0.1% of humans crash but 100% of Teslas crash,
| that's not 'the same'.
| qw501428 wrote:
| Perhaps a mirror/reflective sticker that blinds drivers
| near a sharp curve?
| marcosdumay wrote:
| I'm pretty sure this would fail to kill people in almost
| every place you could try it. And if it works somewhere,
| it's because there are other problems with the road that
| should be fixed.
|
| Human driving is full of redundancies, and there is a clear
| hierarchy of information. People will not rush into a road
| full of cars going on the other way, it doesn't matter what
| the signs say.
|
| If your automated driving system doesn't have those same
| features, it's not ready for use.
| inetknght wrote:
| > _People will not rush into a road full of cars going on
| the other way, it doesn 't matter what the signs say._
|
| You might want to watch the one-way roads in big cities.
| It happens a lot more often than you assume.
|
| It also is (usually) self-correcting: oncoming traffic
| will honk, stop, or move around. The offender will
| (usually) realize their mistake and try to correct.
|
| Sometimes, though, that's not enough. Searching "killed
| in wrong way one way" on DDG (or presumably Google) yields
| many (!) news stories.
| AnimalMuppet wrote:
| Been there, done that (except the "killed" part). It was
| in a heavy fog. I was doing well to find a street _at
| all_ , and it turned out to be one way the wrong way (the
| only such street in town). I figured it out when I saw
| wall-to-wall headlights coming at me out of the fog, and
| made a fast move for the curb...
|
| So, yeah. People react. Which brings up the question: How
| well do self-driving AIs respond to a wrong-way driver?
| How well do self-driving AIs recover when _they_ are the
| wrong-way driver, and they suddenly have enough data to
| realize that?
| Ozzie_osman wrote:
| You put too much faith in humans. Things like stop sign
| removal have caused deaths in the past.
| https://www.nytimes.com/1997/06/21/us/3-are-sentenced-to-15-...
| Vetch wrote:
| Humans are not so bad as drivers. Your example is an
| event from over 2 decades ago and was deemed newsworthy.
| Humans drive in all kinds of conditions, but the death rate is
| about 1 per 100 million miles driven. A search reveals
| crashes to be on the order of hundreds of collisions per
| 100 million miles driven. Age, country, intoxication
| level, road design and laws, and road and environmental
| conditions also play a major role, such that accident
| rates for someone aged 30+ in a Northern European country
| are going to be a lot lower than for teenagers in a country
| where road laws are merely friendly suggestions (and
| considering the chaos of driving in those countries, the
| rates are actually surprisingly low).
| catlikesshrimp wrote:
| I will go further and say that almost every time there is
| an accident, the driver is somehow impaired: lack of
| sleep, drugs, illness (old age included, mental disease),
| poor judgement (young age included, emotional distress).
|
| Humans are surprisingly good at driving under normal
| conditions.
| hermitdev wrote:
| Some are. Some are not. Last week, I was nearly in two
| accidents on maybe a 1 mile trip to the store from my
| house. Both times were people pulling out of traffic,
| ignoring right of way. _I_ prevented the accidents that
| would have resulted from these two separate idiots. I
| have also been in over 20 accidents in my 25 years of
| driving, the vast majority of those having been rear-
| ended and none were my fault.
|
| In my experience, I've not been in an accident with a
| teen, nor someone elderly, though I know people that have
| (both causing and being involved). Neither have I been in
| an accident with someone that I could tell was impaired
| by drugs or alcohol. I don't know for sure any of them
| involved a phone for that matter. Weather was only a
| factor in one accident (pouring rain, low visibility).
|
| I have nothing to suggest that any of my accidents were
| caused by anything other than inattentiveness, even the
| one time weather played a minor role. I also see a lot of
| dangerous behavior every time I drive: people running
| lights and stop signs, completely ignoring yield signs
| (seriously, they must be invisible to everyone else),
| failing to yield right of way, failing to signal turns
| and lane changes (my favorite is turning the signal on
| _after_ moving into the turn lane), lots of phone usage
| (for everything except making a call, from maps to
| texting to watching videos!).
| ummonk wrote:
| > I have also been in over 20 accidents in my 25 years of
| driving, the vast majority of those having been rear-
| ended and none were my fault.
|
| Do you just drive a lot or do you brake too late / too
| hard? Because an accident rate that high is rather
| unusual.
| frenchyatwork wrote:
| > or do you brake too late / too hard
|
| You mean live in a place where drivers tailgate?
| YetAnotherNick wrote:
| You could scatter a handful of nails on the road and I think
| there is a good chance it would cause an accident. Or you
| could just dig a hole using tools available in most
| homes. Agreed, it's not that easy, but it's not hard either.
| [deleted]
| 323 wrote:
| > People will not rush into a road full of cars going on
| the other way, it doesn't matter what the signs say.
|
| And people would not drive into a river passing through
| multiple barriers, just because their GPS says so.
|
| https://theweek.com/articles/464674/8-drivers-who-blindly-fo...
|
| https://indianexpress.com/article/trending/bizarre/driver-in...
| bryanrasmussen wrote:
| The claim is not that automated driving systems are ready
| for use. The claim is that if you do things to compromise a
| system in a way that has a good chance of killing people,
| and it then does kill people, that should be illegal,
| which of course it already is.
| AnimalMuppet wrote:
| Yeah. "Voluntary manslaughter" and "malicious mischief"
| are already things you can prosecute for.
| siboehm wrote:
| This has always confused me as well. What would be the
| reason why some adversary would choose to craft an
| adversarial example and deploy it in the real world versus
| the much easier solution to just remove / obscure the sign?
| tonyarkles wrote:
| Depending on how big or small it needs to be, potentially
| for subtlety? Especially on current roads that are shared
| by humans and self-driving systems, a human observer will
| immediately notice that something is terribly wrong with
| a replaced sign.
|
| But... around here at least, signs have stickers or
| graffiti on them often enough. Like adding the name of a
| politician under a stop sign: "Stop [Harper]". An
| appropriately made adversarial example won't stick out
| visually the same way that a wholesale sign swap will.
| laura_g wrote:
| Because NeurIPS doesn't publish papers on stop sign
| removal yet :P
| pueblito wrote:
| Warfare comes to mind, as weapons gain increasingly
| powerful AI functions and become autonomous.
| beerandt wrote:
| There are multiple reasons for signs to have different
| shapes, sizes, and colors, and this is one of them.
|
| An orange diamond "detour" sign isn't easily confused for a
| smaller rectangle "one way" sign.
|
| Additionally, there should always be two large "do not
| enter" plus two large red "wrong way" signs that are
| visible to a driver from in the intersection before
| turning.
|
| Something as simple as tape or other coverings on an
| existing sign should never result in any confusion as to
| right-of-way for a driver paying attention.
| rictic wrote:
| Some people key off the shape enough that they wouldn't
| follow a wrongly-shaped detour sign, so you wouldn't fool
| everyone, but you'd absolutely fool a lot of people. I
| expect I'd be one of them.
| jstanley wrote:
| I think in almost all cases that would not cause a crash.
| The drivers would see the oncoming traffic and stop rather
| than crash.
| gmadsen wrote:
| That assumes you can see the threat. If instead it led to
| an unprotected crossing at high speed, then you have a
| very different situation.
| glitchc wrote:
| I bet you can't. Humans are anti-fragile and can compensate
| with other knowledge.
| GistNoesis wrote:
| Turn a temporary 30 km/h speed-limit sign into an 80 km/h one
| with some black tape (I have already seen it done when people
| were angry about being fined by a speed camera for a few
| excess km/h, or just for the lulz). It probably won't
| fool humans, but it's an edge case that a self-driving car
| may fall for.
| aneutron wrote:
| What you are proposing is what I think would be called
| security theater.
|
| It gives the illusion of security, but they would absolutely
| not deter a determined threat actor.
|
| The only reason that the water supply isn't poisoned is that
| it's impractical for a single person to conduct the whole
| exploit chain: construct the poison in sufficient quantities,
| gain access to facilities supplying the water, and actually
| throw the compound in. It's impractical even for "underground"
| types, especially in the quantities required.
|
| Mathematics and computer science are a different story in my
| opinion. You cannot restrict science or thought. You can try,
| but good luck. The most you can do is delay it. If there is an
| attack that enables someone to flip a Tesla on the road (as
| suggested below), the security theater will hide the attack
| from common folk, but determined actors will reach it
| eventually, and at that point, they can deploy it as they wish.
| And in contrast to the water plant, the logistical endeavor to
| exploit it is absolutely easy in comparison: slap a sticker on
| your vehicle.
|
| Security by obscurity or by theater is rarely a good strategy
| in my opinion. We should absolutely be transparent about these
| kinds of things, and allow researchers full access to develop
| attacks against these systems, and effectively communicate when
| they are found.
| tshaddox wrote:
| It's pretty easy for a single person to modify or remove an
| important street sign. Some kids stole traffic signs and were
| convicted of manslaughter when someone ran a (missing) stop
| sign and killed another driver.
| https://www.washingtonpost.com/archive/politics/1997/06/21/3...
| gh0std3v wrote:
| > What you are proposing are what I think would be called a
| security theater.
|
| I don't think putting people in prison for, say, flipping a
| Tesla by screwing with its computer vision algorithm is
| security theatre. Rather, it's accountability. I'm pretty
| sure most people are aware that you cannot stop a determined
| attacker from breaking a system (which is exactly why Spectre
| mitigations were implemented as soon as the vulnerability was
| discovered: it's hard to exploit, but still possible).
|
| Defining a legal code for exploiting computer systems through
| their hardware or their software is not security theatre,
| it's to ensure that we have a system to punish crime.
| xyzzyz wrote:
| _Construct the poison in enough quantities, gain access to
| facilities supplying the water, and actually throwing the
| compound in it._
|
| Gaining access is rather easy. You can easily fly drones over
| most reservoirs and dump whatever you want into them.
| Making strong poisons is also relatively easy; e.g.,
| dimethylmercury can be easily synthesized by any chemistry
| graduate.
| jcims wrote:
| You can just pump it back into the municipal water supply
| from the comfort of your own home (or better yet, someone
| else's). You may need to work around a backflow preventer
| but that's not too difficult.
| mgraczyk wrote:
| Right, and the protections for poison are also security
| theater for the same reason. In the real world that's ok.
|
| > The only reason that the water supply isn't poisoned is
| it's unpractical for a single person to conduct the whole
| exploit chain
|
| It's a quantitative question, just like with computer vision.
| If you don't like the poison example, consider viral DNA,
| which is also dangerous in the right hands and does not
| require massive supply chain control. Not everyone has access
| to a driving dataset like Tesla's, and it would be difficult
| to trick a Tesla without such a dataset.
|
| We should allow researchers to develop attacks, just like we
| should allow researchers to study poisons, DNA, and viruses.
| eximius wrote:
| > It gives the illusion of security, but they would
| absolutely not deter a determined threat actor.
|
| Sure. And the threat of jail/imprisonment doesn't deter
| determined murderers. It doesn't mean we shouldn't put
| deterrents.
| p_j_w wrote:
| >It doesn't mean we shouldn't put deterrents.
|
| GP doesn't say we shouldn't, but rather that it's not good
| enough.
| eximius wrote:
| Generally calling something security theatre has an
| implication that it shouldn't be done because of its
| inefficacy and the availability of robust alternatives
| (e.g., port knocking is theatre when we can have robust
| security on known ports with minimal configuration and
| cryptography).
| aneutron wrote:
| While I do agree that security theater does have a
| connotation for things that have no reason to be done, I
| only meant that it's not enough. It's theater in the
| sense that it would only provide a sense of safety, not
| solve the actual underlying issue or vulnerability class.
| ghaff wrote:
| In general, very little is ever enough to completely
| prevent some sort of determined targeted attack,
| especially if the attacker doesn't care whether they're
| caught or not.
| tshaddox wrote:
| Depends what you mean by "not good enough." It's
| obviously not perfect, like all our laws and systems for
| preventing crimes.
| [deleted]
| darepublic wrote:
| So your solution is to create a totalitarian state, so your
| flaky software can be secure. No thanks.
| ketzo wrote:
| Relevant XKCD: https://xkcd.com/1958/
|
| > I worry about self-driving car safety features.
|
| > What's to stop someone from painting fake lines on the road,
| or dropping a cutout of a pedestrian onto a highway, to make
| cars swerve and crash?
|
| > Except... those things would also work on human drivers.
| What's stopping people _now_?
|
| > Yeah, causing car crashes isn't hard.
|
| > I guess it's just that most people aren't murderers?
|
| > Oh, right, I always forget.
|
| > An underappreciated component of our road safety system.
| citilife wrote:
| That's how I feel about most dangerous situations in general
| and I think the national news highlights one-off events in a
| way we historically were not used to.
|
| For instance, taking out the United States internet would
| probably only require 3-4 strategic bombings. I bring this
| up because Tennessee had one of those bombed last Christmas --
| https://www.theverge.com/2020/12/28/22202822/att-outage-nash...
|
| > This brought down wireless and wired networks across parts
| of Tennessee, Kentucky, and Alabama
|
| Most people aren't all that concerned about doing damage.
| Keep people happy and generally you don't have crime.
| yjftsjthsd-h wrote:
| > I believe the same will be true of adversarial examples for
| vision (and language) models. We can try to make them hard to
| make, hard to posses anonymously, and hard to deliver. I think
| this will be much easier with computer vision than with poison,
| so I'm not worried about it.
|
| Erm. We can maybe do something about delivery, but stopping
| people from _making_ (and thus, possessing) them is virtually
| impossible, since all you need is an undergrad-level
| understanding of ML (if that) and some freely-available
| software.
| version_five wrote:
| A lot of this has been touched on already, but I think your
| rules could be reframed a bit to try to simplify lawmaking and
| avoid security theatre, as was mentioned.
|
| First, I assume it's already illegal to be "adversarial" to
| drivers. A bright light or changing signs etc already do that
| now. For example look at all the laser pointer stuff with
| planes.
|
| Second, I don't think self driving cars are just using the
| softmax output of an object detector as a direct input to car
| control decisions. In the absence of a stop sign, the expected
| behavior would be common sense and caution, the same as if
| someone removed the sign. If the SDC logic is not robust in
| this way, it's not safe for many other reasons.
|
| With this in mind, I think the situation is probably already
| reasonably well covered in existing regulations.
| dogleash wrote:
| There would be support to outlaw adversarial attacks towards
| self-driving cars. As other posters have suggested it probably
| already is illegal, or is a narrow expansion of scope for
| existing laws.
|
| >We can try to make them hard to make, hard to posses
| anonymously, and hard to deliver.
|
| To stretch your own analogy, I have a wide selection of poisons
| at home. Except we call them cleaning products, insecticide and
| automobile fluids.
|
| You can get public support against adversarial attacks on self-
| driving. Except the main use case for computer vision is
| passive surveillance. Good luck on that front.
|
| Oh, and just for funzies, I'll point out the irony that some of
| the people building CV surveillance systems would post on HN
| that regardless of regulation it'll exist no matter what the
| government wants. The argument was that it'd be so hard for the
| government to control CV surveillance, that law wouldn't
| prevent business from creating and using it anyway. When it
| comes to adversarial attacks, it seems more likely to involve
| actions of private individuals rather than businesses, and
| businesses minimize legal risk in a way individual citizens
| don't.
| indymike wrote:
| > People who post these stickers should go to prison
|
| Doing things with intent to harm others is illegal, even if you
| use a sticker to do it.
|
| > Tesla should be civilly liable for cases where preventing
| such an incident was possible with known technology.
|
| This is currently likely the case, but is not proven until a
| lawsuit happens.
| p2p_astroturf wrote:
| Your analysis does not break out of the well known box that is
| the classical ways of analyzing the security of a computer
| system (it actually creeps into DRM/TPM territory which is
| known insecure despite governments with guns). Thus the
| security of "AI" algorithms remains as insecure as it already
| was, and should not be used for anything that needs to be
| secure. If anything, the people who make critical
| infrastructure insecure should go to prison (after education is
| reformed to actually teach these basic problems). Your example
| is like how typical american citizens get their panties in a
| bunch and throw you in jail for 5000 years if you fake your
| identity, but this is only because they have built such
| insecure systems that completely break down once this happens.
| And this is yet another thing not fixed by policing. Sorry not
| sorry if I sound rude. You are basically asking me to go to
| jail so you can use some convenient AI consumer tech in lieu of
| proper solutions for stuff like authentication, court systems,
| and car driving (and all the other things the wackos want to
| replace with AI).
| mgraczyk wrote:
| No, I'm asking you to go to jail if you intentionally try to
| cause somebody to die.
| jimbob45 wrote:
| >It would be fairly easy to add poison to a water supply or the
| air intake of a large building and kill a large number of
| people.
|
| I used to think this until someone walked me through the
| logistics of both and made me realize that you would need an
| agency-alerting level of poison for the water supply and some
| way to avoid people just shutting off the A/C and sticking
| their heads out of windows (also a huge amount of gas). It also
| assumes the news doesn't alert anyone immediately.
| xboxnolifes wrote:
| > agency-alerting level of poison
|
| Doesn't this fall under: "We try to make known poisons hard
| to make, we try to track people who could make them, and we
| try to make it hard to deliver poison"?
|
| And the rest of it comes down to using something odorless /
| tasteless.
| ggoo wrote:
| I think a factor that should also be considered in your analogy
| is that poison is much more difficult to attain than stickers.
| I have to imagine that if poison were as cheaply and widely
| available as stickers, we'd have a much larger problem than we
| currently see.
| wcarey wrote:
| Most households contain at least several poisons (and
| precursors to hazardous gasses as well) amongst their
| cleaning supplies.
| [deleted]
| seany wrote:
| 1 and 2 are almost always going to be impossible in the US due
| to the First Amendment (this is a feature, not a bug).
|
| 3 doesn't seem crazy, but it would practically end up with
| caps, which might not be what you're looking for.
|
| 4 both seems possible and will basically never happen,
| due to cost in every little jurisdiction.
| hacoo wrote:
| I doubt 1 would be protected by the first amendment. It's
| arguably equivalent to spraying graffiti on a stop sign so
| it's unrecognizable.
|
| It would be extremely difficult to enforce, though.
| boomboomsubban wrote:
| Graffiti would just cause people to drive unsafely, in that
| hypothetical the sticker directly causes crashes. It'd be
| something like attempted murder.
| mgraczyk wrote:
| #1 is certainly not a first amendment violation. In fact, the
| supreme court still holds that certain restrictions on
| billboards are allowed even for the purpose of preserving
| beauty. Safety is a much more compelling interest than
| beauty, so I don't expect states and cities will lose their
| ability to regulate road signage.
|
| See Metromedia, Inc. v. San Diego for example.
|
| #2 is expensive and difficult, but that's what we do for
| explosives, poisons, drugs, etc.
| nemo44x wrote:
| Explosives, poisons, drugs aren't speech. Printing the
| chemistry for them is protected speech.
|
| If I wanted to print an image and put it on a t-shirt that
| would trick a computer driven car into doing something if
| its cameras saw my shirt, that's not my problem. The
| barrier to entry is much lower too so I think it's up to
| the engineers to solve it instead of trying to dump the
| hard problems on society.
| boomboomsubban wrote:
| This is like saying "if I set up a movement based
| explosive in a public place, and you just happened to
| walk by it, that's not my problem." Yes it is, you took
| actions that you knew could severely harm people.
| KingMachiavelli wrote:
| No those are completely different. The physical act of
| owning an explosive _can_ be made illegal and _is_. In
| the US the act of owning and expressing an element of
| speech _is_ protected under US law.
|
| You are getting close to something with your second
| statement. There are laws that criminalize _actions_ like
| yelling 'Fire' inside a movie theater or provoking a
| fight (fighting words). Essentially these laws isolate
| the protected 'speech' from a non-speech and therefore
| non-protected 'action'.
|
| However, it would be an extreme stretch to apply or
| expand these to apply to simply wearing a t-shirt. There
| is already plenty of case law that says
| wearing/displaying symbols or profanity is not enough to
| be considered fighting words/act. Heck, in most cases
| just using a racial epithet is not enough to be
| considered fighting words and/or hate speech. [1]
|
| The most you will ever be able to convict for is someone
| installing these adversarial images on public property
| (e.g. street signs). In that case you might be able to use
| the harmful nature/intent of the images to elevate what
| would otherwise be a vandalism charge to assault.
| Essentially there needs to be a distinct and meaningful
| 'action' beyond just wearing/expressing speech.
|
| [1] https://www.msn.com/en-us/news/us/federal-court-saying-the-n...
| boomboomsubban wrote:
| > The physical act of owning an explosive can be made
| illegal and is.
|
| Then let me change my example to show that using legal items
| with the intent to cause harm is still illegal. I'm
| free to put razors into candy, but if I hand it out on
| Halloween it'd be illegal.
|
| >However, it would be an extreme stretch to apply or
| expand these to apply to simply wearing a t-shirt. There
| is already plenty of case law that says
| wearing/displaying symbols or profanity is not enough to
| be considered fighting words/act.
|
| This hypothetical T-shirt isn't comparable to fighting
| words; wearing it would unquestionably cause harm to the
| relevant ones who encounter it. Owning or creating it
| might not be a crime, but wearing it in public is
| endangering the public.
| seany wrote:
| Someone could stand holding it in protest of self driving
| cars.
| eximius wrote:
| Defacing property is not free speech...?
| seany wrote:
| Just stand in view holding the image on a poster board.
| glitchc wrote:
| Adversarial examples don't confuse people, only algorithms.
|
| Perhaps you need to face the fact that if the CV algorithm
| fails against these examples when humans don't, then the CV
| algorithm is too brittle and should not be used in the real
| world. I don't trust my life to your "It kinda looks like a
| road, oh wait it's a pylon, I've been tricked, BAM!" dumpster
| fire of an algorithm.
|
| We used to have to craft robustness into algorithms based on
| the false positive rate. Nobody looks at a CFAR-style approach
| anymore, and it shows. The state-of-the-art approach of pinning
| everything on ML is a dead end for CV.
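|
| For anyone who hasn't met it, CFAR (constant false alarm rate)
| detection sets the threshold from a local noise estimate so the
| false positive rate stays roughly fixed by design. A minimal 1-D
| cell-averaging sketch (window sizes and target Pfa are arbitrary
| here):
|
|     import numpy as np
|
|     def ca_cfar(signal, num_train=16, num_guard=4, pfa=1e-3):
|         # threshold scale factor for the desired false alarm
|         # rate, assuming exponentially distributed noise power
|         alpha = num_train * (pfa ** (-1.0 / num_train) - 1.0)
|         half = num_train // 2 + num_guard
|         hits = np.zeros(len(signal), dtype=bool)
|         for i in range(half, len(signal) - half):
|             lead = signal[i - half : i - num_guard]
|             lag = signal[i + num_guard + 1 : i + half + 1]
|             noise = np.mean(np.concatenate([lead, lag]))
|             hits[i] = signal[i] > alpha * noise
|         return hits
|
| The point is that the false alarm behaviour is an explicit design
| parameter, not an emergent property of a training set.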
| mabbo wrote:
| > if the CV algorithm fails against these examples when
| humans don't, then the CV algorithm is too brittle and should
| not be used in the real world.
|
| This is the tricky bit.
|
| Night-time driving, bad weather, icy roads, bumper-to-bumper
| traffic: these are all situations in which some algorithms
| can outdo humans in terms of safety. Faster reactions, better
| vision (beyond what human eyes can see), and unlimited
| 'mental stamina' can make a big difference in safe driving.
|
| But then there will be the occasional situation in which the
| CV screws up, and there's an accident. Some of those are ones
| where many/most humans could have handled the situation
| better and avoided the accident.
|
| So how do we decide when the automated car is 'good enough'?
| Do we have to reach a point where in no situation could any
| human have done better? Must it be absolutely better than all
| humans, all the time? Because we may never reach that point.
|
| And all the while, we could be avoiding a lot more accidents
| (and deaths) from situations the AI could have handled.
| glitchc wrote:
| > Night-time driving, bad weather, icy roads, bumper-to-
| bumper traffic: these are all situations in which some
| algorithms can outdo humans in terms of safety. Faster
| reactions, better vision (beyond what human eyes can see),
| and unlimited 'mental stamina' can make a big difference in
| safe driving.
|
| To be clear we are talking about CV which relies on passive
| optical sensing in the visual spectrum through cameras, not
| radar or lidar or IR or multi-spectral sensors.
|
| Within this context, your statement is incorrect. A typical
| camera's dynamic range is orders of magnitude lower than
| the human visual dynamic range. Ergo a camera sees a lot
| less at night compared to a human and what it does see is a
| lot more noisy. Note that this is the input to the
| detection, tracking and classification stages, the output of
| which feeds into the control loop(s). It doesn't matter how
| good the control system is, it cannot avoid what the vision
| system cannot see.
| mnd999 wrote:
| That worked super well with DVD CSS right? Let's face it,
| people are going to print adversarial images on t-shirts.
| MikeHolman wrote:
| If the adversarial image is intended to cause car accidents
| or bodily harm in some way, then the people printing the
| t-shirts and the people wearing them are already breaking the
| law.
|
| And if they actually do hurt someone, I imagine they would be
| criminally liable.
| petermcneeley wrote:
| I am pretty sure that deliberately tricking an automated system
| into causing bodily harm is already covered by existing law.
| Think of all the automated systems that have existed before ML.
| rStar wrote:
| > 1. People who post these stickers should go to prison. 2.
| People who create and distribute these stickers knowing their
| purpose should go to prison. 3. Tesla should be civilly liable
| for cases where preventing such an incident was possible with
| known technology. 4. Roads should be modified over time to make
| it more difficult to do this attack.
|
| Translation: everyone else in the universe is responsible for
| solving my problem, and also I am not responsible for solving
| my problem, but i do want to profit from the current state of
| everything being broken all the time, and, i tell my family to
| keep their hands on the wheel
| voakbasda wrote:
| Your proposed laws do not carve out any exemption for research
| and experimentation, either with existing systems or potential
| new ones. This level of regulation would create an impossibly
| high barrier to entry and ensure that only the established
| players would remain in the marketplace. The last thing that I
| want to see is yet more regulatory capture, particularly in an
| industry that has yet to establish a reasonable baseline of
| success.
| mgraczyk wrote:
| None of the things I listed would affect research.
| Researchers shouldn't be posting these on public highways,
| and researchers shouldn't be distributing them with the
| intent to cause harm.
| mrfox321 wrote:
| It would once the govt gets involved. It's like saying that
| weapons research is just a free-for-all. The amount of
| regulation is _correlated_ with the potential harm to
| society.
|
| Look at drug research. There is plenty of red tape that
| hinders it. Although, here, the "harm to society" is
| defined by the nation state.
|
| However, I agree with your proposals in the top-level
| comment.
| Laremere wrote:
| IANAL, but actually putting adversarial image attacks on real
| roads is already illegal. If you modify a street sign, and as
| a result someone dies, that's a fairly easy case of
| involuntary manslaughter.
|
| At a minimum, you can't modify street signs. E.g. in Washington
| State:
|
|     RCW 47.36.130 Meddling with signs prohibited.
|     No person shall without lawful authority attempt to or in
|     fact _alter_, deface, injure, knock down, or remove any
|     official traffic control signal, _traffic device_ or
|     railroad sign or signal, or any inscription, shield, or
|     insignia thereon, or any other part thereof.
|
| (Underscore emphasis added.) And if you're thinking about not
| putting it on a sign, but putting it elsewhere visible to
| cars:
|
|     RCW 46.61.075.1 Display of unauthorized signs, signals, or
|     markings. No person shall place, maintain or display upon
|     or in view of any highway any unauthorized sign, signal,
|     _marking or device_ which purports to be or is an imitation
|     of or resembles an official traffic-control device or
|     railroad sign or signal, or _which attempts to direct the
|     movement of traffic_, or which hides from view or
|     interferes with the effectiveness of an official
|     traffic-control device or any railroad sign or signal.
|
| Where I'm unsure is producing these with the intent or
| knowledge that they will/could be used by someone to go do
| this. None of this makes using these for research and
| experimentation illegal.
| lodovic wrote:
| Looks like it will be illegal to wear that shirt with the
| Obama flower image if there is an AI face recognition
| system installed over the highway though.
| ReleaseCandidat wrote:
| > We try to make known poisons hard to make, we try to track
| people who could make them, and we try to make it hard to
| deliver poison.
|
| Actually no. We know that only some psychopaths would do that
| and so the risk is minimal.
|
| AI is currently simply not 'good enough' to be used in critical
| environments. The problem is that _any_ sticker or even dirt or
| snow or ... on any road sign can lead to misinterpretation; you
| can never prove that it's safe.
| mgraczyk wrote:
| Sometimes iceberg lettuce kills people (salmonella). We have
| safety regulations and inspections to mitigate that, but you
| can never prove that iceberg lettuce is safe.
| windows2020 wrote:
| Some people think that a self driving car needs to perform just
| well enough to meet some average metric at which point its
| shortcomings are considered justified. Would you drive with a
| human that could fail so spectacularly? At least they can
| rationalize their decision. My opinion is this: we're missing
| something big. Current weak AI strategies may be sufficient in
| some domains, but until the essence of consciousness can be
| defined, general AI (and AI I would trust my life with) is out of
| the question.
| ravi-delia wrote:
| It seems to me there's a difference between attacks that carefully
| craft an image that slips through the cracks, and an attack that
| basically exploits the fact that without context, it's hard to
| figure out what single item is important. If I took a picture of
| a conch shell on top of my keyboard and sent it to someone, no
| one would think I was just showing off my keyboard! They'd
| assume, correctly, that my desk was a mess and I didn't feel like
| finding a clear surface.
|
| That's not to say that either attack is less harmful than the
| other! If you train an image classifier to find bikers, it's not
| really wrong or right to say that a picture of a biker qualifies.
| But if a car stops lest it run over a painted bike on the road,
| that's obviously bad. The problem is that you aren't trying to
| recognize bikers, you're trying to avoid obstacles. We just don't
| train well for that.
| pixelgeek wrote:
| I don't think that the word 'train' should be used for these
| systems. We feed them reams of data and effectively cull the
| ones that don't work, but the critical problem is that _we_
| judge the effectiveness of an ML system and we actually do know
| what the ML system is supposed to be looking for.
|
| We feed a system a series of images of bikes and then select
| the ones that can pick out a bike but we don't know how the
| bike is being chosen. We know it is picking out bikes but we
| have no way to predict if the system is picking out bikes or
| picking out a series of contrasting colour and shadow shapes
| and could easily be thrown off by anything that contains the
| same sort of data.
| NavinF wrote:
| Thank you for an accurate ELI5 description of the human
| visual system. Dunno what this "ML" is, I assume it's some
| part of the brain?
|
| It's too bad you can't analyze brains like you can with
| neural networks. It's trivial to visualize filters and
| feature maps or to create heatmaps showing which pixels
| (shadow shapes?) in a specific image affect the
| classification output and why (contrasting color?).
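|
| For the curious, the pixel-attribution part really is a few
| lines; a vanilla gradient saliency map in PyTorch, with an
| untrained torchvision resnet18 standing in for a real classifier:
|
|     import torch
|     from torchvision.models import resnet18
|
|     model = resnet18().eval()  # stand-in; use a trained model
|
|     def saliency_map(model, image):  # image: (1, 3, H, W)
|         image = image.clone().requires_grad_(True)
|         score = model(image).max()  # top-class logit
|         score.backward()            # d(score)/d(pixels)
|         # per-pixel magnitude, max over colour channels
|         return image.grad.abs().max(dim=1)[0]
|
|     heat = saliency_map(model, torch.randn(1, 3, 224, 224))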
| josefx wrote:
| The attack also means you can't use a system based on it for
| content filtering unless you get it to reliably identify
| multiple objects in a picture. A picture of a conch shell is
| harmless, a picture of a conch shell and a beheaded person may
| not be.
| corey_moncure wrote:
| This is bad news for safety-critical computer vision systems like
| Tesla Vision.
| amelius wrote:
| Don't worry, they'll just "convince" the regulators to ignore
| these problems.
| pixelgeek wrote:
| Maybe put a sticker on the conference table that distracts
| them?
| dhosek wrote:
| Whenever these discussions come up, I often think of a time I was
| driving on a two-lane road in rural Ohio in the 90s and at one
| point the center stripe curved into the lane (presumably because
| the driver of the striping truck pulled over without turning off
| the striper) and I started to curve off the road thanks to
| subconscious interpretation of the cue. I caught myself before I
| drove into a corn field, but human vision systems are also
| susceptible to these sorts of problems.
| donkarma wrote:
| You caught yourself, didn't you?
| dontreact wrote:
| If you really wanted to crash cars by altering their visual
| input, why would you bother with all this complexity? Why not
| just actually swap the road sign?
|
| Why does the existence of these attacks change the threat
| landscape at all? If people are already not doing "dumb" attacks
| like just changing/removing road signs why would they start doing
| them?
|
| The risk of messing with road signs and throwing off autonomous
| vehicles really has less to do with adversarial image attacks and
| more to do with envisioning an impractically brittle system where
| the decision to stop is based purely on presence/absence of a
| stop sign and not on a system that has a more general sense of
| collision-avoidance and situational awareness (like humans do).
|
| Stepping back more generally, I have still never seen a case
| where the undetectability of adversarial attacks actually means
| there is a practical difference to security or safety. If you
| really think through the impact in the real world, usually the
| risk is already there: you can just change the input to the image
| and get bad results, it doesn't affect much that the image is
| imperceptibly changed. Because the whole point of using an
| automated vision system is usually that you want to avoid human
| eyes on the problem.
| goatlover wrote:
| > Why not just actually swap the road sign?
|
| Because you have to physically do it, as opposed to hacking
| from anywhere else on the planet.
|
| > not on a system that has a more general sense of collision-
| avoidance and situational awareness (like humans do).
|
| Are vision systems to that point yet when it comes to driving
| vehicles?
|
| > Because the whole point of using an automated vision system
| is usually that you want to avoid human eyes on the problem.
|
| And the point of hacking an automated system is that it's
| easier to do that remotely than to cause a human to crash
| locally.
| JoshuaDavid wrote:
| > Because you have to physically do it, as opposed to hacking
| from anywhere else on the planet.
|
| My impression is that the adversarial image attacks in
| question involve physically placing a sticker on something
| which will be in the view of self-driving cars -- it's not a
| remote exploit.
| whatever1 wrote:
| Is traditional computer vision less susceptible to these? Since
| the features are human-crafted, it seems to me that the risk
| would be much lower.
| ativzzz wrote:
| I would imagine it's more susceptible, because those features
| are probably easier to reverse engineer. Plus traditional CV is
| weaker in generalized scenarios and pretty easy to trick or
| throw off.
| [deleted]
| frazbin wrote:
| The really scary thing is that this could be used as an excuse to
| hide production ML models and even the tech used to generate
| them. Sounds like we can expect the state-of-the-art AI
| techniques to be jealously guarded eventually. I guess optimism
| on the ground is enough to have prevented that so far, but once
| the scales tip away from sharing and towards exploitation... well,
| we know it's largely a one-way process on the one-decade time
| scale. Is this the chilling effect that will bring us into the
| next AI winter?
| jowday wrote:
| > Sounds like we can expect the state-of-the-art AI techniques
| to be jealously guarded eventually.
|
| This isn't an eventuality, it's the current state of the
| industry.
| frazbin wrote:
| Hm is that really true? I thought that there was quite a lot
| of sharing from industry leaders at the research paper and
| dataset level, and that these could be used imitate
| production systems given some hacking. Kinda seemed like the
| majors were enjoying the benefits of the scrutiny afforded to
| public scientific research, while keeping their monopoly
| confined to the silicon/speed/throughput axis. Hence all the
| free AI software toolkits and also high priced specialty hot-
| off-the-wafer chips you'll never get.
| quirkot wrote:
| The key point I see in this is that, given the current ecosystem,
| attacks are systemic. Plus, given the nature of ML training and
| datasets, it's _expensive_ to bug-fix an attack, if it's even
| possible.
|
| This right here is the real underlying long term danger:
|
| > the most popular CV datasets are so embedded in development
| cycles around the world as to resemble software more than data;
| software that often hasn't been notably updated in years
| meiji163 wrote:
| Language models have the same defect: they are quite brittle and
| susceptible to black-box adversarial attacks (e.g.
| arxiv.org/abs/1907.11932).
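|
| A toy sketch of the word-substitution idea behind that paper:
| greedily swap in synonyms, keeping any swap that lowers the
| classifier's confidence in the original label. The tiny training
| set and synonym table below are made up for illustration; the
| real attack uses embedding-based synonym search plus semantic
| checks.
|
|     from sklearn.feature_extraction.text import TfidfVectorizer
|     from sklearn.linear_model import LogisticRegression
|     from sklearn.pipeline import make_pipeline
|
|     texts = ["great film, loved it", "wonderful and moving",
|              "terrible film, hated it", "boring and awful"]
|     labels = [1, 1, 0, 0]
|     clf = make_pipeline(TfidfVectorizer(),
|                         LogisticRegression()).fit(texts, labels)
|
|     synonyms = {"loved": ["enjoyed", "adored"],
|                 "great": ["fine", "decent"]}
|
|     def attack(text):
|         words = text.split()
|         label = clf.predict([text])[0]
|         conf = clf.predict_proba([text])[0][label]
|         for i, w in enumerate(words):
|             for s in synonyms.get(w, []):
|                 cand = words[:i] + [s] + words[i + 1:]
|                 c = clf.predict_proba([" ".join(cand)])[0][label]
|                 if c < conf:  # keep swaps that hurt confidence
|                     words, conf = cand, c
|         return " ".join(words)
|
|     print(attack("great film, loved it"))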
| pixelgeek wrote:
| Maybe this is a good example of why ML systems shouldn't be used?
| Ultimately we don't know how the networks that get created
| actually make decisions, so doesn't that make protecting them
| from attacks like this impossible?
| bckr wrote:
| I like the cnn-adversarial aesthetic. Psychedelic blobs (sans the
| swirly demon dog faces), flower paintings and vinyl stickers of
| organic figures everywhere!
___________________________________________________________________
(page generated 2021-11-29 23:01 UTC)