[HN Gopher] Nightshade: An offensive tool for artists against AI...
       ___________________________________________________________________
        
       Nightshade: An offensive tool for artists against AI art generators
        
       Author : ink404
       Score  : 182 points
       Date   : 2024-01-19 17:42 UTC (1 day ago)
        
 (HTM) web link (nightshade.cs.uchicago.edu)
 (TXT) w3m dump (nightshade.cs.uchicago.edu)
        
       | ink404 wrote:
       | Paper is here: https://arxiv.org/abs/2310.13828
        
       | KingOfCoders wrote:
       | Artists sitting in art school looking at other artists' pictures
       | to learn how to paint in different styles defend against AI
       | learning from their pictures how to paint in different styles.
        
         | visarga wrote:
         | AI is just a tool in someone's hand; there's a human who
         | intends something.
        
           | ta8645 wrote:
           | If that's true, then it should be fine for that human to
           | paint with the brush of his AI tool. Why should that human
           | artist be restricted in the types of tools he uses to create
           | his artwork?
        
             | thfuran wrote:
             | Should I be restricted in using copy paste as my tool for
             | creating art?
        
         | NoraCodes wrote:
         | This is a tired argument; whether or not the diffusion models
         | are "learning", they are a tool of capital to fuck over human
         | artists, and should be resisted for that reason alone.
        
           | persnickety wrote:
           | As a representative of a lot of things but hardly any capital
           | who uses diffusion models to get something I would otherwise
           | not pay a human artist for anyway, I testify that the models
           | are not exclusively what you describe them to be.
           | 
           | I do not support indiscriminate banning of anything and
           | everything that can potentially be used to fuck someone over.
        
             | NoraCodes wrote:
             | I did not say they were _exclusively_ that; I said they
             | _were_ that.
             | 
             | Once we as a society have implemented a good way for the
             | artists whose work powers these machines to survive, you
             | can feel good about using them. Until then, frankly, you're
             | doing something immoral by paying to use them.
        
               | akx wrote:
               | What if I run Stable Diffusion locally without paying
               | anyone anything? Is it less immoral?
        
               | NoraCodes wrote:
               | Marginally, yeah, since you're not supporting the
               | development of more capable labor-saving devices in this
               | category.
               | 
               | I'm still not a fan, though.
        
               | riversflow wrote:
               | How is empowering others to create not a moral good?
        
               | Filligree wrote:
               | Who pays to use generators? The open ones are way more
               | capable and interesting, generally.
        
               | wokwokwok wrote:
               | Is that really your concern?
               | 
               | Whether you pay for it?
               | 
               | Let's put it this way: paying for or not paying for
               | stolen goods. Does it make any difference?
               | 
               | Why is that remotely relevant?
               | 
               | You want to argue "are the goods stolen?" Sure. That's a
               | discussion we can have.
               | 
               | Did you pay for them or not? Who cares?
        
               | jsheard wrote:
               | Isn't high-quality open image generation almost entirely
               | dependent on Stability releasing their foundational
               | models for free, at great expense to them?
               | 
               | That's not something you'll be able to rely on long-term,
               | there won't always be a firehose of venture capital money
               | to subsidise that kind of charity.
        
               | Filligree wrote:
               | The cost of training them is going down, though. Given
               | the existence of models like Pixart, I don't think we'll
               | stay dependent on corporate charity for long.
        
               | Levitz wrote:
               | By this logic we ought to start lynching artists. Why
               | didn't they care about all of those who lost their jobs
               | making pigments, canvases, pencils, brushes, etc.?
        
               | hirsin wrote:
               | Artists pay those people and make their jobs needed. Same
               | as the person above claiming Duchamp didn't negotiate
               | with the ceramics makers - yes, they absolutely did and
               | do pay their suppliers. Artists aren't smash and grabbing
               | their local Blick.
               | 
               | AI pays no artist.
        
               | CaptainFever wrote:
               | Not digital artists, though.
        
           | SonicSoul wrote:
           | great comment!
           | 
           | imagine being a photographer that takes decades to perfect
           | their craft. sure another student can study and mimic your
           | style. but it's still different than some computer model
           | "ingesting" vast amount of photos and vomiting something
           | similar for $5.99 in aws cpu cost so that some prompt jockey
           | can call themselves an AI artist and make money off of other
           | people's talent.
           | 
           | i get that this is cynical and does not encompass all ai art,
           | but why not let computers develop their own style without
           | ingesting human art? that's when it would actually be AI art.
        
             | wincy wrote:
             | Like 99.9% of the art the common people care about is Darth
             | Vader and Taylor Swift and other pop culture stuff like
             | that.
             | 
             | These people literally don't care what your definition of
             | art is or how it's made; they just want
             | a lock screen wallpaper of themselves fighting against
             | Thanos on top of a volcano.
             | 
             | The argument of "what is art" has been an academic
             | conversation largely ignored by the people actually
             | consuming the art for hundreds of years. Photography was
             | just pop culture trash, comics were pop culture trash,
             | stick figure web comics were pop culture trash. Today's pop
             | culture trash is the "prompt jockey".
             | 
             | I make probably 5-10 pictures every day over the course of
             | maybe 20 minutes as jokes on Teams because we have Bing
             | Chat Enterprise. My coworkers seem to enjoy it. Nobody
             | cares that it's generated. I'm also not trying to be an
             | "artist" whatever that means. It just is, and it's fun. I
             | wasn't gonna hire an artist to draw me pictures to shitpost
             | to my coworkers. It's instead unlocked a new fun way to
             | communicate.
        
               | SonicSoul wrote:
               | not entirely sure what your point is, but i think you are
               | saying that art is just a commodity we use for cheap
               | entertainment so it's ok for computers to do the same?
               | 
               | in the context of what i was saying the definition of
               | what is art can be summed up as anything made by humans.
               | i have no problem when it's used in memes and being open
               | sourced etc.. the issue i have is when a human invests
               | real time into it and then it's taken and regurgitated
               | without their permission. do you see that distinction?
        
             | Levitz wrote:
             | Because that's not what happens, ever. You couldn't ask a
             | human to develop their own style of photography when they
             | don't know what a photograph even looks like.
        
             | bongodongobob wrote:
             | That's a funny argument because artists lost their shit
             | over photography too. Now anyone can make a portrait!
             | Photography will kill art!
             | 
             | Art is the biggest gate kept industry there is and I detest
             | artists who believe only they are the chosen one.
             | 
             | Art is human expression. We all have a right to create what
             | we want with whatever tools we want. They can adapt or be
             | left behind. No sympathy from me.
        
           | witherk wrote:
           | "Cameras are a tool of captial to fuck over human portrait
           | artists"
           | 
           | It's funny that these people use the langauge of communism,
           | but apparently see artwork as purley an economic activity.
        
             | NoraCodes wrote:
             | That's an intentional misinterpretation, I think. I mention
             | art as an economic activity because it's primarily
             | professional artists that are harmed by the widespread
             | adoption of this technology.
        
             | indigo0086 wrote:
             | They tried to use the labor theory early on by claiming,
             | "real art takes hard work and time as opposed to the
             | minuscule cpu hours computers use to make 'AI art'". The
             | worst thing AI brings to the table is amplifying these
             | types of sentiments to control industry in their favor,
             | where they would otherwise be unheard and relegated to
             | Instagram likes.
        
           | educaysean wrote:
           | As a human artist I don't feel the same as you, and I somehow
           | doubt that you care all that much about what we think
           | anyways. You already made up your mind about the tech, so
           | don't feel the need to protect us from "a tool of capital
           | [sic]" to fortify your argument.
        
             | NoraCodes wrote:
             | My opinion is based on my interactions with my friends who
             | are artists. I admit freely to caring less about what
             | people I don't know say, in the absence of additional
             | evidence.
        
             | tester457 wrote:
             | Among working human artists your opinion is in the
             | minority. Most professionals are not a fan of this.
        
               | bongodongobob wrote:
               | Yeah because their wage is inflated. Photographers were
               | mad about digital cameras too. Womp womp.
        
               | tester457 wrote:
               | Inflated is not an apt descriptor of artist wages; those
               | are known to be low.
        
               | bongodongobob wrote:
               | If you're independent selling paintings, sure. Designing
               | packaging or something commercial? 4 hours of work a week
               | for nearly 6 figures. I know a couple graphic designers
               | and they don't do shit for what they're paid.
        
               | MacsHeadroom wrote:
               | And horse wages (some oats) were low when the car was
               | invented. Yet they were still inflated. There used to be
               | more horses than humans in this country. Couldn't even
               | earn their keep when the Ford Model T came along.
        
             | big_whack wrote:
             | Do you make your living as an artist?
        
           | matheusmoreira wrote:
           | > they are a tool of capital to fuck over human artists
           | 
           | So are the copyright and intellectual property laws that
           | artists rely on. From my perspective, _you_ are the capital
           | and _I_ am the one being fucked. So are you ready to abolish
           | all that?
        
             | CaptainFever wrote:
             | Right. This new outrage is just the copyright owners
             | realising that their power is not safe. Where was the
             | outrage when self-checkouts happened?
        
               | matheusmoreira wrote:
               | Copyright owners indeed. That's what these artists are.
               | They're copyright owners. Monopolists. They _are_ the
               | capital. Capitalism is all about owning property.
               | Copyright is _intellectual_ property. Literally imaginary
               | property. Ownership of information, of bits, of
               | _numbers_. These artists are the literal epitome of
               | capitalism. They enjoy state granted monopolies that last
               | multiple human lifetimes. We'll be long dead before
               | their works enter the public domain. They _want_ it to be
               | this way. They _want_ eternal rent seeking for themselves
               | and their descendants. At least one artist has told me
               | exactly that in discussions here on HN. They think it's
               | fair.
               | 
               | They are the quintessential representation of capital.
               | And they come here to ask us to "resist" the other forms
               | of capital on principle.
               | 
               | I'm sorry but... No. I'm gonna resist them instead. It's
               | my sincere hope that this AI technology hammers in the
               | last nails on the coffin of copyright and intellectual
               | property as a whole. I want all the models to leak so
               | that it becomes literally impossible to get rid of this
               | technology no matter how much they hate it. I want it to
               | progress so that we can run it on our own machines, so
               | that it'll be so ubiquitous it can't be censored or
               | banned no matter how much they lobby for it.
        
               | NoraCodes wrote:
               | > It's my sincere hope that this AI technology hammers in
               | the last nails on the coffin of copyright and
               | intellectual property as a whole.
               | 
               | If it does, I will give you one thousand United States
               | dollars, and you can quote me on that whenever you like.
               | 
               | More likely, big companies will retain control as they
               | always have (via expensive lawyers), and individual
               | artists will get screwed.
        
         | Snow_Falls wrote:
         | These AIs are not people. They do not learn.
        
           | jfdbcv wrote:
           | Define learn.
        
             | dudeinjapan wrote:
             | Define people.
        
         | ozten wrote:
         | The memetic weapons humans unleashed on other humans at art
         | school to deter copying are brutal. Just wait until critique.
        
           | dist-epoch wrote:
           | "Sorry, this is not art, is AI generated trash."
        
         | bluejekyll wrote:
         | My issue with this line of argument is that it's
         | anthropomorphizing machines. It's fine to compare how humans do
         | a task with how a machine does a task, but in the end they are
         | very different from each other, organic vs hardware and
         | software logic.
         | 
         | First, you need to prove that generative AI works fundamentally
         | the same way as humans at the task of learning. Next you have
         | to prove that it recalls information in the same way as humans.
         | I don't think anyone would say these are things that we can
         | prove today. So the best we get are comments asserting that
         | the two are similar.
         | 
         | What this means is that these systems will fall into different
         | categories of law around copyright and fair use. What's clear
         | is that there are people who believe that they are harmed by
         | the use of their work in training these systems and it
         | reproducing that work in some manner later on (the degree to
         | which that single work or the corpus of their work influences
         | that final product is an interesting question). If your terms
         | of use/copyright/license say "you may not train on this data",
         | then should that be protected in law? If a system like
         | nightshade can effectively influence a training model enough to
         | make it clear that something protected was used in its
         | training, is that enough proof that the legal protections were
         | broken?
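
        (For readers unfamiliar with this class of tool, here is a
        minimal, hypothetical sketch of the general idea behind
        perturbation-based poisoning, not the paper's actual method:
        nudge an image within a small pixel budget so that a feature
        extractor maps it toward a different target concept. The
        `encoder` here is an assumed differentiable image encoder; all
        names are illustrative.)

            import torch

            def poison(image, target_features, encoder,
                       steps=100, eps=0.03, lr=0.01):
                """PGD-style perturbation: move the features of `image`
                toward `target_features` while keeping every pixel
                within `eps` of the original."""
                original = image.detach()
                delta = torch.zeros_like(original, requires_grad=True)
                opt = torch.optim.Adam([delta], lr=lr)
                for _ in range(steps):
                    opt.zero_grad()
                    # Distance between perturbed-image features and the
                    # target concept's features
                    loss = torch.nn.functional.mse_loss(
                        encoder(original + delta), target_features)
                    loss.backward()
                    opt.step()
                    with torch.no_grad():
                        delta.clamp_(-eps, eps)  # stay within the budget
                return (original + delta).clamp(0.0, 1.0).detach()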
        
           | thfuran wrote:
           | >First, you need to prove that generative AI works
           | fundamentally the same way as humans at the task of learning.
           | Next you have to prove that it recalls information in the
           | same way as humans.
           | 
           | No, you don't need to prove any of those things. They're
           | irrelevant. You'd need to prove that the AI is itself morally
           | (or, depending on the nature of the dispute, legally)
           | equivalent to a human and therefore deserving of (or entitled
           | to) the same rights and protections as a human. Since it is
           | pretty indisputably the case that software is not currently
           | legally equivalent to a human, you're stuck with the moral
           | argument that it ought to be, but I think we're very far from
           | a point where that position is warranted or likely to see
           | much support.
        
             | kelseyfrog wrote:
             | You don't even need to do that. Art is an act of
             | ontological framing.
             | 
             | Duchamp didn't need to negotiate with the ceramic makers
             | to make the Fountain into art.
        
             | stale2002 wrote:
             | > You'd need to prove that the AI is itself morally (or,
             | depending on the nature of the dispute, legally) equivalent
             | to a human and therefore deserving of
             | 
             | No you don't.
             | 
             | A human using a computer to make art doesn't automatically
             | lose their fair use rights as a human.
             | 
             | > indisputably the case that software is not currently
             | legally equivalent to a human
             | 
             | Fortunately it is the human who uses the computer who has
             | the legal rights to use computers in their existing process
             | of fair use.
             | 
             | Human brains or giving rights to computers has absolutely
             | nothing to do with the rights of a human to use a camera,
             | photoshop, or even use AI, on a computer.
        
             | sebzim4500 wrote:
             | Few people are claiming that the AI itself has the same
             | rights as a human. They are arguing that a human with an AI
             | has the same rights as a human who doesn't have an AI.
        
           | huytersd wrote:
           | Why do you have to prove that? There is no replication
           | (except in very rare cases), how someone draws a line should
           | not be copyrightable.
        
             | pfist wrote:
             | Therein lies the crux of the issue: AI is not "someone". We
             | need to approach this without anthropomorphizing the AI.
        
               | Almondsetat wrote:
               | You are right, AI is nothing but a tool akin to a pen or
               | a brush.
               | 
               | If you draw Mickey Mouse with a pencil and you publish
               | (and sell) the drawing who is getting the blame? Is the
               | pencil infringing the copyright? No, it's you.
               | 
               | Same with AI. There is nothing wrong with using
               | copyrighted works to train an algorithm, but if you
               | generate an image and it contains copyrighted materials
               | you are getting sued.
        
               | ufocia wrote:
               | But there is. You are arguably making unauthorized copies
               | to train.
        
               | Almondsetat wrote:
               | Unauthorized copies? If the images are published on the
               | internet, how is downloading them "unauthorized"?
        
               | __loam wrote:
               | Publicly available doesn't mean you have a license to do
               | whatever you like with the image. If I download an image
               | and re-upload it to my own art station or sell prints of
               | it, that is something I can physically do because the
               | image is public, but I'm absolutely violating copyright.
        
               | Almondsetat wrote:
               | That's not an unauthorized copy, it's unauthorized
               | distribution. By the same metric, me seeing the image and
               | copying it by hand is also an unauthorized copy (or
               | reproduction if you will).
        
               | xigoi wrote:
               | IANAL, but I'm pretty sure copying an image by hand is
               | copyright violation.
        
               | Almondsetat wrote:
               | So you cannot train your drawing skills by copying other
               | people's artworks?
        
               | xigoi wrote:
               | You can do it in private, but you can't distribute the
               | resulting image, let alone sell it.
        
               | Almondsetat wrote:
               | Then I don't really understand your original reply.
               | Simply copying a publicly available image doesn't
               | infringe anything (unless it was supposed to be
               | private/secret). Doing stuff with that image in private
               | still doesn't constitute infringement. Distribution does,
               | but that wasn't the topic at hand
        
               | xigoi wrote:
               | You can train a neural network in private too and nobody
               | will have a problem with that. The topic of discussion is
               | commercial AI.
        
               | huytersd wrote:
               | If you are viewing the image on your browser on a
               | website, you are making a local copy. That's not
               | unauthorized.
        
               | pixl97 wrote:
               | Companies aren't someone, yet in the US we seem to give
               | them the rights of someone.
        
               | ufocia wrote:
               | They are someone in the eyes of the law. They just have a
               | different set of rights.
        
           | echelon wrote:
           | We are machines. We just haven't evenly accepted it yet.
           | 
           | Our biology is mechanical, and lay people don't possess an
           | intuition about this. Unless you've studied molecular biology
           | and biochemistry, it's not something that you can easily
           | grasp.
           | 
           | Our inventions are mechanical, too, and they're reaching
           | increasing levels of sophistication. At some point we'll meet
           | in the middle.
        
             | DennisAleynikov wrote:
             | 100% this. Labor and all these other concepts are outdated
             | ways to interpret reality
             | 
             | Humans are themselves mechanical so at the end of the day
             | none of these issues actually matter
        
           | Phiwise_ wrote:
           | The first perceptron was explicitly designed to be a
           | trainable visual pattern encoder. Zero assumptions about
           | potential feelings of the ghost in the machine need to be
           | made to conclude the program is probably doing what humans
           | studying art _say they assume_ is happening in their head
           | when you show both of them a series of previous artists'
           | works. This argument is such a tired misdirection.
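
        (For context on the perceptron reference above, a minimal
        self-contained sketch of Rosenblatt's classic update rule; the
        toy data is made up for illustration.)

            import numpy as np

            # Toy linearly separable data: label is the sign of x0 + x1
            X = np.array([[2.0, 1.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, -1.0]])
            y = np.array([1, 1, -1, -1])

            w, b = np.zeros(2), 0.0
            for _ in range(10):                # a few passes over the data
                for xi, yi in zip(X, y):
                    if yi * (w @ xi + b) <= 0: # misclassified example
                        w += yi * xi           # Rosenblatt's update rule
                        b += yi

            print(w, b)  # weights of a separating hyperplane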
        
           | michaelmrose wrote:
           | What we actually need to prove is whether such technology is
           | a net benefit to society; all else is essentially hand waving.
           | There is no natural right to poorly named intellectual
           | property and even if there was such a matter would never be
           | decided based on the outcome of a philosophical argument
           | because we don't decide anything that way.
        
             | withinboredom wrote:
             | > such technology is a net benefit to society; all else is
             | essentially hand waving
             | 
             | Some might have said this about cars ... yet, here we are.
             | Cars are definitely the opposite, except for longer-
             | distance travel.
        
             | tqi wrote:
             | How do you measure "benefit", and what does "net" actually
             | mean?
        
           | usrbinbash wrote:
           | > that it's anthropomorphizing machines.
           | 
           | No, it's not. It's merely pointing out the similarity between
           | the process of training artists (by ingesting publicly
           | available works) and ML models (which ingest publicly
           | available works).
           | 
           | > First, you need to prove that generative AI works
           | fundamentally the same way as humans at the task of learning.
           | 
           | Given that there is no comprehensive model for how humans
           | actually learn things, that would be an unfeasible
           | requirement.
        
             | __loam wrote:
             | What a reductive way to describe learning art. The
             | similarities are merely surface level.
             | 
             | > Given that there is no comprehensive model for how humans
             | actually learn things, that would be an unfeasible
             | requirement.
             | 
             | That is precisely why we should not be making this
             | comparison.
        
           | jwells89 wrote:
           | The ways these ML models and humans operate are indeed quite
           | different.
           | 
           | Humans work by abstracting concepts in what they see, even
           | when looking at the work of others. Even individuals with
           | photographic memories mentally abstract things like lighting,
           | body kinetics, musculature, color theory, etc and produce new
           | work based on those abstractions rather than directly copying
           | original work (unless the artist is intentionally
           | plagiarizing). As a result, all new works produced by humans
           | will have a certain degree of originality to them, regardless
           | of influences due to differences in perception, mental
           | abstraction processes, and life experiences among other
           | factors. Furthermore, humans can produce art without any
           | external instruction or input... give a 5 year old that's
           | never been exposed to art and hasn't been shown how to make
           | art a box of crayons and it's a matter of time before they
           | start drawing.
           | 
           | ML models are closer to highly advanced collage makers that
           | take known images and blend them together in a way that's
           | convincing at first glance, which is why it's not uncommon to
           | see elements lifted directly from training data in the images
           | they produce. They do not abstract the same way and by
           | definition cannot produce anything that's not a blend of
           | training data. Give them no data and they cannot produce
           | anything.
           | 
           | It's absolutely erroneous to compare them to humans, and I
           | believe it will continue to be so until ML models evolve into
           | something closer to AGI which can e.g. produce stylized work
           | with nothing but photographic input that it's gathered in a
           | robot body and artistic experimentation.
        
             | shlubbert wrote:
             | Beautifully put. I wish this nuance was more widely
             | understood in the current AI debate.
        
             | l33tman wrote:
             | You're wrong in your concept of how AI/ML works. Even
             | trivial 1980s neural networks generalize; it's the whole
             | point of AI/ML or you'd just have a lookup-table (or, as
             | you put it, something that copies and pastes images
             | together).
             | 
             | I've seen "infographics" spread by anti-AI people (or just
             | attention-seekers) on Twitter that try to "explain" that
             | AI image generators blend together existing images, which
             | is simply not true.
             | 
             | It is however the case that different AI models (and the
             | brain) generalize a bit differently. That is probably the
             | case between different humans too, not least with, for
             | example, those with photographic memory, autism, etc., as
             | you say.
             | 
             | What you call creativity in humans is just noise in
             | combination with a boatload of exposure to multi-modal
             | training data. Both aspects are already in the modern
             | diffusion models. I would however ascribe a big edge in
             | humans to what you normally call "the creative process"
             | which can be much richer, like a process where you figure
             | out what you lack to produce a work, go out and learn
             | something new and specific, talk with your peers, listen to
             | more noise.. stuff like that seems (currently) more
             | difficult for AIs, though I guess plugins that do more
             | iterative stuff like ChatGPT's new plugins will appear in
             | media generators as well eventually.
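
        (A toy illustration of the generalization point above, in one
        dimension rather than image space: a lookup table can only
        return what it stored, while even a trivially fitted model
        produces sensible output for inputs it never saw.)

            import numpy as np

            # Training data: y = 2x + 1, observed at a few points only
            xs = np.array([0.0, 1.0, 2.0, 3.0])
            ys = 2 * xs + 1

            # A lookup table memorizes exact pairs...
            table = dict(zip(xs.tolist(), ys.tolist()))
            print(table.get(1.5))           # None: nothing stored for 1.5

            # ...while a fitted model generalizes to unseen inputs
            slope, intercept = np.polyfit(xs, ys, deg=1)
            print(slope * 1.5 + intercept)  # ~4.0, though 1.5 was never seen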
        
               | jwells89 wrote:
               | ML generalization and human abstraction are very
               | different beasts.
               | 
               | For example, a human artist would have an understanding
               | of how line weight factors into stylization and _why_ it
               | looks the way it does and be able to accurately apply
               | these concepts to drawings of things they've never seen
               | in that style (or even seen at all, if it's of something
               | imaginary).
               | 
               | The best an ML model can do is mimic examples of line art
               | in the given style within its training data, the product
               | of which will contain errors due to not understanding the
               | underlying principles, especially if you ask it to draw
               | something it hasn't seen in the style you're asking for.
               | This is why generative AI needs such vast volumes of data
               | to work well; it's going to falter in cases not well
               | covered by the data. It's not learning concepts, only
               | statistical probabilities.
        
           | stale2002 wrote:
           | > What this means is that these systems will fall into
           | different categories of law around copyright and fair use.
           | 
           | No they won't.
           | 
           | A human who uses a computer as a tool (under all the previous
           | qualifications of fair use) is still a human doing something
           | in fair use.
           | 
           | Adding a computer to the workflow of a human doesn't make
           | fair use disappear.
           | 
           | A human can use photoshop, in fair use. They can use a
           | camera. They can use all sorts of machines.
           | 
           | The fact that photoshop is not the same as a human brain is
           | simply a completely unrelated non sequitur. Same applies to
           | AI.
           | 
           | And all the legal protections that are offered to someone who
           | uses a regular computer, to use photoshop in fair use, are
           | also extended to someone who uses AI in fair use.
        
             | __loam wrote:
             | Yet the copyright office has already stated that getting an
             | AI to create an image for you does not have sufficient
             | human authorship to be copyrighted. There's already a legal
             | distinction here between this "tool" and tools like
             | photoshop and cameras.
             | 
             | It's also presumptive to assume that AI tools have these
             | fair use protections when none of this has actually been
             | decided in a court of law yet. There's still several
             | unsettled cases here.
        
           | z7 wrote:
           | >My issue with this line of argument is that it's
           | anthropomorphizing machines. It's fine to compare how humans
           | do a task with how a machine does a task, but in the end they
           | are very different from each other, organic vs hardware and
           | software logic.
           | 
           | There's an entire branch of philosophy that calls these
           | assumptions into question:
           | 
           | https://en.wikipedia.org/wiki/Posthumanism
           | 
           | https://en.wikipedia.org/wiki/Antihumanism
           | 
           | >Martin Heidegger viewed humanism as a metaphysical
           | philosophy that ascribes to humanity a universal essence and
           | privileges it above all other forms of existence. For
           | Heidegger, humanism takes consciousness as the paradigm of
           | philosophy, leading it to a subjectivism and idealism that
           | must be avoided.
           | 
           | >Processes of technological and non-technological
           | posthumanization both tend to result in a partial "de-
           | anthropocentrization" of human society, as its circle of
           | membership is expanded to include other types of entities and
           | the position of human beings is decentered. A common theme of
           | posthumanist study is the way in which processes of
           | posthumanization challenge or blur simple binaries, such as
           | those of "human versus non-human", "natural versus
           | artificial", "alive versus non-alive", and "biological versus
           | mechanical".
        
         | deadbeeves wrote:
         | And? Even if neural networks learn the same way humans do, this
         | is not an argument against taking measures against one's art
         | being used as training data, since there are different
         | implications if a human learns to paint the same way as another
         | human vs. if an AI learns to paint the same way as a human. If
         | the two were _exactly_ indistinguishable in their effects no
         | one would care about AIs, not even researchers.
        
           | MichaelZuo wrote:
           | But the 'different implications' only exist in the heads of
           | said artists?
           | 
           | EDIT: removed a part.
        
             | deadbeeves wrote:
             | I'm not sure what you mean when you say different
             | implications existing is subjective, since they clearly
             | aren't, but regardless of who has more say in general
             | terms, the author of a work can decide how to publish it,
             | and no one has more say than them on that subject.
        
               | MichaelZuo wrote:
               | What are you saying?
               | 
               | Of course it's subjective, e.g. 3 million years ago there
               | were no 'different implications' whatsoever, of any kind,
               | because there were no humans around to have thoughts like
               | that.
        
               | deadbeeves wrote:
               | I'm using "implication" as a synonym of "effect". If a
               | human learns to imitate your style, that human can make
               | at most a handful of drawings in a single day. The only
               | way for the rate of output to increase is for more humans
               | to learn to imitate it. If an AI learns to imitate your
               | style, the AI can be trivially copied to any number of
               | computers and the maximum output rate is unbounded.
               | Whether this is good or bad is subjective, but this
               | difference in consequences is objective, and someone
               | could be entirely justified in seeking to impede it.
        
               | MichaelZuo wrote:
               | Ah okay, I get your meaning now, I'll edit my original
               | comment too.
               | 
               | Though we already have an established precedent in-
               | between, that of Photoshop allowing artists to be,
               | easily, 10x faster than the best painters previously.
               | 
               | i.e. Right now 'AI' artistry could be considered a turbo-
               | Photoshop.
        
               | deadbeeves wrote:
               | Tool improvements only apply a constant factor to the
               | effectiveness of learning. Creating a generative model
               | applies an _unbounded_ factor to the effectiveness of
               | learning because, as I said, the only limit is how much
               | computing resources are available to humanity. If a
               | single person was able to copy themselves at practically
               | no cost and the copy retained all the knowledge of the
               | original then the two situations would be equivalent, but
               | that's impossible. Having n people with the same skill
               | multiplies the cost of learning by n. Having n instances
               | of an AI with the same skill multiplies the cost of
               | learning by 1.
        
               | MichaelZuo wrote:
               | Right, but the 'unbounded factor' is irrelevant because
               | the output will quickly trend into random noise.
               | 
               | And only the most interesting top few million art pieces
               | will actually attract the attention of any concrete
               | individual.
               | 
               | For a current example, there's already billions of man-
               | hours worth of AI spam writing, indexed by Google, that
               | is likely not actually read by even a single person on
               | Earth.
        
               | deadbeeves wrote:
               | Whether it's irrelevant is a matter of opinion. The fact
               | remains that a machine being able to copy the artistic
               | style of a human makes it so that anyone can produce
               | output in the style of that human by just feeding the
               | machine electricity. That inherently devalues the style
               | the artist has painstakingly developed. If someone wants
               | a piece of art in that artist's style they don't have to
               | go to that artist; they just need to ask the machine for
               | what they want. Is the machine's output of low
               | quality? Maybe. Will there be people for whom that low
               | quality still makes them want to seek out the human? No
               | doubt. It doesn't change the fact that the style is still
               | devalued, nor that there exist artists who would want to
               | prevent that.
        
               | MichaelZuo wrote:
               | > Whether it's irrelevant is a matter of opinion.
               | 
               | It's just as much of an opinion, or as 'objective', as
               | your prior statements.
               | 
               | You're going to have to face up to the fact that just
               | saying something is 'objective' doesn't necessarily mean
               | all 8 billion people will agree that it is so.
        
           | withinboredom wrote:
           | And yet, some people don't even want their artwork studied in
           | schools. Even if you argue that an AI is "human enough," the
           | artists should still have the right to refuse their art being
           | studied.
        
             | deeviant wrote:
             | > the artists should still have the right to refuse their
             | art being studied.
             | 
             | Why? That certainly isn't a right spelled out in either
             | patents or copyrights, both of which are supposed to
             | _support_ the development of arts and technology, not
             | hinder it.
             | 
             | If I discover a new mathematical formula, musical scale, or
             | whatnot, should I be able to prevent others from learning
             | about it?
        
               | withinboredom wrote:
               | It's called a license and you can make it almost
               | anything. It doesn't even need to be spelled out, it can
               | be verbal: "no, I won't let you have it"
               | 
               | It's yours. That's literally what copyright is there to
               | enforce.
        
               | CaptainFever wrote:
               | License doesn't matter if fair use applies.
               | 
               | > Fair use allows reproduction and other uses of
               | copyrighted works - without requiring permission from the
               | copyright owner - under certain conditions. In many
               | cases, you can use copyrighted materials for purposes
               | such as criticism, comment, news reporting, teaching
               | (including multiple copies for classroom use),
               | scholarship or research.
               | 
               | Reminder that you can't own ideas, no matter what the law
               | says.
               | 
               | NOTE: This comment is copyrighted and provided to you
               | under license only. By reading this comment, you agree to
               | give me 5 billion dollars.
        
               | withinboredom wrote:
               | I'd love to see you try to enforce that license because
               | it would only prove my point. You'd have to sue me; then
               | I would point to the terms of service of this platform
               | and point out that by using it, you have no license here.
               | 
               | Fair use though, only applies as a legal defense because
               | someone asserts you stole their work. Then ONLY the court
               | decides whether or not you used it under fair use. You
               | don't get to make that decision; you just get to decide
               | whether to try and use it as a defense.
               | 
               | Even if you actually did unfairly use copyrighted works,
               | you would be stupid not to use that as a defense. Because
               | maybe somebody on the jury agrees with you...
        
             | dehrmann wrote:
             | > And yet, some people don't even want their artwork
             | studied in schools.
             | 
             | You can either make it for yourself and keep it for
             | yourself or you can put it out into the world for all to
             | see, criticize, study, imitate, and admire.
        
               | Barrin92 wrote:
               | that's not how licensing works, be it art, software or
               | just about anything else. We have some pretty well
               | defined and differentiated rules what you can and cannot
               | do, in particular commercially or in public, with someone
               | else's work. If you go and study a work of fiction in a
               | college class, unless that material is in the public
               | domain, you're gonna have to pay for your copy, you want
               | to broadcast a movie in public, you're going to have to
               | pay the rightsholder.
        
               | stale2002 wrote:
               | > If you go and study a work of fiction in a college
               | class, unless that material is in the public domain,
               | you're gonna have to pay for your copy,
               | 
               | No, you won't!
               | 
               | It is only someone who distributes copies who can get in
               | trouble.
               | 
               | If instead of that you as an individual decide to study a
               | piece of art or fiction, and you do not distribute copies
               | of it to anyone, this is completely legal and you don't
               | have to pay anyone for it.
               | 
               | In addition to that, fair use protections apply
               | regardless of what the creative works creator wants.
        
               | withinboredom wrote:
               | Making a profit off variations of someone's work isn't
               | covered under fair use.
        
               | stale2002 wrote:
               | Gotcha.
               | 
               | I wasn't talking about someone creating and selling
               | copies of someone else's work, fortunately.
               | 
               | So my point stands, and you're in complete agreement
               | with me that people are allowed to learn from other
               | people's works. If someone wants to learn from someone
               | else's work, that is completely legal no matter the
               | licensing terms.
               | 
               | Instead, it is only distributing copies that is not
               | allowed.
        
               | withinboredom wrote:
               | AI isn't a human. It isn't "learning"; instead, it's
               | encoding data so that it may be reproduced in combination
               | with other things it has encoded.
               | 
               | If I paint a painting in the style of Monet, then I would
               | give that person attribution by stating that. Monet may
               | have never painted my artwork, but it's still based on
               | that person's work. If I paint anything, I can usually
               | point to everything that inspired me to do so. AI can't
               | do that (yet) and thus has no idea what it is doing. It
               | is a printer that prints random parts of people's works
               | with no attribution. And finally, it is distributing them
               | to its owner's customers.
               | 
               | I actually hope that true AI comes to fruition at some
               | point; when that happens I would be arguing the exact
               | opposite. We don't have that yet, so this is just
               | literally printing variations of other people's work.
               | Don't believe me? Try running an AI without training it
               | on other people's work!
        
               | dehrmann wrote:
               | Right, but there's also fair use, and every use I
               | mentioned could plausibly fall under that.
        
               | withinboredom wrote:
               | There's no such thing as fair use until you get to court
               | (as a legal defense). Then, the court decides whether it
               | is fair use or not. They may or may not agree with you.
               | Only a court can determine what constitutes fair use (at
               | least in the US).
               | 
               | So, if you are doing something and asserting "fair use,"
               | you are literally asking for someone to challenge you and
               | prove it is not fair use.
        
               | stale2002 wrote:
               | > There's no such thing as fair use until you get to
               | court (as a legal defense)
               | 
               | Well the point is that it wouldn't go to court, as it
               | would be completely legal.
               | 
               | So yes, if nobody sues you, then you are completely in
               | the clear and aren't in trouble.
               | 
               | That's what people mean by fair use. They mean that nobody
               | is going to sue you, because the other person would lose
               | the lawsuit, therefore your actions are safe and legal.
               | 
               | > you are literally asking for someone to challenge you
               | and prove it is not fair use.
               | 
               | No, instead of that, the most likely circumstance is that
               | nobody sues you, and you aren't in trouble at all, and
               | therefore you did nothing wrong and are safe.
        
               | withinboredom wrote:
               | > as it would be completely legal.
               | 
               | Theft is never legal; that's why you can be sued. "Fair
               | use" is a legal defense in the theft of copyrighted
               | works.
               | 
               | > They mean that nobody is going to sue you, because the
               | other person would lose the lawsuit
               | 
               | That hasn't stopped people from suing anyone ever. If
               | they want to sue you, they'll sue you.
               | 
               | > and therefore you did nothing wrong and are safe.
               | 
               | If you steal a pen from a store, it's still theft even if
               | nobody catches you, or cares.
        
               | sgift wrote:
               | > Theft is never legal; that's why you can be sued.
               | 
               | That's incorrect. You can be sued for anything. If it
               | _is_ theft or something else or nothing is decided by the
               | courts.
        
               | withinboredom wrote:
               | That is entirely my point. It can only be decided by the
               | courts. This being a civil matter, it has to be brought
               | up by a lawsuit. Thus, you have to be sued and it has to
               | be decided by the courts.
        
             | deadbeeves wrote:
             | >the artists should still have the right to refuse their
             | art being studies.
             | 
             | No, that right doesn't exist. If you put your work of art
             | out there for people to see, people will see it and learn
             | from it, and be inspired by it. It's unavoidable. How could
             | it possibly work otherwise?
             | 
             | Artist A: You studied my work to produce yours, even when I
             | asked people not to do that!
             | 
             | Artist B: Prove it.
             | 
             | What kind of evidence or argument could Artist A possibly
             | provide to show that Artist B did what they're accusing
             | them of, without being privy to the internal state of their
             | mind. You're not talking about plagiarism; that's
             | comparatively easy to prove. You're asking about merely
             | _studying_ the work.
        
               | withinboredom wrote:
               | The right to refuse others the use of my things exists
               | everywhere, universally. Good people usually ask before
               | they use something of someone else's, and the person
               | being asked can say "no." How hard is that to understand?
               | You might
               | believe they don't have the right to say "no," but they
               | can say whatever they want.
               | 
               | Example:
               | 
               | If you studied my (we will assume "unique") work and used
               | it without my permission, then let us say I sue you. At
               | that point, you would claim "fair use," and the courts
               | would decide whether it was fair use (ask everyone who
               | used a mouse and got sued for it in the last ~100 years).
               | The court would either agree that you used my works under
               | "fair use" ... or not. It would be up to how you
               | presented it to the court, and humans would analyze your
               | intent and decide.
               | 
               | OR, I might agree it is fair use and not sue you.
               | However, that weakens my standing on my copyright, so
               | it's better for me to sue you (assuming I have the
               | resources to do so when it is clearly fair use).
        
               | deadbeeves wrote:
               | >You might believe they don't have the right to say "no,"
               | but they can say whatever they want.
               | 
               | You have a right to say anything you want. Others aren't
               | obligated to do as you say just because you say it.
               | 
               | >If you studied my (we will assume "unique") techniques
               | and used them without my permission, then let us say I
               | sue you. At that point, you would claim "fair use,"
               | 
               | On what grounds would you sue me? You think my defense
               | would be "fair use", so you must think my copying your
               | style constitutes copyright infringement, and so you'd
               | sue me for that. Well, no, I would not say "fair use",
               | I'd say "artistic style is not copyrightable; copyright
               | pertains to works, not to styles". There's even
               | jurisprudence backing me up in the US. Apple tried to sue
               | Microsoft for copying the look-and-feel of their OS, and
               | it was ruled to be non-copyrightable. Even if I was so
               | good that I was able to trick anyone into thinking that my
               | painting of a dog carrying a tennis ball in his mouth was
               | your work, if you've never painted anything like that you
               | would have no grounds to sue me for copyright
               | infringement.
               | 
               | Now, usually in the artistic world it's considered poor
               | manners to outright _copy_ another artist's style, but
               | if we're talking about rights and law, I'm sorry to say
               | you're just wrong. And if we're talking about merely
               | _studying_ someone's work without copying it, that's not
               | even frowned upon. Like I said, it's unavoidable. I don't
               | know where you got this idea that anyone has the right to
               | or is even capable of preventing this (beyond simply
               | never showing it to anyone).
        
               | withinboredom wrote:
               | > Others aren't obligated to do as you say just because
               | you say it.
               | 
               | Yeah, that's exactly why you'd get sued for copyright
               | theft.
               | 
               | > you must think my copying your style constitutes
               | copyright infringement
               | 
               | Autocorrect screwed that wording up. I've fixed it.
        
               | deadbeeves wrote:
               | I'm not sure what you've changed, but I'll reiterate: my
               | copying your style is not fair use. Fair use applies to
               | copyrighted things. A style cannot be copyrighted, so if
               | you tried to sue me for infringing upon the copyright of
               | your artistic style, your case would be dismissed. It
               | would be as invalid as you trying to sue me for
               | distributing illegal copies of someone else's painting.
               | Legally you have as much ownership of your artistic style
               | as of that other person's painting.
        
         | dorkwood wrote:
         | Is it strange to you that cars and pedestrians are both subject
         | to different rules? They both utilise friction and gravity to
         | travel along the ground. I'm curious if you see a difference
         | between them, and if you could describe what it is.
        
           | ta8645 wrote:
           | Both cars and pedestrians can be videotaped in public,
           | without asking for their explicit permission. That video can
           | be manipulated by a computer to produce an artwork that is
           | then put on public display. No compensation need be offered
           | to anyone.
        
             | estebank wrote:
             | > Both cars and pedestrians can be videotaped in public,
             | without asking for their explicit permission.
             | 
             | This is not universally true. Legislation is different from
             | place to place.
        
               | ta8645 wrote:
               | Hardly the point. The same can be said for road rules
               | between vehicles and pedestrians; for example, in major
               | Indian cities it's pretty much a free-for-all.
        
               | estebank wrote:
               | My point is that in a lot of places in the US you can
               | point a video camera at the street and record. In
               | Germany, you can't. The law in some locales makes a
               | distinction between manual recording (writing or drawing
               | your surroundings) and mechanized recording
               | (photographing or filming). Scalability of an action is
               | taken into consideration in deciding whether something
               | is ok to do or not.
        
               | ta8645 wrote:
               | That has no bearing at all on the issue at hand. The same
               | can be said of the original argument that started this
               | thread.
        
               | thfuran wrote:
               | You think scalability isn't relevant to the difference
               | between a person doing something by hand or with software
               | operating on the entire internet?
        
         | krapp wrote:
         | Human beings and LLMs are essentially equivalent, and their
         | processes of "learning" are essentially equivalent, yet human
         | artists are not affected by tools like Nightshade. Odd.
        
           | danielbln wrote:
           | As another poster pointed out, modern models like BLIP or
           | GPT4V aren't affected by this either.
        
           | pixl97 wrote:
           | Humans don't fall for optical illusions? News to me.
        
             | krapp wrote:
             | Nightshade isn't an optical illusion. It doesn't operate
             | within LLMs in any way equivalent to the way optical
             | illusions do in humans. A human who sees an optical illusion
             | does not have their perception or ability to function
             | affected in the same way as an LLM.
             | 
             | I realize you and people like yourself have a lot invested
             | in furthering the narrative that LLMs are equivalent to
             | human beings in every relevant sense - especially any
             | _legal_ sense which might prevent you from profiting on the
             | copyrighted material that LLMs are trained on.
             | 
             | But it's a specious argument. LLMs are not human. They
             | don't think. They don't see. They don't reason. They aren't
             | entities possessed of self-awareness. The fact that one
             | cannot "poison" human cognition the way one can "poison"
             | LLM models is only one of numerous examples of that fact.
             | LLMs are software, not people.
             | 
             | And no, it isn't incumbent upon me to prove any of these
             | claims. It is incumbent upon you and those like yourself
             | who pollute every thread about AI with such claims to back
             | them up with even ordinary proof.
             | 
             | But you won't, because you can't. Such proof doesn't exist,
             | whereas evidence to the contrary exists in abundance.
             | Perhaps you're simply naive and assume that because LLMs
             | are capable of generating what appears to be intelligent
             | output, it must therefore be intelligent. Or perhaps you're
             | one of the many people who have a vested interest in
             | ensuring that LLMs are granted the same legal status as
             | human beings, so that you can make stronger copyright
             | claims on their output.
             | 
             | It doesn't matter. Every single time this argument comes up
             | people will chime in with the same tedious false
             | equivalencies. But it isn't clever, it's just getting
             | annoying.
        
         | adr1an wrote:
         | It's not the learning per se that's concerning here, but the
         | ease of production (e.g. generating thousands of images in a
         | day).
        
         | analog31 wrote:
         | This seems more like looking at other artists and being totally
         | incapacitated by some little touch in the painting you're
         | looking at.
        
           | adhesive_wombat wrote:
           | The Nam-shub of Hockney?
        
         | bakugo wrote:
         | A human artist cannot look at and memorize 100000 pictures in a
         | day, and cannot paint 100000 pictures in a day.
         | 
         | I am SO tired of this non-argument
        
           | ufocia wrote:
           | A human artist does not need to look at and memorize 100000
           | pictures in any span of time, period. Current AI does.
           | 
           | We needed huge amounts of human labor to fund and build
           | Versailles. I'm sure many died as a result. Now we have
           | machines that save many of those lives and labor.
           | 
           | What's your non-argument?
        
             | MattRix wrote:
             | The argument is that the humans producing the work should
             | be _willing_ participants. I don't think that's too much to
             | ask for.
        
         | amelius wrote:
         | You might as well compare a Xerox copier to a human.
        
         | schmichael wrote:
         | This is not one artist inspiring another. This is all artists
         | providing their work for free to immensely capitalized
         | corporations for the corporations' sole profit.
         | 
         | People keep making metaphors as if the AI is an entity in this
         | transaction: it's not! The AI is only the mechanism by which
         | corporations launder IP.
        
           | thfuran wrote:
           | >This is all artists providing their work for free to
           | immensely capitalized corporations for the corporations' sole
           | profit.
           | 
           | No, the artists would be within their rights to do that if
           | they chose to. This is corporations taking all the work of
           | all artists regardless of the terms under which it was
           | provided.
        
         | itronitron wrote:
         | Art schools don't teach people how to paint in different
         | artistic styles. They teach materials and technique.
        
         | jurynulifcation wrote:
         | Artists learning to innovate a trade defend their trade from
         | incursion by bloodthirsty, no-value-adding vampiric middle men
         | attempting to cut them out of the loop.
        
         | __loam wrote:
         | Human learning =/= machine learning
         | 
         | Most artists are happy to see more people getting into art and
         | joining the community. More artists means the skills of this
         | culture get passed down to the next generation.
         | 
         | Obviously a billion dollar corporation using their work to
         | create an industrial tool designed to displace them is very
         | different.
        
         | bradleyishungry wrote:
         | This is such a nothing argument. Yes, new artists are inspired
         | by other artists and sometimes make art similar to others, but
         | a huge part of learning and doing art is to find a unique
         | style.
         | 
         | But that's not even the important part of the argument. A lot
         | of artists work for commission, and are hired for their style.
         | If an AI can be trained on their images without explicit
         | permission, they lose work because a user can just prompt "in
         | the style of".
         | 
         | There's no real great solution, outside of law, because the
         | possibility of doing that is already here. But I've seen this
         | argument so much and it's just low effort.
        
       | gumballindie wrote:
       | This is excellent. We need more tools like this, for text content
       | as well. For software we need GPL 4 with ML restrictions (make
       | your model open source or not at all). Potentially even DRM for
       | text.
        
       | gmerc wrote:
       | Doing the work to increase OpenAI's moat
        
         | Drakim wrote:
         | Obviously AIs can just train on images that aren't poisoned.
        
           | jsheard wrote:
           | Is it possible to reliably detect whether an image is
           | poisoned? If not then it achieves the goal of punishing
           | entities which indiscriminately harvest data.
        
             | Drakim wrote:
             | It's roughly in the same spot as reliably detecting if you
             | have permission to use the image for your data training set
             | in the first place.
             | 
             | If it doesn't matter, then neither does the poisoning
             | matter.
        
             | Kalium wrote:
             | You can use older images, collected from before the
             | "poisoning" software was released. Then you don't have to
             | detect anything.
             | 
             | This, of course, assumes that "poisoning" actually works.
             | Glaze and Nightshade and similar are very much akin to the
             | various documented attacks on facial recognition systems.
             | The attack does not exploit some fundamental flaw in how
             | the systems work, but specific characteristics in a given
             | implementation and version.
             | 
             | This matters because it means that later versions and
             | models will inevitably not have the same vulnerabilities.
             | The result is that any given defensive transformation
             | should be expected to be only narrowly effective.
        
             | dist-epoch wrote:
             | AIs have learned much tougher things. You just need a
             | small data set of poisoned images to learn its features.
        
       | Quanttek wrote:
       | This is fantastic. If companies want to create AI models, they
       | should license the content they use for the training data. As
       | long as there are not sufficient legal protections and the
       | EU/Congress do not act, tools like these can serve as a stopgap
       | and maybe help increase pressure on policymakers
        
         | Kuinox wrote:
         | > they should license the content they use for the training
         | data
         | 
         | You mean like OpenAI and Adobe?
         | 
         | Only the free and open source models didn't license any
         | content for the training data.
        
           | galleywest200 wrote:
           | Adobe is training off of images stored in their cloud
           | systems, per their Terms of Service.
           | 
           | OpenAI has provided no such documentation or legal
           | guarantees, and it is still quite possible they scraped all
           | sorts of copyrighted materials.
        
             | devmor wrote:
             | There is, in fact, an extreme amount of circumstantial
             | evidence that they intentionally and knowingly violated
             | copyright en masse. It's been quite a popular subject in
             | tech news the past couple weeks.
        
             | Kuinox wrote:
             | > OpenAI has provided no such documentation
             | 
             | OpenAI and Shutterstocks publicly announced their
             | collaboration, Shutterstocks sells AI generated images,
             | generated with OpenAI models.
        
             | luma wrote:
             | Google scrapes copyrighted material every day and then
             | presents that material to users in the form of excerpts,
             | images, and entire book pages. This has been ruled OK by
             | the courts. Scraping copyrighted information is not illegal
             | or we couldn't have search engines.
        
               | kevingadd wrote:
               | Google is not presently selling "we trained an AI on
               | people's art without permission, and you can type their
               | name in along with a prompt to generate a knockoff of
               | their art, and we charge you money for this". So it's not
               | really a 1:1 comparison, since there are companies
               | selling the thing I described right now.
        
               | luma wrote:
               | That pretty clearly would fall under transformative work.
               | It is not illegal for a human to paint a painting in the
               | style of, say, Banksy, and then sell the resulting
               | painting.
        
               | kevingadd wrote:
               | Humans and AI are not the same thing, legally or
               | physically. The law does not currently grant AI rights of
               | any kind.
        
               | luma wrote:
               | If a human isn't violating the law when doing that thing,
               | then how is the machine violating the law when it cannot
               | even hold copyright itself?
        
               | kevingadd wrote:
               | I'm not sure how to explain this any clearer: Humans and
               | machines are legally distinct. Machines don't have the
               | rights that humans have.
        
               | Ukv wrote:
               | Fair Use is the relevant protection and is not specific
               | to manual creation. Traditional algorithms (e.g: the
               | snippets, caching, and thumbnailing done by search
               | engines) are already covered by it.
        
               | estebank wrote:
               | In some locales sitting on the street writing down a list
               | of people coming and going is legal, but leaving a camera
               | pointed at the street isn't. Legislation like that makes
               | a distinction between an action by a person (which has
               | bounds on scalability) and mechanized actions (that do
               | not).
        
               | ufocia wrote:
               | What's not prohibited is allowed, at least in the US.
        
               | ufocia wrote:
               | Scraping is only legal if it's temporary and
               | transformational. If Google started selling the scraped
               | images it would be a different story.
        
           | KeplerBoy wrote:
           | There is a small difference between any and all. OpenAI
           | certainly didn't licence all of the images they used for
           | training.
        
           | jazzyjackson wrote:
           | source for OpenAI paying anyone a dime? don't you think that
           | would set a precedent that everyone else deserves their cut?
        
         | popohauer wrote:
         | It's going to be interesting to see how the lawsuits against
         | OpenAI by content creators play out. If the courts rule that
         | AI generated content is a derivative work of all the content it
         | was trained on it could really flip the entire gen AI movement
         | on its head.
        
           | luma wrote:
            | If it were a derivative work[1] (and sufficiently
            | transformational) then it would be allowed under current
            | copyright law, and might not be the slam dunk ruling you
            | were hoping for.
           | 
           | [1] https://en.wikipedia.org/wiki/Derivative_work
        
             | kevingadd wrote:
             | "sufficiently transformational" is carrying a lot of water
             | here. At minimum it would cloud the issue and might expose
             | anyone using AI to lawsuits where they'd potentially have
             | to defend each generated image.
        
               | ufocia wrote:
               | Sufficiently transformational only applies to
               | copyrightability, but AI works are not copyrightable
               | under current US law, so it's a non-issue.
        
             | popohauer wrote:
             | Oh, interesting, I didn't realize that's how it worked.
             | Thanks for the additional context around this. Guess it's
             | not as upending as I thought it could be.
        
             | ufocia wrote:
             | Not if it is AI generated. So far only humans can be
             | original enough to warrant copyrights, at least in the US.
             | 
             | BTW, the right to prepare derivative works belongs to the
             | copyright holder of the reference work.
             | 
             | I doubt that many AI works are in fact derivative works.
             | Sure, some bear enough similarity, but the vast majority
             | likely doesn't.
        
       | Kuinox wrote:
       | > More specifically, we assume the attacker:
       | 
       | > * can inject a small number of poison data (image/text pairs)
       | to the model's training dataset
       | 
       | I think those are bad assumptions; labelling is more and more
       | done by some labelling AI.
        
       | popohauer wrote:
       | I'm glad to see tools like Nightshade starting to pop up to
       | protect the real life creativity of artists. I like AI art, but I
       | do feel conflicted about its potential long term effects towards
       | a society that no longer values authentic creativity.
        
         | Minor49er wrote:
         | Is the existence of the AI tool not itself a product of
         | authentic creativity? Does eliminating barriers to image
         | generation not facilitate authentic creativity?
        
           | 23B1 wrote:
           | No, it facilitates commoditization. Art - real art - is
           | fundamentally a human-to-human transaction. Once everyone can
           | fire perfectly-rendered perfectly-unique pieces of 'art' at
           | each other, it'll just become like the internet is today:
           | filled with extremely low-value noise.
           | 
           | Enjoy the short term novelty while you can.
        
             | fulladder wrote:
             | This is the right prediction. Once machines can generate
             | visual art, people will simply stop valuing it. We may see
             | increased interest in other forms of art, e.g., live
             | performance art like theater. It's hard to predict exactly
             | how it'll play out, but once something becomes cheap to
             | produce and widely available, it loses its luster for
             | connoisseurs and then gradually loses its luster for
             | everybody else too.
        
       | eddd-ddde wrote:
       | Isn't this just teaching the models how to better understand
       | pictures as humans do? As long as you feed them content that
       | looks good to a human, wouldn't they improve in creating such
       | content?
        
       | k__ wrote:
       | How long will this work?
        
         | kevingadd wrote:
         | It's an arms race the bigger players will win, and it
         | undermines the quality of the images. But it feels natural that
         | artists would want to do _something_ since they don't feel
         | like anyone else is protecting them right now.
        
       | devmor wrote:
       | Baffling to see anyone argue against this technology when it is
       | a non-issue for any model trained only on data you have
       | permission to use.
        
         | krapp wrote:
         | The reason people are arguing against this technology is that
         | no one is using them in the way you describe. They actually
         | wouldn't even be economically viable in that case.
        
           | devmor wrote:
           | If it is not economically viable for you to be ethical, then
           | you do not deserve economic success.
           | 
           | Anyone arguing against this technology following the line of
           | reasoning you present is operating against the good of
           | society. Especially if their only motive is economic
           | viability.
        
             | krapp wrote:
             | I feel like you read my comment and interpreted it in
             | exactly the opposite way it was intended because I agree
             | with you, and you're making the same point I was trying to
             | make.
        
         | Ukv wrote:
         | I think people 100% have the right to use this on their images,
         | but:
         | 
         | > simply acquiring only training data you have permission to
         | use
         | 
         | Currently it's generally infeasible to obtain licenses at the
         | required scale.
         | 
         | When attempting to develop a model that could describe photos
         | for visually impaired users, I even tried to reach out to
         | obtain a license from Getty. They repeatedly told me that they
         | don't license images for machine learning[0].
         | 
         | I think it's easy to say "well too bad, it doesn't deserve to
         | exist" if you're just thinking about DALL-E 3, but there's a
         | huge number of positive and far less-controversial applications
         | of machine learning that benefit from web-scale pretraining and
         | foundation models - spam filtering, tumour segmentation, voice
         | transcription, language translation, defect detection, etc.
         | 
         | [0]: https://i.imgur.com/iER0BE2.png
        
       | ultimoo wrote:
       | Would it have been that hard to include a sample photo and how
       | it looks with the Nightshade filter side by side, in a 3-page
       | document describing how it would look in great detail?
        
       | jamesu wrote:
       | Long-term I think the real problem for artists will be
       | corporations generating their own high quality targeted datasets
       | from a cheap labor pool, outcompeting them by a landslide.
        
         | ufocia wrote:
         | It will democratize art.
        
           | 23B1 wrote:
           | then it won't be art anymore, it'll just be mountains of shit
           | 
           | sorta like what the laptop did for writing
        
         | jdietrich wrote:
         | In the short-to-medium term, we're seeing huge improvements in
         | the data efficiency of generative models. We haven't really
         | started to see self-training in diffusion models, which could
         | improve data efficiency by orders of magnitude. Current models
         | are good at generalisation and are getting better at an
         | incredible pace, so any efforts to limit the progress of AI by
         | restricting access to training data are a speedbump rather
         | than a roadblock.
        
       | msp26 wrote:
       | >Like Glaze, Nightshade is computed as a multi-objective
       | optimization that minimizes visible changes to the original
       | image.
       | 
       | The changes are still clearly visible.
        
         | kevingadd wrote:
         | Yeah, I've seen multiple artists complain about how glazing
         | reduces image quality. It's very noticeable. That seems like an
         | unavoidable problem given how AI is trained on images right
         | now.
        
       | dist-epoch wrote:
       | Remember when the music industry tried to use technology to stop
       | music pirating?
       | 
       | This will work about as well...
       | 
       | Oh, I forgot, fighting music pirating was considered an evil
       | thing to do on HN. "Pirating is not stealing, it's copyright
       | infringement", right? Unlike training neural nets on internet
       | content which of course is "stealing".
        
         | kevingadd wrote:
         | FWIW, yours is the only use of the word "steal" in this
         | comment thread.
         | 
         | Many people would in fact argue that training AI on people's
         | art without permission is copyright infringement, since the
         | thing it (according to detractors) does is infringe copyright
         | by generating knockoffs of people's work.
         | 
         | You will see some people use the term "stealing" but they're
         | usually referring to how these AIs are sold/operated by for-
         | profit companies that want to make money off artists' work
         | without compensating them. I think it's not unreasonable to
         | call that "stealing" even if the legal definition doesn't
         | necessarily fit 100%.
         | 
         | The music industry is also not really a very good comparison
         | point for independent artists... there is no Big Art equivalent
         | that has a stranglehold on the legislature and judiciary like
         | the RIAA/MPAA do.
        
         | snakeyjake wrote:
         | A more apt comparison is sampling.
         | 
         | AI is sampling others' works.
         | 
         | Musicians can and do sample. They also obtain clearance for
         | commercial works, pay royalties if required, AND credit the
         | samples if required.
         | 
         | AI "art" does none of that.
        
           | Minor49er wrote:
           | Musicians overwhelmingly do not even attempt to clear
           | samples. This also isn't a great comparison since samples are
           | taken directly out of the audio, not turned into a part of a
           | pattern used to generate new sounds like what AI generators
           | do with images
        
             | snakeyjake wrote:
             | Commercial musicians do not?
             | 
             | You sure about that?
             | 
             | Entire legal firm empires have been built on the licensing,
             | negotiations, and fees that make up the industry.
             | 
             | I ain't talking about some dude on YouTube or Soundcloud.
             | Few people care about some rando on Soundcloud. Those moles
             | aren't big enough to whack. Vanilla Ice and MC Hammer were.
             | OpenAI is as well.
             | 
             | There's even a company that specializes in sample
             | clearance: https://sampleclearance.com
             | 
             | More info: https://www.soundonsound.com/sound-
             | advice/sample-clearance
             | 
             | Also:
             | 
             | >not turned into a part of a pattern used to generate new
             | sounds like what AI generators do with images
             | 
             | This is demonstrably false. Multiple individuals have
             | repeatedly been able to extract original images from AI
             | generators.
             | 
             | Here's one-- Extracting Training Data from Diffusion Models
             | https://arxiv.org/abs/2301.13188
             | 
             | Text, too: https://arxiv.org/abs/2311.17035
        
         | xigoi wrote:
         | The difference is that "pirating" is mostly done by individuals
         | for private use, whereas training is mostly done by
         | megacorporations looking to make more money.
        
       | 542458 wrote:
       | This seems to introduce levels of artifacts that many artists
       | would find unacceptable:
       | https://twitter.com/sini4ka111/status/1748378223291912567
       | 
       | The rumblings I'm hearing are that this a) barely works with
       | last-gen training processes b) does not work at all with more
       | modern training processes (GPT-4V, LLaVA, even BLIP2 labelling
       | [1]) and c) would not be especially challenging to mitigate
       | against even should it become more effective and popular. The
       | Authors' previous work, Glaze, also does not seem to be very
       | effective despite dramatic proclamations to the contrary, so I
       | think this might be a case of overhyping an academically
       | interesting but real-world-impractical result.
       | 
       | [1]: Courtesy of /u/b3sn0w on Reddit: https://imgur.com/cI7RLAq
       | https://imgur.com/eqe3Dyn https://imgur.com/1BMASL4
        
         | brucethemoose2 wrote:
         | Yeah. At worst a simple img2img diffusion step would mitigate
         | this, but just eyeballing the examples, traditional denoisers
         | would probably do the job?
         | 
         | Denoising is probably a good preprocessing step anyway.
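         | 
         | E.g. a quick non-local-means prefilter sketch with OpenCV
         | (filenames and filter strengths are made up, and this is
         | untested against Nightshade specifically):
         | 
         |     import cv2
         | 
         |     img = cv2.imread('scraped_image.png')
         |     # filter strength / window sizes are illustrative;
         |     # tune against actual poisoned samples
         |     clean = cv2.fastNlMeansDenoisingColored(
         |         img, None, 7, 7, 7, 21)
         |     cv2.imwrite('prefiltered.png', clean)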
        
         | gedy wrote:
         | Maybe it's more about "protecting" images that artists want to
         | publicly share to advertise work, but it's not appropriate for
         | final digital media, etc.
        
           | sesm wrote:
           | In short, anti-AI watermark.
        
         | pimlottc wrote:
         | I can't really see any difference in those images on the
         | Twitter example when viewing it on mobile
        
           | pxc wrote:
           | I don't have great vision, but me neither. They're
           | indistinguishable to me (likewise on mobile).
        
           | milsorgen wrote:
            | It took me a minute too, but you can see some blocky
            | artifacting by the elbow and a few spots elsewhere, like the
            | curtain in the upper left.
        
           | Keyframe wrote:
           | look at the green drapes to the right, or any large uniform
           | colored space. It looks similar to bad JPEG artifacts.
        
           | 0xcde4c3db wrote:
            | I didn't see it immediately either, but there's a _ton_ of
            | added noise. The most noticeable bit for me was near the
            | standing person's bent elbow, but there's a lot more that
            | becomes obvious when flipping back and forth between browser
            | tabs instead of swiping on Twitter.
        
           | vhcr wrote:
            | The animation when you change images makes it harder to see
            | the difference. I opened the three images each in its own
            | tab, and the differences are more apparent when you switch
            | between them instantly.
        
             | dontupvoteme wrote:
             | One of the few times a 'blink comparator' feature in image
             | viewers would be useful!
        
           | charcircuit wrote:
           | The gradient on the bat has blocks in it instead of being
           | smooth.
        
           | josefx wrote:
           | Something similar to jpeg artifacts on any surface with a
           | normally smooth color gradient, in some cases rather
           | significant.
        
         | GaryNumanVevo wrote:
          | The artifacts are a non-issue. Images with Nightshade are
          | intended to be silently scrapped and to avoid human
          | filtering.
        
           | minimaxir wrote:
            | The artifacts are very much an issue for artists who don't
            | want their images damaged for the mere possibility of them
            | not being trained on by AI.
           | 
           | It's a bad tradeoff.
        
             | GaryNumanVevo wrote:
              | Nightshaded images aren't intended for portfolios. They're
              | meant to be uploaded en masse and scraped later.
        
               | AJ007 wrote:
               | To where? A place no one sees them and they aren't
               | scraped?
        
               | filleduchaos wrote:
               | I think the point is that they're akin to a watermark.
               | 
               | Even before the current AI boom, plenty of artists have
                | wanted to _showcase_ their work / prove that it exists
               | without necessarily making the highest quality original
               | file public.
        
               | Diti wrote:
               | Most serious artists I know (at least in my community)
               | release their high-quality images on Patreon or similar.
        
               | pgeorgi wrote:
               | For example in accounts on image sites that are exposed
               | to suspected scrapers but not to others. Scrapers will
               | still see the real data, but they'll also run into stuff
               | designed to mix up the training process.
        
           | the8472 wrote:
           | do you mean scrapped or scraped?
        
             | GaryNumanVevo wrote:
             | scraped
        
       | TenJack wrote:
       | Wonder if the AI companies are already so far ahead that they can
       | use their AI to detect and avoid any poisoning?
        
       | alentred wrote:
       | With this "solution" it looks like the world of art enters the
       | cat-and-mouse game the ad blockers were playing for the last
       | decade or two.
        
         | isodev wrote:
         | I just tested it with Azure AI image classification and it
         | worked - so this cat is yet to adapt to the mouse's latest
         | idea.
         | 
         | I still feel it is absolutely wrong to roam around the internet
         | and scrape images (without consent) in order to power one's
         | cash cow AI. I hope more methods to protect artworks (including
         | audio and other formats) become more accessible.
        
         | KTibow wrote:
         | I might be missing something because I don't know much about
         | the architecture of either Nightshade or AI art generators, but
         | I wonder if you could try to have a GAN-like architecture (an
         | extra model trying to trick the model) for the part of the
         | generator that labels images to build resistance to Nightshade-
         | like filters.
        
           | the8472 wrote:
           | It doesn't even have to be a full GAN, you only need to train
           | the discriminator side to filter out the data. Clean
           | reference images + Nightshade would be the generator side.
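            | 
            | A minimal sketch of that discriminator (PyTorch; the
            | data layout, model choice and hyperparameters are all
            | assumptions):
            | 
            |     import torch
            |     import torch.nn as nn
            |     from torchvision import (datasets, models,
            |                              transforms)
            | 
            |     # data/clean and data/poisoned hold the two
            |     # classes; "poisoned" is just the clean images
            |     # run through Nightshade
            |     tf = transforms.Compose([
            |         transforms.Resize((224, 224)),
            |         transforms.ToTensor()])
            |     ds = datasets.ImageFolder('data', transform=tf)
            |     loader = torch.utils.data.DataLoader(
            |         ds, batch_size=32, shuffle=True)
            | 
            |     net = models.resnet18(weights='IMAGENET1K_V1')
            |     net.fc = nn.Linear(net.fc.in_features, 2)
            |     opt = torch.optim.Adam(net.parameters(), lr=1e-4)
            |     loss_fn = nn.CrossEntropyLoss()
            | 
            |     # standard binary classification loop: flag
            |     # poisoned samples before they enter a training set
            |     for epoch in range(5):
            |         for x, y in loader:
            |             opt.zero_grad()
            |             loss = loss_fn(net(x), y)
            |             loss.backward()
            |             opt.step()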
        
       | ukuina wrote:
       | Won't a simple downsample->upsample be the antidote?
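       | 
       | E.g. something like this Pillow sketch (filenames made up;
       | whether it actually strips the perturbation is questioned
       | below):
       | 
       |     from PIL import Image
       | 
       |     img = Image.open('poisoned.png')
       |     w, h = img.size
       |     # halve the resolution, then scale back up
       |     small = img.resize((w // 2, h // 2), Image.LANCZOS)
       |     back = small.resize((w, h), Image.LANCZOS)
       |     back.save('resampled.png')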
        
         | wizzwizz4 wrote:
         | How do you train your upsampler? (Also: why are you seeking to
         | provide an "antidote"?)
        
           | spookie wrote:
           | Why would you train one?
        
           | MrNeon wrote:
           | >why are you seeking to provide an "antidote"
           | 
           | To train a model on the data.
        
             | krapp wrote:
             | Get permission to use the data.
        
               | MrNeon wrote:
               | Got all the permission I need when it was put on a
               | publicly accessible server.
        
               | wizzwizz4 wrote:
               | That's not really how consent works.
               | 
               | I hope this is a special exception you've made, rather
               | than your general approach towards interacting with your
               | fellows.
        
               | MrNeon wrote:
               | That is how consent works.
        
               | xigoi wrote:
               | That's not how copyright works.
        
               | MrNeon wrote:
               | Tell me where it says training a model is infringing on
               | copyright.
        
               | xigoi wrote:
               | How is creating a derivative of someone's work and
               | selling it not copyright infringement?
        
               | MrNeon wrote:
               | Who said anything about creating a derivative? Surely you
               | don't mean to say that any image created with a model
               | trained on copyrighted data counts as a derivative of it.
               | Edit: Or worse, that the model itself is derivative,
               | something so different from an image must count as
                | transformative work!
               | 
               | Also who said anything about selling?
        
               | xigoi wrote:
               | The model itself is a derivative. And it's not really
               | that transformative, it's basically the input data
               | compressed with highly lossy compression.
               | 
               | > Also who said anything about selling?
               | 
               | All the corporations that are offering AI as a paid
               | service?
        
           | klyrs wrote:
           | > why are you seeking to provide an "antidote"
           | 
           | I think it's worthwhile for such discussion to happen in the
           | open. If the tool can be defeated through simple means, it's
           | better for everybody to know that, right?
        
             | wizzwizz4 wrote:
             | It would be better for _artists_ to know that. But Hacker
             | News is not a forum of visual artists: it 's a forum of
             | hackers, salaried programmers, and venture capitalists.
             | Telling the bad guys about vulnerabilities isn't
             | responsible disclosure.
             | 
             | Causing car crashes isn't hard (https://xkcd.com/1958/).
             | That doesn't mean Car Crash(tm) International(r)'s
             | decision-makers know how to do it: they probably don't even
             | know what considerations go into traffic engineering, or
             | how anyone can just buy road paint from that shop over
             | there.
             | 
             | It's everybody's responsibility to keep Car Crash(tm)
             | International(r) from existing; but failing that, it's
             | everybody's responsibility to not tell them how to cause
             | car crashes.
        
               | MrNeon wrote:
                | The tears of artists and copyright evangelists are so
                | sweet.
        
           | ukuina wrote:
           | I apologize. I was trying to respond to inflammatory language
           | ("poison") with similarly hyperbolic terms, and I should know
           | better than to do that.
           | 
           | Let me rephrase: Would AI-powered upscaling/downscaling (not
           | a simple deterministic mathematical scaling) not defeat this
           | at a conceptual level?
        
         | jdiff wrote:
         | No, it's resistant to transformation. Rotation, cropping,
         | scaling, the image remains poisonous. The only antidote known
         | currently is active artist cooperation.
        
           | CaptainFever wrote:
           | Or Img2Img.
        
       | xg15 wrote:
       | I wonder how this tool works if it's actually model independent.
       | My understanding so far was that in principle each possible model
       | has _some_ set of pathological inputs for which the
       | classification will be different than what a user sees - but that
       | this set is basically different for each model. So did they
       | actually manage to build a "universal" poison? If yes, how?
        
       | peter_d_sherman wrote:
       | To protect an individual's image property rights from image-
       | generating AIs -- wouldn't it be simpler for the IETF (or other
       | standards-producing group) to simply create an
       | 
       |  _AI image exclusion standard_
       | 
       | , similar to _" robots.txt"_ -- which would tell an AI data-
       | gathering web crawler that a given image or set of images -- was
       | off-limits for use as data?
       | 
       | https://en.wikipedia.org/wiki/Robots.txt
       | 
       | https://www.ietf.org/
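       | 
       | Some AI crawlers already honor robots.txt voluntarily -- e.g.
       | OpenAI's GPTBot and Google's "Google-Extended" token -- so a
       | hypothetical exclusion (the /art/ path is just an example)
       | could look much the same:
       | 
       |     User-agent: GPTBot
       |     Disallow: /
       | 
       |     User-agent: Google-Extended
       |     Disallow: /art/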
        
         | potatolicious wrote:
         | Entities training models have no incentive to follow such
         | metadata. If we accept the premise that "more input -> better
         | models" then there's every reason to ignore non-legally-binding
         | metadata requests.
         | 
         | Robots.txt survived because the use of it to gatekeep valuable
         | goodies was never widespread. Most sites _want_ to be indexed,
         | most URLs excluded by the robots file are not of interest to
         | the search engine anyway, and use of robots to prevent crawling
         | actually interesting pages is marginal.
         | 
         | If there was ever genuine uptake in using robots to gatekeep
            | the _really good stuff_, search engines would've stopped
            | respecting it pretty much immediately - it isn't legally
         | binding after all.
        
           | peter_d_sherman wrote:
           | >Entities training models have no incentive to follow such
           | metadata. If we accept the premise that "more input -> better
           | models" then there's every reason to ignore non-legally-
           | binding metadata requests.
           | 
           | Name two entities that were asked to stop using a given
            | individual's images that failed to stop using them after the
           | stop request was issued.
           | 
           | >Robots.txt survived because the use of it to gatekeep
           | valuable goodies was never widespread. Most sites want to be
           | indexed, most URLs excluded by the robots file are not of
           | interest to the search engine anyway, and use of robots to
           | prevent crawling actually interesting pages is marginal.
           | 
            | Robots.txt survived because it was a "digital signpost", a
            | "digital sign" -- sort of like the way you might put a
           | "Private Property -- No Trespassing" sign in your yard.
           | 
           | Most moral/ethical/lawful people -- will obey that sign.
           | 
           | Some might not.
           | 
            | But those that might not -- probably constitute about a
           | 0.000001% minority of the population, whereas the majority
           | that do -- probably constitute about 99.99999% of the
           | population.
           | 
           | "Robots.txt" is a sign -- much like a road sign is.
           | 
           | People can obey them -- or they can ignore them -- but they
           | can ignore them only at their own peril!
           | 
            | It's a sign which provides a hint as to the right thing to
            | do in a certain set of circumstances -- which is what the
            | _Law_ is; which is what the majority of _Laws_ are.
           | 
           | People can obey them -- or they can choose to ignore them --
           | but _only at their own peril!_
           | 
           | Most will choose to obey them. Most will choose to "take the
           | hint", proverbially speaking!
           | 
           | A few might not -- but that doesn't mean the majority won't!
           | 
           | >If there was ever genuine uptake in using robots to gatekeep
           | the really good stuff search engines would've stopped
           | respecting it pretty much immediately - it isn't legally
           | binding after all.
           | 
           | Again, _name two entities that were asked to stop using a
            | given individual's images that failed to stop using them
           | after the stop request was issued._
        
         | xg15 wrote:
         | And then what? The scrapers themselves already happily ignore
         | copyright, they won't be inclined to obey a no-ai.txt. So
         | someone would have to enforce the standard. Currently I see no
          | organisation that would be willing to do this or even just
         | technologically able - as even just detecting such scrapers is
         | an extremely hard task.
         | 
         | Nevertheless, I hope that at some not-so-far point in the
         | future there will be more legal guidance about this kind of
         | stuff, i.e. it will be made clear that scraping violates
         | copyright. This still won't solve the problem of detectability
          | but it would at least increase the risk for scrapers, _should_
         | they be caught.
        
           | peter_d_sherman wrote:
           | >The scrapers themselves already happily ignore copyright,
           | they won't be inclined to obey a no-ai.txt.
           | 
           | Name two entities that were asked to stop using a given
            | individual's images that failed to stop using them after the
           | stop request was issued.
           | 
           | >Currently I see no organisation who would be willing to do
           | this or even just technologically able - as even just
           | detecting such scrapers is an extremely hard task.
           | 
            | // Part of an image web scraper for AI image generator
            | // ingestion, in pseudocode:
            | 
            |     if (fileExists("no-ai.txt")) {
            |         // Abort image scraping for this site --
            |         // move on to the next site
            |     } else {
            |         // Continue image scraping for this site
            |     }
           | 
           | See? Nice and simple!
           | 
           | Also -- let me ask you this -- what happens to the
           | intellectual property (or just plain property) rights of
           | Images on the web _after_ the author dies? Or say, 50 years
           | (or whatever the legal copyright timeout is) after the author
           | dies?
           | 
           | Legal grey area perhaps?
           | 
           | Also -- what about Images that exist in other legal
           | jurisdictions -- i.e., other countries?
           | 
           | How do we know what set of laws are to apply to a given
           | image?
           | 
           | ?
           | 
           | Point is: If you're going to endorse and/or construct a legal
           | framework (and have it be binding -- keep in mind you're
           | going to have to traverse the legal jurisdictions of many
           | countries, _many countries_!) -- you might as well consider
           | such issues.
           | 
           | Also -- at least in the United States, we have Juries that
           | can override any Law (Separation of Powers) -- that is, that
           | which is considered "legally binding" -- may not be quite so
           | "legally binding" if/when properly explained to a proper jury
           | in light of extenuating (or just plain other) circumstances!
           | 
           | So kindly think of these issues prior to making all-
            | encompassing proposals as to what you think should be "legally
           | binding" or not.
           | 
           | I comprehend that you are just trying to solve a problem; I
           | comprehend and empathize; but the problem might be a bit
           | greater than you think, and there might be one if not
            | several unexplored partial/better (since no one solution,
            | legal or otherwise, will be all-encompassing) solutions --
            | because the problem is so large in scope -- but all of these
            | issues must be considered in parallel -- or errors, present
            | or future, will occur...
        
             | xg15 wrote:
             | > _Part of Image Web Scraper For AI Image Generator
             | ingestion psuedocode:..._
             | 
             | Yes, and who is supposed to run that code?
             | 
             | > _Name two entities that were asked to stop using a given
              | individual's images that failed to stop using them after
             | the stop request was issued._
             | 
              | Github? OpenAI?[1] Stable Diffusion?[2] LAION?[3] Why do
              | you think there are currently multiple high-profile
              | lawsuits ongoing about exactly that topic?
             | 
             | Besides, that's not how things work. Training a foundation
             | model takes months and currently costs a fortune in
             | hardware and power - and once the model is trained, there
              | is, as of now, no way to remove individual images from the
              | model without retraining. So in practical terms it's
             | impossible to remove an image if it has already been
             | trained on.
             | 
             | So the better question would be, name two entities who have
             | ignored an artist's request to not include their image when
             | they encountered it the first time. It's still a trick
             | question though because the point is that scraping happens
             | in private - we can't know which images were scraped
             | without access to the training data. The one indication
             | that it was probably scraped is if a model manages to
             | reproduce it verbatim - which is the basis for some of the
             | above lawsuits.
             | 
             | [1] https://www.theverge.com/2022/11/8/23446821/microsoft-
             | openai...
             | 
             | [2] https://www.theverge.com/2023/2/6/23587393/ai-art-
             | copyright-...
             | 
             | [3] https://www.heise.de/hintergrund/Stock-photographer-
             | sues-AI-...
        
       | GaggiX wrote:
       | These methods like Glaze usually work by taking the original
       | image, changing the style or content, and then applying an
       | LPIPS loss on an image encoder. The hope is that if they can
       | deceive a CLIP image encoder, it will also confuse other models
       | with different architectures, sizes and datasets, while changing
       | the original image as little as possible so it's not too
       | noticeable to a human eye. To be honest I don't think it's a
       | very robust technique. With this one they claim that instead of
       | seeing, for example, a cow on grass, the model will see a
       | handbag; if someone has access to GPT-4V I want to see if it's
       | able to deceive actually big image encoders (usually more
       | aligned to human vision).
       | 
       | EDIT: I have seen a few examples with GPT-4V and, as I imagined,
       | it wasn't deceived. I doubt this technique can have any impact
       | on the quality of the models; honestly, the only impact it
       | could potentially have is to make the training more robust.
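       | 
       | For the curious, a minimal sketch of that general recipe
       | (PyTorch with the lpips and open_clip packages; the
       | hyperparameters are made up, and this is the broad idea, not
       | Nightshade's actual algorithm):
       | 
       |     import torch, lpips, open_clip
       | 
       |     model, _, _ = open_clip.create_model_and_transforms(
       |         'ViT-B-32', pretrained='openai')
       |     percep = lpips.LPIPS(net='vgg')
       | 
       |     def shade(img, decoy_emb, steps=200, lr=0.01, a=4.0):
       |         # img: (1,3,224,224) in [0,1]; decoy_emb: CLIP
       |         # embedding of the decoy concept (e.g. a handbag)
       |         d = torch.zeros_like(img, requires_grad=True)
       |         opt = torch.optim.Adam([d], lr=lr)
       |         for _ in range(steps):
       |             adv = (img + d).clamp(0, 1)
       |             emb = model.encode_image(adv)
       |             # pull the embedding toward the decoy...
       |             pull = 1 - torch.cosine_similarity(
       |                 emb, decoy_emb).mean()
       |             # ...while LPIPS keeps the change subtle
       |             stay = percep(adv * 2 - 1,
       |                           img * 2 - 1).mean()
       |             loss = pull + a * stay
       |             opt.zero_grad()
       |             loss.backward()
       |             opt.step()
       |         return (img + d).clamp(0, 1).detach()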
        
       | garg wrote:
       | Each time there is an update to training algorithms and in
       | response poisoning algorithms, artists will have to re-glaze, re-
       | mist, and re-nightshade all their images?
       | 
       | Eventually I assume the poisoning artifacts introduced in the
       | images will be very visible to humans as well.
        
       | brucethemoose2 wrote:
       | What the article doesn't illustrate is that it destroys fine
       | detail in the image, even in the thumbnails of the reference
       | paper: https://arxiv.org/pdf/2310.13828.pdf
       | 
       | Also... Maybe I am naive, but it seems rather trivial to work
       | around with a quick prefilter? I don't know if traditional
       | denoising would be enough, but worst case you could run img2img
       | diffusion.
        
         | GaryNumanVevo wrote:
          | The poisoned images aren't intended to be viewed, but rather
          | to be scraped after passing a basic human screen. You wouldn't
          | be able to denoise them, as you'd have to denoise the entire
          | dataset. The entire point is that these are virtually
          | indistinguishable from typical training set examples, but they
          | can push prompt frequencies around at will with a small number
          | of poisoned examples.
        
           | minimaxir wrote:
            | > You wouldn't be able to denoise them, as you'd have to
            | denoise the entire dataset
           | 
           | Doing that requires much less compute than training a large
           | generative image model.
        
             | GaryNumanVevo wrote:
              | > the entire point is that these are virtually
              | indistinguishable from typical training set examples
              | 
              | I'll repeat this point for clarity. After going over the
              | paper again, denoising shouldn't affect this attack; it
              | rests on plausible images not being detected by human or
              | AI discriminators (yet).
        
             | brucethemoose2 wrote:
             | I guess the idea is that the model trainers are ignorant of
             | this and wouldn't know to preprocess/wouldn't bother?
             | 
             | That's actually quite plausible.
        
       | enord wrote:
       | I'm completely flabbergasted by the number of comments implying
       | copyright concepts such as "fair use" or "derivative work" apply
       | to trained ML models. Copyright is for _people_, as are the
       | entailing rights, responsibilities and exemptions. This has gone
       | far beyond anthropomorphising and we need to like get it
       | together, man!
        
         | ronsor wrote:
         | You act like computers and ML models aren't just tools used by
         | people.
        
         | CaptainFever wrote:
         | No one is saying a model is the legal entity. The legal
         | entities are still people and corporations.
        
       | tigrezno wrote:
       | Do not fight the AI, it's a lost cause, embrace it.
        
       | gweinberg wrote:
       | For this to work, wouldn't you have to have an enormous number of
       | artists collaborating on "poisoning" their images the same way
        | (cow to handbag) while somehow keeping it secret from AI trainers
       | that they were doing this? It seems to me that even if the
       | technology works perfectly as intended, you're effectively just
       | mislabeling a tiny fraction of the training data.
        
       | ang_cire wrote:
       | Setting aside the efficacy of this tool, I would be very
       | interested in the legal implications of putting designs in your
       | art that could corrupt ML models.
       | 
       | For instance, if I set traps in my home which hurt an intruder we
       | are both guilty of crimes (traps are illegal and are never
       | considered self defense, B&E is illegal).
       | 
       | Would I be responsible for corrupting the AI operator's data if I
       | intentionally include adversarial artifacts to corrupt models, or
       | is that just DRM to legally protect my art from infringement?
       | 
       | edit:
       | 
       | I replied to someone else, but this is probably good context:
       | 
       | DRM is legally allowed to disable or even corrupt the software or
       | media that it is protecting, if it detects misuse.
       | 
       | If an adversarial-AI tool attacks the model, it then becomes a
       | question of whether the model, having now incorporated my
       | protected art, is now "mine" to disable/corrupt, or whether it is
       | in fact out of bounds of DRM.
       | 
       | So for instance, a court could say that the adversarial-AI
       | methods could only actively prevent the training software from
       | incorporating the protected media into a model, but could not
       | corrupt the model itself.
        
         | anigbrowl wrote:
         | None whatsoever. There is no right to good data for model
         | training, nor does any contractual relationship exist between
          | you and a model builder who scrapes your website.
        
           | ang_cire wrote:
           | If you're assuming this is open-shut, you're wrong. I asked
           | this specifically as someone who works in security. A court
           | is going to have to decide where the line is between DRM and
           | malware in adversarial-AI tools.
        
             | ufocia wrote:
             | Worth trying but I doubt it unless we establish a right to
             | train.
        
             | anigbrowl wrote:
              | I'm not. Malware is one thing; passive data poisoning is
             | another. Mapmakers have long used such devices to
             | detect/deter unwanted copying. In the US such 'trap
             | streets' are not protected by copyright, but nor do they
             | generate liability.
             | 
             | https://en.wikipedia.org/wiki/Trap_street
        
         | kortilla wrote:
         | That's like asking if lying on a forum is illegal
        
           | ang_cire wrote:
           | No, it's much closer to (in fact, it is simply) asking if
           | adversarial AI tools count as DRM or as malware. And a court
            | is going to have to decide whether the model and/or its
           | output counts as separate software, which it is illegal for
           | DRM to intentionally attack.
           | 
           | DRM can, for instance, disable its own parent tool (e.g. a
           | video game) if it detects misuse, but it can't attack the
           | host computer or other software on that computer.
           | 
           | So is the model or its output, having been trained on my art,
           | a byproduct of my art, in which case I have a legal right to
           | 'disable' it, or is it separate software that I don't have a
           | right to corrupt?
        
         | GaryNumanVevo wrote:
         | How would that situation be remotely related?
        
         | CaptainFever wrote:
         | Japan is considering it, I think?
         | https://news.ycombinator.com/item?id=38615280
        
       | etchalon wrote:
        | My hope is these types of "poisoning tools" become ubiquitous for
       | all content types on the web, forcing AI companies to, you know,
       | license things.
        
       | mjfl wrote:
        | Another way would be, for every 1 piece of art you make, to post
        | 10 AI-generated images, so that the SNR is really bad.
        
       | Duanemclemore wrote:
       | For visual artists who don't want visible artifacting in the art
       | they feature online, would it be possible to upload these
       | alongside your un-poisoned art, but have them only hanging out in
       | the background? So say having one proper copy and a hundred
        | poisoned copies on the same server, but only showing the un-
       | poisoned one?
       | 
       | Might this "flood the zone" approach also have -some- efficacy
       | against human copycats?
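        | 
        | A minimal sketch of the serving side of that idea, assuming
        | User-Agent sniffing against an illustrative bot list (any
        | scraper that lies about its User-Agent defeats this):
        | 
        |     from http.server import BaseHTTPRequestHandler, HTTPServer
        | 
        |     SCRAPER_UAS = ("GPTBot", "CCBot", "Google-Extended")
        | 
        |     class ArtHandler(BaseHTTPRequestHandler):
        |         def do_GET(self):
        |             ua = self.headers.get("User-Agent", "")
        |             # suspected scrapers get a poisoned variant
        |             name = ("poisoned.png"
        |                     if any(b in ua for b in SCRAPER_UAS)
        |                     else "clean.png")
        |             with open(name, "rb") as f:
        |                 body = f.read()
        |             self.send_response(200)
        |             self.send_header("Content-Type", "image/png")
        |             self.send_header("Content-Length", str(len(body)))
        |             self.end_headers()
        |             self.wfile.write(body)
        | 
        |     HTTPServer(("", 8000), ArtHandler).serve_forever()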
        
       | marcinzm wrote:
       | This feels like it'll actually help make AI models better versus
       | worse once they train on these images. Artists are basically, for
       | free, creating training data that conveys what types of noise
        | do not change the intended meaning of the image to the artist
       | themselves.
        
       | Albert931 wrote:
        | Artists are now fully dependent on software engineers for
       | protecting the future of their career lol
        
       | zirgs wrote:
       | Does it survive AI upscaling or img2img? If not - then it's
       | useless. Nobody trains AI models without any preprocessing. This
       | is basically a tool for 2022.
        
       | r3trohack3r wrote:
       | The number of people who are going to be able to produce high
       | fidelity art with off the shelf tools in the near future is
       | unbelievable.
       | 
       | It's pretty exciting.
       | 
       | Being able to find a mix of styles you like and apply them to new
       | subjects to make your own unique, personalized, artwork sounds
       | like a wickedly cool power to give to billions of people.
        
         | __loam wrote:
         | And we only had to alienate millions of people from their labor
         | to do it.
        
           | DennisAleynikov wrote:
           | Yeah, sadly those millions of people don't matter in the
           | grand scheme of things and were never going to profit off
           | their work long term
        
             | r3trohack3r wrote:
             | What a bummer of a thing to say.
             | 
             | Those millions/billions of people matter a great deal.
        
           | mensetmanusman wrote:
           | Is this utilitarianism?
        
           | r3trohack3r wrote:
           | Absolutely agree we should allow people to accumulate equity
           | through effective allocation of their labor.
           | 
           | And I also agree that we shouldn't build systems that
           | alienate people from that accumulated equity.
        
         | 23B1 wrote:
          | It'll be about as wickedly cool a tool as the ability to get
          | on the internet, e.g. commoditized, transactional, and boring.
        
           | sebzim4500 wrote:
           | I know this is an unpopular thing to say these days, but I
           | still think the internet is amazing.
           | 
           | I have more access to information now than the most powerful
           | people in the world did 40 years ago. I can learn about
           | quantum field theory, about which pop star is allegedly
           | fucking which other pop star, etc.
           | 
           | If I don't care about the law I can read any of 25 million
           | books or 100 million scientific papers all available on
           | Anna's Archive for free in seconds.
        
         | kredd wrote:
          | In terms of art, people tend to put value not on the result,
          | but on the origin and process. People will just look down on
         | any art that's AI generated in a couple of years when it
         | becomes ubiquitous.
        
           | MacsHeadroom wrote:
           | Nope, but I already look down on artists who refuse to
           | integrate generative AI into their processes.
        
             | mplewis wrote:
             | Can you share some of the art you've made with generative
             | AI?
        
             | jurynulifcation wrote:
             | Cool, who are you?
        
             | MisterBastahrd wrote:
             | People who use generative AI in their processes are not
             | artists.
        
           | redwall_hp wrote:
           | This is already the case. Art is a process, a form of human
           | expression, not an end result.
           | 
           | I'm sure OpenAI's models can shit out an approximation of a
           | new Terry Pratchett or Douglas Adams novel, but nobody with
           | any level of literary appreciation would give a damn unless
           | fraud was committed to trick readers into buying it. It's not
           | the author's work, and there's no human message behind it.
        
         | falcolas wrote:
         | > Being able to find a mix of styles you like and apply them to
         | new subjects to make your own unique, personalized, artwork
         | sounds like a wickedly cool power to give to billions of
         | people.
         | 
         | And in the process, they will obviate the need for Nightshade
         | and similar tools.
         | 
         | AI models ingesting AI generated content does the work of
         | destroying the models all by itself. Have a look at "Model
         | Collapse" in relation to generative AI.
        
       | efitz wrote:
       | This is the DRM problem again.
       | 
       | However much we might wish that it was not true, ideas are not
       | rivalrous. If you share an idea with another person, they now
       | have that idea too.
       | 
       | If you share words on paper, then someone with eyes and a brain
       | might memorize them (or much more likely, just grasp and retain
       | the ideas conveyed in the words).
       | 
       | If you let someone hear your music, then the ideas (phrasing,
       | style, melody, etc) in that music are transferred.
       | 
       | If you let people see a visual work, then the stylistic and
       | content elements of that work are potentially absorbed by the
       | audience.
       | 
       | We have copyright to protect specific embodiments, but mostly if
       | you try to share ideas with others without letting them use the
       | ideas you shared, then you are in for a life of frustration and
        | an escalating arms race.
       | 
       | I completely sympathize with anyone who had a great idea and
       | spent a lot of effort to realize it. If I invented/created
       | something awesome I would be hurt and angry if someone "copied"
       | it. But the hard cold reality is that you cannot "own" an idea.
        
         | freeAgent wrote:
         | This doesn't stop anyone from viewing or scraping the work,
          | though, so in no way is it DRM. It just causes certain machine
          | methods of interpreting an image to read it oddly compared to
          | human viewers. Models can still learn from the images.
        
           | avhon1 wrote:
           | It absolutely is DRM, just a different form than media
           | encryption. It's a purely-digital mechanism of enforcing
           | rights.
        
             | freeAgent wrote:
             | It doesn't enforce any rights. It modifies the actual
             | image. Humans and computers still have equal, open access
             | to it.
        
               | efitz wrote:
               | It's designed to restrict the purposes for which the
               | consumer can use the work. It is exactly like DRM in this
               | way.
        
               | freeAgent wrote:
               | How does it stop you from using an image however you
               | want?
        
               | freeAgent wrote:
               | To be clear, you can still train AI with these images.
               | Nothing is stopping you.
        
               | renewiltord wrote:
               | That's true of almost all DRM, isn't it? Even for the
               | most annoying form that is always-online DRM, everyone is
               | provided the same access to the bytes that form a game.
               | You and I have the same bytes of game.
               | 
               | It's the purpose of some of those bytes that turns it
               | into DRM.
        
               | freeAgent wrote:
               | No, it's not the same. The game is non-functional without
               | the proper keys/authorization whereas images run through
               | this algorithm are still images that anyone and any
               | computer can view in the same manner without any
               | authentication.
        
         | tsujamin wrote:
         | Being able to fairly monetise your creative work and put food
         | on the table is a _bit_ rivalrous though, don't you think?
        
           | efitz wrote:
           | No, I disagree. There is no principle of the universe or
           | across human civilizations that says that you have a right to
           | eat because you produced a creative work.
           | 
           | The way societies work is that the members of the society
           | contribute and benefit in prescribed ways. Societies with
           | lots of excess production may at times choose to allow
           | creative works to be monetized. Societies without much
           | surplus are extremely unlikely to do so, eg a society with
           | not enough food for everyone to eat in the middle of a famine
           | is extremely unlikely to feed people who only create art;
           | those people will have to contribute in some other way.
           | 
           | I think it is a very modern western idea (less than a century
           | old) that _many_ artists can dedicate themselves solely to
           | producing the art they want to produce. In all other times
           | artists either had day jobs or worked on commission.
        
             | jrflowers wrote:
             | > There is no principle of the universe or across human
             | civilizations
             | 
             | Can you list the principles across human civilizations?
        
               | juunpp wrote:
               | He can also presumably list the principles of the
               | universe.
        
           | renewiltord wrote:
            | The tragedy of "your business model is not my problem" as a
            | spreading idea is that, while you're right (distribution is
            | where the money is, not creation), intellectual property is
            | de facto weakened today and IP piracy is widely considered
            | acceptable.
        
         | throwoutway wrote:
         | I don't see the parallel between this offensive tool and DRM. I
          | could, say, buy a perpetual license to an image from the artist,
         | so that I can print it and put it on my wall, while it can
         | simultaneously be poisonous to an AI system. I can even steal
         | it and print it, while it is still poisonous to an AI system.
         | 
         | The closest parallel I can think of is that humans can ingest
         | chocolate but dogs should not.
        
           | efitz wrote:
           | A huge amount of DRM effort has been spent in the
           | watermarking area, which is similar, but not exactly the
           | same.
        
           | jdietrich wrote:
           | What you've described is the literal, dictionary definition
           | of Digital Rights Management - a technology to restrict the
           | use of a digital asset beyond the contractually-agreed terms.
           | Copying is only one of many uses that the copyright-holder
           | may wish to prevent. The regional lockout on a DVD had
           | nothing to do with copy-protection, but it was still DRM.
        
           | gwbas1c wrote:
            | It's about the arms race: DRM will always be cracked (with a
            | sufficiently motivated customer). AI poisoning will always be
            | cracked (with a sufficiently motivated crawler).
        
         | xpe wrote:
         | Many terms of art from economics are probably not widely-known
         | here.
         | 
         | > In economics, a good is said to be rivalrous or a rival if
         | its consumption by one consumer prevents simultaneous
         | consumption by other consumers, or if consumption by one party
         | reduces the ability of another party to consume it. -
         | Wikipedia: Rivalry (economics)
         | 
         | Also: we should recognize that stating something as rivalrous
         | or not is _descriptive_ (what exists) not _normative_ (what
         | should be).
        
         | xpe wrote:
         | > But the hard cold reality is that you cannot "own" an idea.
         | 
         | The above comment is true about the properties of information,
         | as explained via the lens of economics. [1]
         | 
         | However, one ignores ownership as defined by various systems
         | (including the rule of law and social conventions) at one's own
         | peril. Such systems can also present a "hard cold reality" that
         | can bankrupt or ostracize you.
         | 
         | [1] Don't let the apparent confidence and technicality of the
         | language of economists fool you. Economics isn't the only game
         | in town. There are other ways to model and frame the world.
         | 
         | [2] Dangling footnote warning. I think it is instructive to
         | recognize that the field of economics has historically shown a
         | kind of inferiority complex w.r.t. physics. Some economists
          | aspire to the level of rigor found in physics, and that is well
         | and good, but perhaps that effort should not be taken too
         | seriously nor too far, since economics as a field operates at a
         | different level. IMO, it would be wise for more in the field to
         | eat a slice of humble pie.
         | 
         | [3] Ibid. It is well-known that economists can be "hired guns"
         | used to "prove" a wide variety of things, many of which are
         | subjective. My point: you can hire an economist to shore up
         | one's political proposals. Is the same true of physicists?
         | Hopefully not to the same degree. Perhaps there are some cases
         | of hucksterism, but nothing like the history of economists-
         | wagging-the-dog! At some point, the electron tunnels or it does
         | not.
        
           | meowkit wrote:
           | There are other games in town.
           | 
           | But whatever game gives the most predictive power is going to
           | win.
        
             | xpe wrote:
             | There is no need to frame this as "winning versus losing"
             | regarding the many models that we draw upon.
             | 
             | Even when talking about various kinds of scientific and
             | engineering fields, predictive power isn't the only
              | criterion, much less the best one. Sometimes the simpler, less
             | accurate models work well enough with less informational
             | and computational cost.
             | 
             | Even if we focus on prediction (as opposed to say
             | statistical inference), often people want some kind of
             | hybrid. Perhaps a blend of satisficing with limited
             | information, scoped action spaces, and bounded computation;
             | i.e. good enough given the information we have to make the
             | decisions we can actuate with some computational budget.
        
             | sfifs wrote:
             | By that metric, various economic schools have been
              | hilariously inept and would be classified as not dissimilar
             | to various schools of religious theology with their own
             | dogmas. It's only in the last 15 years or so that some
             | focus on empiricism and explaining reality rather than
             | building theoretical castles in the air is coming about and
             | is still far from mainstream.
        
         | xpe wrote:
         | > ... you cannot "own" an idea.
         | 
         | Let's talk about ownership in a broader sense. In practice, one
         | cannot effectively own (retain possession of) something without
         | some combination of physical capability or coercion (or threat
         | of coercion). Meaning: maintaining ownership of anything
         | (physical or otherwise) often depends on the rule of law.
        
           | thomastjeffery wrote:
           | Then let's use a more precise term that is also present in
           | law: monopoly.
           | 
           | You can't monopolize an idea.
           | 
           | Copyright law is a prescription, not a description. Copyright
           | law _demands_ that everyone play along with the lie that is
           | intellectual monopoly. The effectiveness of that demand
           | depends on how well it can be enforced.
           | 
           | Playing pretend during the age of the printing press may have
           | been easy enough to coordinate, but it's practically
           | impossible here in the digital age.
           | 
           | If we were to increase enforcement to the point of
           | effectiveness, then what society would be left to
           | participate? Surely not a society I am keen to be a part of.
        
         | juunpp wrote:
         | You don't copyright ideas, you copyright works. And these
         | artists' productions are works, not abstract ideas, with
         | copyrights, and they are being violated. This is simple law.
         | Why do people have such a hard time with this? Are you the one
          | training the models, needing to find a cognitive escape out
          | of the illegality and wrongdoing of your activities?
        
           | theragra wrote:
            | If that were true, then we wouldn't have such a great
            | difference of opinion on this topic.
        
             | juunpp wrote:
             | That GP is utterly confused about copyright law is not an
             | opinion.
        
               | sircastor wrote:
               | The United States Supreme Court rulings are supported
               | literally by opinions of the justices.
        
           | sebzim4500 wrote:
           | That may be the law, although we are probably years of legal
           | proceedings away from finding out.
           | 
           | It obviously is not "simple law".
        
           | rlt wrote:
           | It's not obvious to me that using a copyrighted image to
           | train a model is copyright infringement. It's certainly not
           | copyright infringement when used to train a human who may end
           | up creating works that are influenced by (but not copies of)
           | the original works.
           | 
           | Now, if the original copyrighted work can be extracted or
           | reproduced from the model, that's obviously copyright
           | infringement.
           | 
           | OpenAI etc should ensure they don't do that.
        
             | Andrex wrote:
             | Reproduced to what fidelity? 100%?
             | 
             | If OpenAI's output reproduces a copyrighted image with one
             | pixel changed, is that valid in your view? Where does the
             | line end?
             | 
             | Copyrighted material should never be used for nonacademic
             | language models. "Garbage in, garbage out." All results are
             | tainted.
             | 
             | "But being forced to use non-copyrighted works will only
             | slow things down!"
             | 
             | Maybe that's a good thing, too. Copyright is something
             | every industry has to accept and deal with -- LLMs don't
             | get a "cool tech, do whatever" get-out-of-jail free card.
        
               | pawelmurias wrote:
                | Copyright is decided by the courts; it's a legal matter,
                | not a biological one. If the courts decide it's legal, it
                | will be.
        
               | Andrex wrote:
               | I'm totally down for the courts handling this
               | AI/copyright mess, but I don't think technologists are
               | going to like the results.
               | 
                | By virtue of the fact that it _is_ "fuzzy" and open to
                | interpretation, we're going to see lawsuits, and the
                | resulting chilling effects will deter US tech firms from
                | ingesting large amounts of copyrighted material without
                | a second thought. They will be giving it a second, third,
                | fourth, etc. thought once the lawsuits start.
               | 
               | It's gonna be like submarine patents on steroids.
               | 
               | Like I said, I'm down for letting the courts decide. But
               | AI supporters should probably avoid kicking the hornets'
               | nests regarding copyright.
        
               | rlt wrote:
               | > Reproduced to what fidelity? 100%?
               | 
               | Whatever the standard is for humans doing the exact same
               | thing.
        
           | huytersd wrote:
           | Nothing is being reproduced. Just the ideas being reused.
        
           | sircastor wrote:
           | >This is simple law. Why do people have such a hard time with
           | this?
           | 
           | Because this isn't simple law. It feels like simple
           | infringement, but there's no actual copying going on. You
           | can't open up the database and find a given duplicate of a
           | work. Instead you have some abstraction of what it takes to
           | get to a given work.
           | 
           | Also it's important to point out that nothing in the law is
           | sure. A good lawyer, a sympathetic judge, a
           | bored/interested/contrarian juror, etc can render "settled
           | law" unsettled in an instant. The law is not a set of board
           | game rules.
        
             | flkiwi wrote:
             | If the AI were a human and that human made an image that
             | copied substantial elements of another human's creative
             | work after a careful review of the original creator's work,
             | even if it was not an original copy and no archival copy
             | was stored somewhere in the second creator's creative
             | space, I would be concerned about copyright infringement
             | exposure if I were the second (copying) creator.
             | 
             | I'm open to the idea that copyright law might need to
             | change, but it doesn't seem controversial to note that
             | scraping actual creative works to extract elements for an
             | algorithm to generate new works crosses a number of
             | worrying lines.
        
           | SamPatt wrote:
           | Illegality and wrongdoing are completely distinct categories.
           | 
           | I'm not convinced that most copyright infringements are
           | immoral regardless of their legal status.
           | 
           | If you post your images for the world to see, and someone
           | uses that image, you are not harmed.
           | 
           | The idea that the world owes you something after you
           | deliberately shared it with others seems bizarre.
        
             | brookst wrote:
             | Imagine if every book or advertisement or public
             | conversation you overheard led to future claims that you
             | had unethically learned from public information. It's such
             | a weird worldview.
             | 
             | (BTW I forbid you from using my comment here in your future
             | reasoning)
        
           | fiddlerwoaroof wrote:
           | > This is simple law.
           | 
           | "One may well ask: 'How can you advocate breaking some laws
           | and obeying others?' The answer lies in the fact that there
           | are two types of laws: just and unjust. I would be the first
           | to advocate obeying just laws. One has not only a legal but a
           | moral responsibility to obey just laws. Conversely, one has a
           | moral responsibility to disobey unjust laws. I would agree
           | with St. Augustine that 'an unjust law is no law at all.'"
        
       | gfodor wrote:
        | Huge market for snake oil here. There is no way that such tools
        | will ever win, given the requirement that the art remain viewable
        | to human perception, so even if you made something that worked
        | from first principles (which this sounds like it doesn't), it
        | would be worked around immediately.
       | 
        | The only real way for artists, or anyone really, to try to hold
        | back models from training on human outputs is through the law,
        | i.e., leveraging state-backed violence to deter the things they
        | don't want. This won't be a perfect solution either; if anything
        | it will just add incentives for people to develop decentralized
        | training networks that "launder" the copyright violations that
        | would otherwise allow for prosecutions.
       | 
       | All in all it's a losing battle at a minimum and a stupid battle
       | at worst. We know these models can be created easily and so they
        | will be, eventually, since you can't prevent a computer from
       | observing images you want humans to be able to observe freely.
        
         | AJ007 wrote:
          | The level of the claims, accompanied by an enthusiastic
          | reception from a technically illiterate audience, makes this
          | look, smell, and sound like snake oil even without much deep
          | investigation.
         | 
         | There is another alternative to the law. Provide your art for
         | private viewing only, and ensure your in person audience does
         | not bring recording devices with them. That may sound absurd,
         | but it's a common practice during activities like having sex.
        
           | gfodor wrote:
            | True, I can imagine that kind of thing becoming popular.
        
         | thfuran wrote:
         | >There is no way that such tools will ever win, given the
          | requirement that the art remain viewable to human perception
         | 
         | On the other hand, the adversarial environment might push
         | models towards a representation more aligned with human
         | perception, which is neat.
        
       | minimaxir wrote:
       | A few months ago I made a proof-of-concept on how finetuning
       | Stable Diffusion XL on known bad/incoherent images can actually
       | allow it to output "better" images if those images are used as a
       | negative prompt, i.e. specifying a high-dimensional area of the
       | latent space that model generation should stay away from:
       | https://news.ycombinator.com/item?id=37211519
       | 
       | There's a nonzero chance that encouraging the creation of a large
       | dataset of known tampered data can ironically _improve_
       | generative AI art models by allowing the model to recognize
       | tampered data and allow the training process to work around it.
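        | 
        | A minimal sketch of the inference side of that idea, assuming a
        | finetune has already bound a hypothetical token "<tampered>" to
        | the known-bad images (Hugging Face diffusers API):
        | 
        |     import torch
        |     from diffusers import StableDiffusionXLPipeline
        | 
        |     pipe = StableDiffusionXLPipeline.from_pretrained(
        |         "stabilityai/stable-diffusion-xl-base-1.0",
        |         torch_dtype=torch.float16,
        |     ).to("cuda")
        | 
        |     image = pipe(
        |         prompt="a watercolor lighthouse at dusk",
        |         # steer sampling away from the tampered region
        |         negative_prompt="<tampered>",
        |     ).images[0]
        |     image.save("out.png")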
        
       | k__ wrote:
        | Are there LLMs that were trained on public-domain content only?
        | 
        | I would think there is enough content out there to get
        | reasonably good results.
        
       | squidbeak wrote:
       | I really don't understand the anxiety of artists towards AI - as
       | if creatives haven't always borrowed and imitated. Every leading
       | artist has had acolytes, and while it's true no artist ever had
       | an acolyte as prodigiously productive as AI will be, I don't see
       | anything different between a young artist looking to Picasso for
       | cues and Stable Diffusion or DALL-E doing the same. Styles and
       | methods haven't ever been subject to copyright - and art would
       | die the moment that changed.
       | 
       | The only explanation I can find for this backlash is that artists
       | are actually worried just like the rest of us that pretty soon AI
       | will produce higher quality more inventive work faster and more
       | imaginatively than they can - which is very natural, but not a
       | reason to inhibit an AI's creative education.
        
         | beepbooptheory wrote:
         | This has been litigated over and over again, and there have
         | been plenty of good points made and concerns raised over it by
         | those who it actually affects. It seems a little bit
         | disingenuous (especially in this forum) to say that that
         | conclusion is the "only explanation" you can come up with. And
         | just to avoid prompting you too much: trust me, we all know or
         | can guess why you think AI art is a good thing regardless of
         | any concerns one might bring up.
        
         | jwells89 wrote:
          | Imitation isn't the problem so much as the fact that ML-
          | generated images are composed of a mush of the images the
          | model was trained on.
         | A human artist can abstract the concepts underpinning a style
         | and mimic it by drawing all-new lineart, coloration, shading,
         | composition, etc, while the ML model has to lean on blending
         | training imagery together.
         | 
         | Furthermore there's a sort of unavoidable "jitter" in human-
         | produced art that varies between individuals that stems from
         | vastly different ways of thinking, perception of the world,
         | mental abstraction processes, life experiences, etc. This is
         | why artists who start out imitating other artists almost always
         | develop their imitations into a style all their own -- the
         | imitations were already appreciably different from the original
         | due to the aforementioned biases and those distinctions only
         | grow with time and experimentation.
         | 
         | There would be greatly reduced moral controversy surrounding ML
         | models if they lacked that mincemeat/pink slime aspect.
        
         | 23B1 wrote:
         | "I really don't understand the anxiety of slaves towards free
         | labor, as if humans haven't always had to work to survive"
        
           | dang wrote:
           | Could you please stop posting unsubstantive comments and
           | flamebait? You've unfortunately been doing it repeatedly.
           | It's not what this site is for, and destroys what it is for.
           | 
           | If you wouldn't mind reviewing
           | https://news.ycombinator.com/newsguidelines.html and taking
           | the intended spirit of the site more to heart, we'd be
           | grateful.
        
       | will5421 wrote:
       | I think the artists need to agree to stop making art altogether.
       | That ought to get people's attention. Then the AI people might
       | (be socially pressured or legally forced to) put their tools
       | away.
        
       | Zetobal wrote:
        | Well, at least for SDXL it's not working in either LoRA or
        | Dreambooth finetunes.
        
       | chris-orgmenta wrote:
        | I want _progressive fees_ on copyright/IP/patent usage, and
       | worldwide gov cooperation/legislation (and perhaps even worldwide
       | ability to use works without obtaining initial permission,
       | although let's not go into that outlandish stuff)
       | 
       | I want a scaling license fee to apply (e.g. % pegged to revenue.
       | This still has an indirect problem with different industries
       | having different profit margins, but still seems the fairest).
       | 
       | And I want the world (or EU, then others to follow suit) to
        | slowly reduce copyright to 0 years* after the artist's death if
        | owned by a person, and 20-30 years max if owned by a corporation.
       | 
       | And I want the penalties for not declaring usage** / not paying
       | fees, to be incredibly high for corporations... 50% gross
       | (harder) / net (easier) profit margin for the year? Something
       | that isn't a slap on the wrist and can't be wriggled out of
       | _quite_ so easily, and is actually an incentive not to steal in
        | the first place.
       | 
       | [*]or whatever society deems appropriate.
       | 
       | [**]Until auto-detection (for better or worse) gets good enough.
       | 
        | IMO that would allow personal use, encourage new entrants to the
        | market, encourage innovation, and incentivise better behaviour
        | from OpenAI et al.
        
       | whywhywhywhy wrote:
       | Why are there no examples?
        
       | arisAlexis wrote:
       | Wouldn't this be applicable to text too?
        
       | matteoraso wrote:
        | Too little, too late. There are already very large, high-quality
        | datasets for training AI art generators.
        
       | eigenvalue wrote:
       | This seems like a pretty pointless "arms race" or "cat and mouse
       | game". People who want to train generative image models and who
       | don't care about what artists think about it at all can just do
       | some basic post-processing on the images that is just enough to
       | destroy the very carefully tuned changes this Nightshade
       | algorithm makes. Something like resampling it to slightly lower
       | resolution and then using another super-resolution model on it to
       | upsample it again would probably be able to destroy these subtle
       | tweaks without making a big difference to a human observer.
       | 
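        | A minimal sketch of that destructive round-trip using Pillow
        | (plain Lanczos resampling stands in for the super-resolution
        | model a serious pipeline would likely use):
        | 
        |     from PIL import Image
        | 
        |     def scrub(src, dst, factor=2):
        |         img = Image.open(src).convert("RGB")
        |         w, h = img.size
        |         # downsampling low-pass filters away the
        |         # high-frequency adversarial detail
        |         small = img.resize((w // factor, h // factor),
        |                            Image.LANCZOS)
        |         # upsample back to the original size
        |         big = small.resize((w, h), Image.LANCZOS)
        |         big.save(dst, quality=95)  # JPEG re-encode helps too
        | 
        |     scrub("poisoned.png", "scrubbed.jpg")
        | 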
       | In the future, my guess is that courts will generally be on the
       | side of artists because of societal pressures, and artists will
       | be able to challenge any image they find and have it sent to yet
       | another ML model that can quickly adjudicate whether the
       | generated image is "too similar" to the artist's style (which
       | would also need to be dissimilar enough from everyone else's
       | style to give a reasonable legal claim in the first place).
       | 
       | Or maybe artists will just give up on trying to monetize the
       | images themselves and focus only on creating physical artifacts,
       | similar to how independent musicians make most of their money
       | nowadays from touring and selling merchandise at shows (plus
       | Patreon). Who knows? It's hard to predict the future when there
       | are such huge fundamental changes that happen so quickly!
        
         | hackernewds wrote:
          | The point is you could circumvent one Nightshade, but as long
          | as the cat-and-mouse game continues there can be more.
        
       ___________________________________________________________________
       (page generated 2024-01-20 23:00 UTC)