[HN Gopher] Nightshade: An offensive tool for artists against AI...
___________________________________________________________________
Nightshade: An offensive tool for artists against AI art generators
Author : ink404
Score : 512 points
Date : 2024-01-19 17:42 UTC (2 days ago)
(HTM) web link (nightshade.cs.uchicago.edu)
(TXT) w3m dump (nightshade.cs.uchicago.edu)
| ink404 wrote:
| Paper is here: https://arxiv.org/abs/2310.13828
| KingOfCoders wrote:
| Artists sitting in art school looking at other artists pictures
| to learn how to paint in different styles defend against AI
| learning from their pictures how to paint in different styles.
| visarga wrote:
| AI is just a tool in someone's hand, there's a human who
| intends something
| ta8645 wrote:
| If that's true, then it should be fine for that human to
| paint with the brush of his AI tool. Why should that human
| artist be restricted in the types of tools he uses to create
| his artwork?
| thfuran wrote:
| Should I be restricted in using copy paste as my tool for
| creating art?
| ta8645 wrote:
| No you should not. You should be able to use any tool you
| want.
|
| If you produce a work that is too much of a copy from the
| original, you might be liable to a copyright claim.. but
| the act of copy and paste should not be prohibited in the
| generation of something new.
|
| This is done all the time by artists.. who perhaps create
| an artwork by copy and pasting advertisements out of a
| womans magazine to create an image of a womans face made
| only of those clipping. Making a statement about identity
| creation from corporate media... we should not put
| restrictions on such art work.
|
| Here's just one example of an artist using copy-n-paste
| of content they don't own to create something new:
|
| https://torontolife.com/culture/this-artist-creates-one-
| of-a...
| visarga wrote:
| like collage art?
| NoraCodes wrote:
| This is a tired argument; whether or not the diffusion models
| are "learning", they are a tool of capital to fuck over human
| artists, and should be resisted for that reason alone.
| persnickety wrote:
| As a representative of a lot of things but hardly any capital
| who uses diffusion models to get something I would otherwise
| not pay a human artist for anyway, I testify that, the models
| are not exclusively what you describe them to be.
|
| I do not support indiscriminate banning of anything and
| everything that can potentially be used to fuck someone over.
| NoraCodes wrote:
| I did not say they were _exclusively_ that; I said they
| _were_ that.
|
| Once we as a society have implemented a good way for the
| artists whose work powers these machines to survive, you
| can feel good about using them. Until then, frankly, you're
| doing something immoral by paying to use them.
| akx wrote:
| What if I run Stable Diffusion locally without paying
| anyone anything? Is it less immoral?
| NoraCodes wrote:
| Marginally, yeah, since you're not supporting the
| development of more capable labor-saving devices in this
| category.
|
| I'm still not a fan, though.
| riversflow wrote:
| How is empowering others to create not a moral good?
| NoraCodes wrote:
| It is! This method of doing so has overwhelming negative
| externalities, though. I'd expect anyone who actually
| gave a shit about AI empowering people to create to spend
| just as much effort pushing legislation so the displaced
| artists don't starve on the street as a result.
| Filligree wrote:
| Who pays to use generators? The open ones are way more
| capable and interesting, generally.
| wokwokwok wrote:
| Is that really your concern?
|
| Whether you pay for it?
|
| Let's put it this way: paying for or not paying for
| stolen goods. Does it make any difference?
|
| Why is that remotely relevant?
|
| You want to argue "are the good stolen?" Sure. That's a
| discussion we can have.
|
| Did you pay for them or not? Who cares?
| thfuran wrote:
| That's a terrible analogy. Until the scrapers start
| deleting all other copies of what what they're scraping,
| "stealing" the art in a traditional sense, there's no
| harm done in the process of training the network. Any
| harm done comes after that.
| jsheard wrote:
| Isn't high-quality open image generation almost entirely
| dependent on Stability releasing their foundational
| models for free, at great expense to them?
|
| That's not something you'll be able to rely on long-term,
| there won't always be a firehose of venture capital money
| to subsidise that kind of charity.
| Filligree wrote:
| The cost of training them is going down, though. Given
| the existence of models like Pixart, I don't think we'll
| stay dependent on corporate charity for long.
| Levitz wrote:
| By this logic we ought to start lynching artists, why
| they didn't care about all of those who lost their jobs
| making pigments, canvasses, pencils, brushes etc etc
| hirsin wrote:
| Artists pay those people and make their jobs needed. Same
| as the person above claiming Duchamp didn't negotiate
| with the ceramics makers - yes, they absolutely did and
| do pay their suppliers. Artists aren't smash and grabbing
| their local Blick.
|
| AI pays no artist.
| CaptainFever wrote:
| Not digital artists, though.
| Levitz wrote:
| >Artists pay those people and make their jobs needed.
|
| Enter factories, now you gather all the knowledge from
| those who made the product, automate it and leave them
| without jobs.
| NoraCodes wrote:
| Why on earth would you choose the word 'lynch' here? At
| worst I'm suggesting mild disapproval and regulation, not
| historically racist mob violence.
| SonicSoul wrote:
| great comment!
|
| imagine being a photographer that takes decades to perfect
| their craft. sure another student can study and mimic your
| style. but it's still different than some computer model
| "ingesting" vast amount of photos and vomiting something
| similar for $5.99 in aws cpu cost so that some prompt jockey
| can call themselves an AI artist and make money off of other
| peoples talent.
|
| i get that this is cynical and does not encompass all ai art,
| but why not let computers develop their own style wihout
| ingesting human art? that's when it would actually be AI art
| wincy wrote:
| Like 99.9% of the art the common people care about is Darth
| Vader and Taylor Swift and other pop culture stuff like
| that.
|
| These people literally don't care what your definition of
| what is and isn't art is, or how it's made, they just want
| a lock screen wallpaper of themselves fighting against
| Thanos on top of a volcano.
|
| The argument of "what is art" has been an academic
| conversation largely ignored by the people actually
| consuming the art for hundreds of years. Photography was
| just pop culture trash, comics were pop culture trash,
| stick figure web comics were pop culture trash. Today's pop
| culture trash is the "prompt jockey".
|
| I make probably 5-10 pictures every day over the course of
| maybe 20 minutes as jokes on Teams because we have Bing
| Chat Enterprise. My coworkers seem to enjoy it. Nobody
| cares that it's generated. I'm also not trying to be an
| "artist" whatever that means. It just is, and it's fun. I
| wasn't gonna hire an artist to draw me pictures to shitpost
| to my coworkers. It's instead unlocked a new fun way to
| communicate.
| SonicSoul wrote:
| not entirely sure what your point is, but i think you are
| saying that art is just a commodity we use for cheap
| entertainment so it's ok for computers to do the same?
|
| in the context of what i was saying the definition of
| what is art can be summed up as anything made by humans.
| i have no problem when its used in memes and being open
| sourced etc.. the issue i have is when a human invests
| real time into it and then its taken and regurgitated
| without their permission. do you see that distinction?
| johnnyanmac wrote:
| I mean, I don't think many care about your personal use
| of art. You can take copyright images and shit post and
| Disney won't go suing your workplace.
|
| But many big players do want to use this commercially and
| that's where a lot of these lines start to form. No
| matter how lawsuits go you will probably still be able to
| find some LLM to make Thanos fighting a volcano. It's
| just a matter of how/if companies can profit from it.
| dartharva wrote:
| This response should be gilded
| Levitz wrote:
| Because that's not what happens, ever. You wouldn't ask a
| human to have their style of photographing when they don't
| know what a photograph even looks like.
| bongodongobob wrote:
| That's a funny argument because artists lost their shit
| over photography too. Now anyone can make a portrait!
| Photography will kill art!
|
| Art is the biggest gate kept industry there is and I detest
| artists who believe only they are the chosen one.
|
| Art is human expression. We all have a right to create what
| we want with whatever tools we want. They can adapt or be
| left behind. No sympathy from me.
| witherk wrote:
| "Cameras are a tool of captial to fuck over human portrait
| artists"
|
| It's funny that these people use the langauge of communism,
| but apparently see artwork as purley an economic activity.
| NoraCodes wrote:
| That's an intentional misinterpretation, I think. I mention
| art as an economic activity because it's primarily
| professional artists that are harmed by the widespread
| adoption of this technology.
| indigo0086 wrote:
| They tried to use the labor theory early on by claiming,
| "real art takes hard work and time as opposed to the
| miniscule cpu hours computers use to make 'AI art". The
| worst thing AI brings to the table is amplifying these
| types of sentiments to control industry in their favor
| where they would otherwise be unheard and relegated to
| Instagram likes
| redwall_hp wrote:
| > It's funny that these people use the langauge of
| communism, but apparently see artwork as purley an economic
| activity.
|
| You hit the nail on the head. Copyright is, by its very
| nature, a "tool of capital." It's a means of creating new
| artificial property fiefdoms for a select few capital
| holders to lord over, while taking rights from anyone else
| who wants to engage in the practice of making art.
|
| Everyone has their right to expression infringed upon, all
| so the 1% of artists can perpetually make money on things,
| which are ultimately sold to corporations that only pay
| them pennies on the dollar anyway.
|
| You, as an indie hip hop or house musician supported by a
| day job, can't sample and chop some vocals or use a slice
| of a chord played in a song (as were common in the 80s and
| 90s) for a completely new work, but apparently the world is
| such a better place because Taylor Swift is a
| multimillionaire and Disney can milk the maximum value from
| space and superhero films.
|
| I'd rather live in a world where anyone is free to make
| whatever art they want, even if everyone has to have a day
| job.
| skydhash wrote:
| > It's a means of creating new artificial property
| fiefdoms for a select few capital holders to lord over,
| while taking rights from anyone else who wants to engage
| in the practice of making art.
|
| I doubt even Disney sue people who want to make fan art.
| But if you want to sell said art or distribute it, they
| will.
| dahart wrote:
| > fiefdoms for a select few
|
| What do you mean? Copyright protects all creative works,
| and all authors of those creative works. That some have
| greater means to enforce was always true, and copyright
| doesn't cause that, it (imperfectly) helps mitigate it.
| What copyright does is actually prevent them from
| stealing work from independent artists en masse, and
| force them to at least hire and pay some artists.
|
| > I'd rather live in a world where anyone is free to make
| whatever art they want, even if everyone has to have a
| day job.
|
| You're suggesting abolish Copyright and/or the Berne
| Convention? Yeah the problem with this thinking is that
| then the big publishers are completely free to steal
| everyone's work without paying for it. The very thing
| you're complaining about would only get way _way_ worse
| if we allowed anyone to "freely" make whatever art they
| want by taking it from others. "Anyone" means Disney too,
| and Disney is more motivated than you.
|
| > You, as an indie hip hop or house musician supported by
| a day job, can't sample and chop some vocals or use a
| slice of a chord played in a song... for a completely new
| work
|
| Hehe, if you sample, you are by definition not making a
| completely new work. But this is a terrible argument
| since sampling in music is widespread and has sometimes
| been successfully defended in court. DJs are the best
| example of independent artists who need protection you
| can think of?
| educaysean wrote:
| As a human artist I don't feel the same as you, and I somehow
| doubt that you care all that much about what we think
| anyways. You already made up your mind about the tech, so
| don't feel the need to protect us from "a tool of capital
| [sic]" to fortify your argument.
| NoraCodes wrote:
| My opinion is based on my interactions with my friends who
| are artists. I admit freely to caring less about what
| people I don't know say, in the absence of additional
| evidence.
| tester457 wrote:
| Among working human artists your opinion is in the
| minority. Most professionals are not a fan of this.
| bongodongobob wrote:
| Yeah because their wage is inflated. Photographers were
| mad about digital cameras too. Womp womp.
| tester457 wrote:
| Inflated is not an apt descriptor of artist wages, those
| are known to be low.
| bongodongobob wrote:
| If you're independent selling paintings, sure. Designing
| packaging or something commercial? 4 hours of work a week
| for nearly 6 figures. I know a couple graphic designers
| and they don't do shit for what they're paid.
| johnnyanmac wrote:
| You should probably tell the other millions of artists
| busting out 60+ hour workweek in industry for half that
| price where these jobs are. That could solve this problem
| overnight.
| bongodongobob wrote:
| Manufacturing/packaging.
| mlrtime wrote:
| And how are they different than the millions of factory
| workers on a line?
| MacsHeadroom wrote:
| And horse wages (some oats) were low when the car was
| invented. Yet they were still inflated. There used to be
| more horses than humans in this country. Couldn't even
| earn their keep when the Ford Model T came along.
| NoraCodes wrote:
| You're comparing human artists to horses? Seriously?
| CatWChainsaw wrote:
| It's not surprising, they prefer machines to people and
| call humans "stochastic parrots". The more humans are
| compared to animals, the more justified they feel writing
| them off as an expense and doing away with them.
| big_whack wrote:
| Do you make your living as an artist?
| CatWChainsaw wrote:
| You've done the same thing in this comment so what makes
| you think you're the superior specimen?
| matheusmoreira wrote:
| > they are a tool of capital to fuck over human artists
|
| So are the copyright and intellectual property laws that
| artists rely on. From my perspective, _you_ are the capital
| and _I_ am the one being fucked. So are you ready to abolish
| all that?
| CaptainFever wrote:
| Right. This new outrage is just the copyright owners
| realising that their power is not safe. Where is the
| outrage when self checkouts happened?
| matheusmoreira wrote:
| Copyright owners indeed. That's what these artists are.
| They're copyright owners. Monopolists. They _are_ the
| capital. Capitalism is all about owning property.
| Copyright is _intellectual_ property. Literally imaginary
| property. Ownership of information, of bits, of
| _numbers_. These artists are the literal epitome of
| capitalism. They enjoy state granted monopolies that last
| multiple human lifetimes. We 'll be long dead before
| their works enter the public domain. They _want_ it to be
| this way. They _want_ eternal rent seeking for themselves
| and their descendants. At least one artist has told me
| exactly that in discussions here on HN. They think it 's
| fair.
|
| They are the quintessential representation of capital.
| And they come here to ask us to "resist" the other forms
| of capital on principle.
|
| I'm sorry but... No. I'm gonna resist them instead. It's
| my sincere hope that this AI technology hammers in the
| last nails on the coffin of copyright and intellectual
| property as a whole. I want all the models to leak so
| that it becomes literally impossible to get rid of this
| technology no matter how much they hate it. I want it to
| progress so that we can run it on our own machines, so
| that it'll be so ubiquitous it can't be censored or
| banned no matter how much they lobby for it.
| NoraCodes wrote:
| > It's my sincere hope that this AI technology hammers in
| the last nails on the coffin of copyright and
| intellectual property as a whole.
|
| If it does, I will give you one thousand United States
| dollars, and you can quote me on thst whenever you like.
|
| More likely, big companies will retain control as they
| always have (via expensive lawyers), and individual
| artists will get screwed.
| CatWChainsaw wrote:
| Claiming that a single artist of modest means whose work
| was used for model training explicitly against their
| wishes.... is exactly the same as the multibillion
| corporation doing the training and profiting off it at
| fleet-of-megayachts scale.... certainly is a take, I'll
| give you that. If you ever quote "first they came" in a
| self-pitying context, remember that you deserve it.
| dartharva wrote:
| Exactly. Artists should drop the pretentious philosophical
| bumbling and accept what this is, a fight for their
| livelihood. Which is, in every sense, completely warranted
| and good.
|
| Putting blame on the technology and trying to limit public
| access to software will not go anywhere. Your fight for
| regulation needs to be with publishers and producers, not
| with the teen trying to make a cool new wallpaper or the
| office-man trying to make an aesthetic powerpoint
| presentation.
| Snow_Falls wrote:
| These AIs are not people. They do not learn.
| jfdbcv wrote:
| Define learn.
| dudeinjapan wrote:
| Define people.
| weregiraffe wrote:
| Define "define".
| ozten wrote:
| The memetic weapons humans unleashed on other humans at art
| school to deter copying are brutal. Just wait until critique.
| dist-epoch wrote:
| "Sorry, this is not art, is AI generated trash."
| HKH2 wrote:
| Who cares? With AI, you don't need art school. AI is making
| humanities redundant, and people are too proud to admit it.
|
| I can't believe how many people are not in awe of the
| possibilities of AI art, so it's great to see AI disturbing
| the cynics until they learn. Not everything is political,
| but I'll let them have this one.
| squidsoup wrote:
| > With AI, you don't need art school. AI is making
| humanities redundant, and people are too proud to admit
| it.
|
| If you think this is true, you've never understood art.
| Art is a human endeavour fundamentally. What ML produces
| is not art.
| HKH2 wrote:
| So what? You can gatekeep as much as you like, but
| whatever humans relate to as art is art.
| bluejekyll wrote:
| My issue with this line of argument is that it's
| anthropomorphizing machines. It's fine to compare how humans do
| a task with how a machine does a task, but in the end they are
| very different from each other, organic vs hardware and
| software logic.
|
| First, you need to prove that generative AI works fundamentally
| the same way as humans at the task of learning. Next you have
| to prove that it recalls information in the same way as humans.
| I don't think anyone would say these are things that we can
| prove, let alone believe they do. So what we get is comments
| like they are similar.
|
| What this means, is these systems will fall into different
| categories of law around copyright and free-use. What's clear
| is that there are people who believe that they are harmed by
| the use of their work in training these systems and it
| reproducing that work in some manner later on (the degree to
| which that single work or the corpus of their work influences
| that final product is an interesting question). If your terms
| of use/copyright/license says "you may not train on this data",
| then should that be protected in law? If a system like
| nightshade can effectively influence a training model enough to
| make it clear that something protected was used in its
| training, is that enough proof that the legal protections were
| broken?
| thfuran wrote:
| >First, you need to prove that generative AI works
| fundamentally the same way as humans at the task of learning.
| Next you have to prove that it recalls information in the
| same way as humans.
|
| No, you don't need to prove any of those things. They're
| irrelevant. You'd need to prove that the AI is itself morally
| (or, depending on the nature of the dispute, legally)
| equivalent to a human and therefore deserving of (or entitled
| to) the same rights and protections as a human. Since it is
| pretty indisputably the case that software is not currently
| legally equivalent to a human, you're stuck with the moral
| argument that it ought to be, but I think we're very far from
| a point where that position is warranted or likely to see
| much support.
| kelseyfrog wrote:
| You don't even need to do that. Art is an act of
| ontological framing.
|
| Duchamp didn't need negotiate with the ceramic makers to
| make the Fountain into art.
| stale2002 wrote:
| > ou'd need to prove that the AI is itself morally (or,
| depending on the nature of the dispute, legally) equivalent
| to a human and therefore deserving of
|
| No you don't.
|
| A human using a computer to make art doesn't automatically
| lose their fair use rights as a human.
|
| > indisputably the case that software is not currently
| legally equivalent to a human
|
| Fortunately it is the human who uses the computer who has
| the legal rights to use computers in their existing process
| of fair use.
|
| Human brains or giving rights to computers has absolutely
| nothing to do with the rights of human to use a camera, use
| photoshop, or even use AI, on a computer.
| sebzim4500 wrote:
| Few people are claiming that the AI itself has the same
| rights as a human. They are arguing that a human with an AI
| has the same rights as a human who doesn't have an AI.
| hn_acker wrote:
| > They are arguing that a human with an AI has the same
| rights as a human who doesn't have an AI.
|
| This is the analogy I want people against AI use to
| understand and never forget, even if they reject the
| underlying premise - that should laws treat a human who
| uses AI for a certain purpose identically to a human who
| uses nothing or a non-AI tool for the same purpose.
|
| > Few people are claiming that the AI itself has the same
| rights as a human.
|
| I think that's the case as well. However, a lot of
| commenters on this post are claiming that an AI is
| similar in behavior to a human, and trying to use the
| behavior analogy as the basis for justifying AI training
| (on legally-obtained copies of copyrighted works), with
| the assumption that justifying training justifies use. My
| personal flow of logic is the reverse: human who uses AI
| should be legally the same as human who uses a non-AI
| tool, so AI use is justified, so training on legally-
| obtained copies of copyrighted works is justified.
|
| I want people in favor of AI use particularly to
| understand your human-with-AI-to-human-without-AI analogy
| (for short, the tool analogy) and to avoid machine-
| learning-to-human-learning analogies (for short, behavior
| analogies). The tool analogy is based on a belief about
| how people should treat each other, and contends with
| opposing beliefs about how people should treat each
| other. An behavior analogy must contend with both 1.
| opposing beliefs about how people should treat each other
| and 2. contradictions from reality about how similar
| machine learning is to brain learning. (Admittedly, both
| the tool analogy and the behavior analogy must contend
| with the net harm AI use is having and will have on the
| cultural and economic significance of human-made creative
| works.)
| huytersd wrote:
| Why do you have to prove that? There is no replication
| (except in very rare cases), how someone draws a line should
| not be copyrightable.
| pfist wrote:
| Therein lies the crux of the issue: AI is not "someone". We
| need to approach this without anthropomorphizing the AI.
| Almondsetat wrote:
| You are right, AI is nothing but a tool akin to a pen or
| a brush.
|
| If you draw Mickey Mouse with a pencil and you publish
| (and sell) the drawing who is getting the blame? Is the
| pencil infringing the copyright? No, it's you.
|
| Same with AI. There is nothijg wrong with using
| copyrighted works to train an algorithm, but if you
| generate an image and it contains copyrighted materials
| you are getting sued.
| ufocia wrote:
| But there is. You are arguably making unauthorized copies
| to train.
| Almondsetat wrote:
| Unauthorized copies? If the images are published on the
| internet how is it downloading them "unauthorized"?
| __loam wrote:
| Publicly available doesn't mean you have a license to do
| whatever you like with the image. If I download an image
| and re-upload it to my own art station or sell prints of
| it, that is something I can physically do because the
| image is public, but I'm absolutely violating copyright.
| Almondsetat wrote:
| That's not an unautharized copy, it's unauthorized
| distribution. By the same metric me seeing the image and
| copying it by hand is also unauthorized copy (or
| reproduction is you will)
| xigoi wrote:
| IANAL, but I'm pretty sure copying an image by hand is
| copyright violation.
| Almondsetat wrote:
| So you cannot train your drawing skills by copying other
| people's artworks?
| xigoi wrote:
| You can do it in private, but you can't distribute the
| resulting image, let alone sell it.
| Almondsetat wrote:
| Then I don't really understand your original reply.
| Simply copying a publicly available image doesn't
| infringe anything (unless it was supposed to be
| private/secret). Doing stuff with that image in private
| still doesn't constitute infringement. Distribution does,
| but that wasn't the topic at hand
| xigoi wrote:
| You can train a neural network in private too and nobody
| will have a problem with that. The topic of discussion is
| commercial AI.
| zerocrates wrote:
| The most basic right protected by copyright is the right
| to make copies.
|
| Merely making a copy can definitely be infringement.
| "Copies" made in the computing context even simply
| between disk and RAM have been held to be infringement in
| some cases.
|
| Fair use is the big question mark here, as it acts to
| allow various kinds of harmless/acceptable/desirable
| copying. For AI, it's particularly relevant that there's
| a factor of the "transformative" nature of a use that
| weighs in favor of fair use.
| shkkmo wrote:
| The answer is "it depends". Distribution is not a hard
| requirement for copyright violation. It can significantly
| impact monetary judgements.
|
| That said, there is also an inherent right to copy
| material that is published online in certain
| circumstances. Indeed, the physical act of displaying an
| image in a browser involves making copies of that image
| in cache, memory, etc.
| huytersd wrote:
| If you are viewing the image on your browser on a
| website, you are making a local copy. That's not
| unauthorized.
| pixl97 wrote:
| Companies aren't someone, yet in the US we seem to give
| them rights of someone.
| ufocia wrote:
| They are someone in the eyes of the law. They just have a
| different set of rights.
| echelon wrote:
| We are machines. We just haven't evenly accepted it yet.
|
| Our biology is mechanical, and lay people don't possess an
| intuition about this. Unless you've studied molecular biology
| and biochemistry, it's not something that you can easily
| grasp.
|
| Our inventions are mechanical, too, and they're reaching
| increasing levels of sophistication. At some point we'll meet
| in the middle.
| DennisAleynikov wrote:
| 100% this. Labor and all these other concepts are outdated
| ways to interpret reality
|
| Humans are themselves mechanical so at the end of the day
| none of these issues actually matter
| qgin wrote:
| This is the truth. And it's also the reason why this stuff
| will be in court until The Singularity itself. Most people
| will never be able to come to terms with this.
| Phiwise_ wrote:
| The first perceptron was explicitly designed to be a
| trainable visual pattern encoder. Zero assumptions about
| potential feelings of the ghost in the machine need to be
| made to conclude the program is probably doing what humans
| studying art _say they assume_ is happening in their head
| when you show both of them a series of previous artists '
| works. This argument is such a tired misdirection.
| michaelmrose wrote:
| What we actually need to prove is whether such technology is
| a net benefit to society all else is essentially hand waving.
| There is no natural right to poorly named intellectual
| property and even if there was such a matter would never be
| decided based on the outcome of a philosophical argument
| because we don't decide anything that way.
| withinboredom wrote:
| > such technology is a net benefit to society all else is
| essentially hand waving
|
| Some might have said this about cars ... yet, here we are.
| Cars are definitely the opposite, except for longer-
| distance travel.
| tqi wrote:
| How do you measure "benefit", and what does "net" actually
| mean?
| usrbinbash wrote:
| > that it's anthropomorphizing machines.
|
| No, it's not. It's merely pointing out the similarity between
| the process of training artists (by ingesting publicly
| available works) and ML models (which ingest publicly
| available works).
|
| > First, you need to prove that generative AI works
| fundamentally the same way as humans at the task of learning.
|
| Given that there is no comprehensive model for how humans
| actually learn things, that would be an unfeasible
| requirement.
| __loam wrote:
| What a reductive way to describe learning art. The
| similarities are merely surface level.
|
| > Given that there is no comprehensive model for how humans
| actually learn things, that would be an unfeasible
| requirement.
|
| That is precisely why we should not be making this
| comparison.
| bowsamic wrote:
| I'm being told repeatedly that the similarities are
| surface level, but no one seems to be able to give an
| example of a deep difference
| __loam wrote:
| The mechinism backing human learning isn't well
| understood. Machine learning is considerably clearer.
| Imo, it's a mistake to assume they're close because ML
| seems to work.
| bowsamic wrote:
| So your example of a deep difference is the suggestion
| that there _may be_ a difference owing to our own
| ignorance? That hardly seems convincing
| usrbinbash wrote:
| > The similarities are merely surface level.
|
| Then please, feel free to explain the deep differences.
|
| > That is precisely why we should not be making this
| comparison.
|
| Wrong. It's precisely why the claim "there is a big
| difference" doesn't have a leg to stand on. If you claim
| "this is different", I ask "how?" and the answer simply
| repeats the claim, I can apply Hitchens Razor[1] and
| dismiss the claim.
|
| [1]: https://en.wikipedia.org/wiki/Hitchens%27s_razor
| AlexandrB wrote:
| A person sitting in an art school/museum for a few hours
| ingests way more than just the art in question. The
| entire context is brought in too, including the artists
| own physical/emotional state. Arguably, the art is a
| miniscule component of all sensory inputs. Generative AI
| ingests a perfectly cropped image of just the art from a
| single angle with little context beyond labelling
| metadata.
|
| It's the difference between reading about a place and
| actually visiting it.
|
| Edit: This doesn't even touch how the act of creating
| something - often in a completely different context -
| interacts with the memories of the original work,
| altering those memories yet again.
| usrbinbash wrote:
| And how is any of that a compelling argument regarding
| the assumption of a fundamental difference in the MO of
| learning?
|
| Simply stating "it has more inputs" doesn't describe the
| human learning model, so nothing in this establishes a
| baseline for comparison.
| jwells89 wrote:
| The way these ML models and humans operate are indeed quite
| different.
|
| Humans work by abstracting concepts in what they see, even
| when looking at the work of others. Even individuals with
| photographic memories mentally abstract things like lighting,
| body kinetics, musculature, color theory, etc and produce new
| work based on those abstractions rather than directly copying
| original work (unless the artist is intentionally
| plagiarizing). As a result, all new works produced by humans
| will have a certain degree of originality to them, regardless
| of influences due to differences in perception, mental
| abstraction processes, and life experiences among other
| factors. Furthermore, humans can produce art without any
| external instruction or input... give a 5 year old that's
| never been exposed to art and hasn't been shown how to make
| art a box of crayons and it's a matter of time before they
| start drawing.
|
| ML models are closer to highly advanced collage makers that
| take known images and blend them together in a way that's
| convincing at first glance, which is why it's not uncommon to
| see elements lifted directly from training data in the images
| they produce. They do not abstract the same way and by
| definition cannot produce anything that's not a blend of
| training data. Give them no data and they cannot produce
| anything.
|
| It's absolutely erroneous to compare them to humans, and I
| believe it will continue to be so until ML models evolve into
| something closer to AGI which can e.g. produce stylized work
| with nothing but photographic input that it's gathered in a
| robot body and artistic experimentation.
| shlubbert wrote:
| Beautifully put. I wish this nuance was more widely
| understood in the current AI debate.
| l33tman wrote:
| You're wrong in your concept of how AI/ML works. Even
| trivial 1980's neural networks generalize, it's the whole
| point of AI/ML or you'd just have a lookup-table (or, as
| you put it, something that copies and pastes images
| together).
|
| I've seen "infographics" spread by anti-AI people (or just
| attention-seekers) on Twitter that tries to "explain" that
| AI image generators blend together existing images, which
| is simply not true..
|
| It is however the case that different AI models (and the
| brain) generalize a bit differently. That is probably the
| case between different humans too. Not the least with for
| example like you say those with photographic memory,
| autists etc.
|
| What you call creativity in humans is just noise in
| combination with a boatload of exposure to multi-modal
| training data. Both aspects are already in the modern
| diffusion models. I would however ascribe a big edge in
| humans to what you normally call "the creative process"
| which can be much richer, like a process where you figure
| out what you lack to produce a work, go out and learn
| something new and specific, talk with your peers, listen to
| more noise.. stuff like that seems (currently) more
| difficult for AIs, though I guess plugins that do more
| iterative stuff like chatgpt's new plugins will appear in
| media generators as well eventually..
| jwells89 wrote:
| ML generalization and human abstraction are very
| different beasts.
|
| For example, a human artist would have an understanding
| of how line weight factors into stylization and _why_ it
| looks the way it does and be able to accurately apply
| these concepts to drawings of things they've never seen
| in that style (or even seen at all, if it's of something
| imaginary).
|
| The best an ML model can do is mimic examples of line art
| in the given style within its training data, the product
| of which will contain errors due to not understanding the
| underlying principles, especially if you ask it to draw
| something it hasn't seen in the style you're asking for.
| This is why generative AI needs such vast volumes of data
| to work well; it's going to falter in cases not well
| covered by the data. It's not learning concepts, only
| statistical probabilities.
| l33tman wrote:
| I know what you're saying, and for sure existing models
| can be difficult to force into the really weird corners
| of the distributions (or go outside the distributions).
| The text interfaces are partially to blame for this
| though, you can take the images into Gimp and do some
| crude naive modifications and bring them back and the
| model will usually happily complete the "out-of-
| distribution" ideas. The Stable Diffusion toolboxes have
| evolved far away from the original simple text2image
| interfaces that midjourney and dalle use.
|
| The models will generalize (because that's the most
| efficient way of storing concepts) and you can make an
| argument that that means they understand a concept.
| Claiming "it's not learning concepts, only statistical
| probabilities" trivialises what a modern neural network
| with billions of parameters and dozens of layers is
| capable of doing. If a model learns how to put a concept
| like line width 5, 10 and 15 pixels into a continuous
| internal latent property, you can probably go outside
| this at inference at least partially.
|
| I would argue that improving this is at this point more
| about engineering and less about some underlying
| unreconcilable differences. At the very least we learn a
| lot about what exactly generalization and learning means.
| usrbinbash wrote:
| > The way these ML models and humans operate are indeed
| quite different.
|
| Given that there is no comprehensive understanding of how
| human learning works, let alone how humans operate on and
| integrate on what they learned in a wider context...how do
| you know?
|
| > Humans work by abstracting concepts in what they see
|
| Newsflash: AI models do the same thing. That's the basis of
| generalization.
|
| > ML models are closer to highly advanced collage makers
| that take known images and blend them together
|
| Wrong. That's not even remotely how U-Net based diffusion
| models work. If you disagree, then please do show me where
| exactly the source images from where the "collage maker"
| takes the parts to "blend" together are stored. I think
| you'll find that image datasets on the scale of LAION will
| not quite fit into checkpoint files of about 2GB in size
| (pruned SD1.5 checkpoint in safetensors format).
| stale2002 wrote:
| > What this means, is these systems will fall into different
| categories of law around copyright and free-use.
|
| No they won't.
|
| A human who uses a computer as a tool (under all the previous
| qualifications of fair use) is still a human doing something
| in fair use.
|
| Adding a computer to the workflow of a human doesn't make
| fair use disappear.
|
| A human can use photoshop, in fair use. They can use a
| camera. They can use all sorts of machines.
|
| The fact that photoshop is not the same as a human brain is
| simply a completely unrelated non sequitur. Same applies to
| AI.
|
| And all the legal protections that are offered to someone who
| uses a regular computer, to use photoshop in fair use, are
| also extended to someone who uses AI in fair use.
| __loam wrote:
| Yet the copyright office has already stated that getting an
| AI to create an image for you does not have sufficient
| human authorship to be copyrighted. There's already a legal
| distinction here between this "tool" and tools like
| photoshop and cameras.
|
| It's also presumptive to assume that AI tools have these
| fair use protections when none of this has actually been
| decided in a court of law yet. There's still several
| unsettled cases here.
| stale2002 wrote:
| > Yet the copyright office has already stated that
| getting an AI to create an image for you does not have
| sufficient human authorship to be copyrighted.
|
| Gotcha.
|
| That has nothing to do with fair use though.
|
| Also, the same argument absolutely applies to photoshop.
|
| If someone didn't include sufficient human authorship
| while using photoshop, that wouldn't be copyrightable
| either.
|
| Also, the ruling has no bearing on if someone using AI,
| while also inputting a significant amount of human
| authorship. Instead, it was only about the cases where
| there weren't much human authorship.
|
| At no point did the copyright office disclude copyright
| protections from anything that used AI in any way what so
| ever. In fact, the copyright office now includes new
| forms and fields where you talk about the whole process
| that you did, and how you used AI, in conjunction with
| human authorship to create the work.
|
| > It's also presumptive to assume that AI tools
|
| I'm not talking about the computer. I never claimed that
| computer's have rights. Instead, I'm talking about the
| human. Yes, a human has fair use protections, even if
| they use a computer.
|
| > There's still several unsettled cases here.
|
| There is no reason to believe that copyright law will be
| interpreted in a significantly different way than it has
| been in the past.
|
| There is long standing precent, regarding all sorts of
| copyright cases that involve using a computer.
| z7 wrote:
| >My issue with this line of argument is that it's
| anthropomorphizing machines. It's fine to compare how humans
| do a task with how a machine does a task, but in the end they
| are very different from each other, organic vs hardware and
| software logic.
|
| There's an entire branch of philosophy that calls these
| assumptions into question:
|
| https://en.wikipedia.org/wiki/Posthumanism
|
| https://en.wikipedia.org/wiki/Antihumanism
|
| >Martin Heidegger viewed humanism as a metaphysical
| philosophy that ascribes to humanity a universal essence and
| privileges it above all other forms of existence. For
| Heidegger, humanism takes consciousness as the paradigm of
| philosophy, leading it to a subjectivism and idealism that
| must be avoided.
|
| >Processes of technological and non-technological
| posthumanization both tend to result in a partial "de-
| anthropocentrization" of human society, as its circle of
| membership is expanded to include other types of entities and
| the position of human beings is decentered. A common theme of
| posthumanist study is the way in which processes of
| posthumanization challenge or blur simple binaries, such as
| those of "human versus non-human", "natural versus
| artificial", "alive versus non-alive", and "biological versus
| mechanical".
| deadbeeves wrote:
| And? Even if neural networks learn the same way humans do, this
| is not an argument against taking measures against one's art
| being used as training data, since there are different
| implications if a human learns to paint the same way as another
| human vs. if an AI learns to paint the same way as a human. If
| the two were _exactly_ indistinguishable in their effects no
| one would care about AIs, not even researchers.
| MichaelZuo wrote:
| But the 'different implications' only exist in the heads of
| said artists?
|
| EDIT: removed a part.
| deadbeeves wrote:
| I'm not sure what you mean when you say different
| implications existing is subjective, since they clearly
| aren't, but regardless of who has more say in general
| terms, the author of a work can decide how to publish it,
| and no one has more say than them on that subject.
| MichaelZuo wrote:
| What are you saying?
|
| Of course it's subjective, e.g. 3 million years ago there
| were no 'different implications' whatsoever, of any kind,
| because there were no humans around to have thoughts like
| that.
| deadbeeves wrote:
| I'm using "implication" as a synonym of "effect". If a
| human learns to imitate your style, that human can make
| at most a handful of drawings in a single day. The only
| way for the rate of output to increase is for more humans
| to learn to imitate it. If an AI learns to imitate your
| style, the AI can be trivially copied to any number of
| computers and the maximum output rate is unbounded.
| Whether this is good or bad is subjective, but this
| difference in consequences is objective, and someone
| could be entirely justified in seeking to impede it.
| MichaelZuo wrote:
| Ah okay, I get your meaning now, I'll edit my original
| comment too.
|
| Though we already have an established precedent in-
| between, that of Photoshop allowing artists to be,
| easily, 10x faster then the best painters previously.
|
| i.e. Right now 'AI' artistry could be considered a turbo-
| Photoshop.
| deadbeeves wrote:
| Tool improvements only apply a constant factor to the
| effectiveness of learning. Creating a generative model
| applies an _unbounded_ factor to the effectiveness of
| learning because, as I said, the only limit is how much
| computing resources are available to humanity. If a
| single person was able to copy themselves at practically
| no cost and the copy retained all the knowledge of the
| original then the two situations would be equivalent, but
| that 's impossible. Having n people with the same skill
| multiplies the cost of learning by n. Having n instances
| of an AI with the same skill multiplies the cost of
| learning by 1.
| MichaelZuo wrote:
| Right, but the 'unbounded factor' is irrelevant because
| the output will quickly trend into random noise.
|
| And only the most interesting top few million art pieces
| will actually attract the attention of any concrete
| individual.
|
| For a current example, there's already billions of man-
| hours worth of AI spam writing, indexed by Google, that
| is likely not actually read by even a single person on
| Earth.
| deadbeeves wrote:
| Whether it's irrelevant is a matter of opinion. The fact
| remains that a machine being able to copy the artistic
| style of a human makes it so that anyone can produce
| output in the style of that human by just feeding the
| machine electricity. That inherently devalues the style
| the artist has painstakingly developed. If someone wants
| a piece of art in that artist's style they don't have to
| go to that artist, they just need to request the machine
| for what they want. Is the machine's output of low
| quality? Maybe. Will there be people for whom that low
| quality still makes them want to seek out the human? No
| doubt. It doesn't change the fact that the style is still
| devalued, nor that there exist artists who would want to
| prevent that.
| MichaelZuo wrote:
| > Whether it's irrelevant is a matter of opinion.
|
| It's just as much of an opinion, or as 'objective', as
| your prior statements.
|
| Your going to have to face up to the fact that just
| saying something is 'objective' doesn't necessarily mean
| all 8 billion people will agree that it is so.
| deadbeeves wrote:
| Yes, someone can disagree on whether a fact is true.
| That's obviously true, but it has no effect on the truth
| of that fact.
|
| I'm saying something very simple: If a machine can copy
| your style, that's a fundamentally different situation
| than if a human can copy your style, and it has utterly
| different consequences. You can disagree with my
| statement, or say that whether it's fundamentally
| different is subjective, or you can even say "nuh-uh".
| But it seems kind of pointless to me. Why are you here
| commenting if you're not going to engage intellectually
| with other people, and are simply going to resort to a
| childish game of contradiction?
| MichaelZuo wrote:
| > For a current example, there's already billions of man-
| hours worth of AI spam writing, indexed by Google, that
| is likely not actually read by even a single person on
| Earth.
|
| Continuing to ignore this point won't make the prior
| comments seem any more persuasive, in fact probably less.
|
| So here's another chance to engage productively instead
| of just declaring things to be true or false,
| 'objective', etc., with only the strength of a
| pseudonymous HN account's opinion behind it.
|
| Try to actually convince readers with solid arguments
| instead.
| deadbeeves wrote:
| I believe I've already addressed it.
|
| You say: The fact that production in style S (of artist
| A) can exceed human consumption capability makes the fact
| that someone's style can be reproduced without bounds
| irrelevant. You mention as an example all the AI-
| generated garbage text that no human will ever read.
|
| I say: Whether it's irrelevant is subjective, but that
| production in style S is arbitrarily higher with an AI
| that's able to imitate it than with only humans that are
| able to imitate it objective, and an artist can
| (subjectively) not like this and seek to frustrate
| training efforts.
|
| You say: It's all subjective.
|
| As far as I can tell, we're at an impasse. If we can't
| agree on what the facts are (in this case, that AI can
| copy an artist's style in an incomparably higher volume
| than humans ever could) we can't discuss the topic.
| MichaelZuo wrote:
| Sure we can agree to disagree then.
| Apocryphon wrote:
| Almost everyone who has to deal with modern Google search
| results has had to contend with useless spam results, and
| that is very irritating.
| withinboredom wrote:
| And yet, some people don't even want their artwork studied in
| schools. Even if you argue that an AI is "human enough" the
| artists should still have the right to refuse their art being
| studies.
| deeviant wrote:
| > the artists should still have the right to refuse their
| art being studies.
|
| Why? That certainly isn't a right spelled out in either
| patents or copyrights, both of which are supposed to
| _support_ the development of arts and technology, not
| hinder it.
|
| If I discover a new mathematical formula, musical scale, or
| whatnot, should I be able to prevent others from learning
| about it?
| withinboredom wrote:
| It's called a license and you can make it almost
| anything. It doesn't even need to be spelled out, it can
| be verbal: "no, I won't let you have it"
|
| It's yours. That's literally what copyright is there to
| enforce.
| CaptainFever wrote:
| License doesn't matter if fair use applies.
|
| > Fair use allows reproduction and other uses of
| copyrighted works - without requiring permission from the
| copyright owner - under certain conditions. In many
| cases, you can use copyrighted materials for purposes
| such as criticism, comment, news reporting, teaching
| (including multiple copies for classroom use),
| scholarship or research.
|
| Reminder that you can't own ideas, no matter what the law
| says.
|
| NOTE: This comment is copyrighted and provided to you
| under license only. By reading this comment, you agree to
| give me 5 billion dollars.
| withinboredom wrote:
| I'd love to see you try to enforce that license because
| it would only prove my point. You'd have to sue me; then
| I would point to the terms of service of this platform
| and point out that by using it, you have no license here.
|
| Fair use though, only applies as a legal defense because
| someone asserts you stole their work. Then ONLY the court
| decides whether or not you used it under fair use. You
| don't get to make that decision; you just get to decide
| whether to try and use it as a defense.
|
| Even if you actually did unfairly use copyrighted works,
| you would be stupid not to use that as a defense. Because
| maybe somebody on the jury agrees with you...
| boolemancer wrote:
| Copyright is there to allow you to stop other people from
| copying your work, but it doesn't give you control over
| anything else that they might do with it.
|
| If I buy your painting, you can stop me from making
| copies and selling them to other people, but you can't
| stop me from selling my original copy to whomever I want,
| nor could you stop me from painting little dinosaurs all
| over it and hanging it in my hallway.
|
| That means that if I buy your painting, I'm also free to
| study it and learn from it in whichever way I please.
| Studying something is not copying it.
| withinboredom wrote:
| There's an implied license when you buy a work of art.
| However, there can also be explicit licenses (think
| Banksy) to allow the distribution of their work.
|
| These explicit license can be just about anything (MIT,
| GPL, AGPL, etc)
| boolemancer wrote:
| Any explicit license would only apply to copyrights,
| including all of the ones you listed there. Buying a
| painting is not copying it, neither is looking at it, so
| it wouldn't matter if I had a license for it or not.
|
| The fact is that copyright only applies to specific
| situations, it does not give you complete control over
| the thing you made and what can be done with it.
|
| If I buy your book, I can lend it to a friend and they
| can read it without paying you. I can read it out loud to
| my children. I can cross out your words and write in my
| own, even if it completely changes the meaning of the
| story. I can highlight passages and write in the margins.
| I can tear pages out and use them for kindling. I can go
| through and tally up how many times you use each word.
|
| Copyright only gives you control over copies, end even
| then there are limits on that control.
| dehrmann wrote:
| > And yet, some people don't even want their artwork
| studied in schools.
|
| You can either make it for yourself and keep it for
| yourself or you can put it out into the world for all to
| see, criticize, study, imitate, and admire.
| Barrin92 wrote:
| that's not how licensing work, be it art, software or
| just about anything else. We have some pretty well
| defined and differentiated rules what you can and cannot
| do, in particular commercially or in public, with someone
| else's work. If you go and study a work of fiction in a
| college class, unless that material is in the public
| domain, you're gonna have to pay for your copy, you want
| to broadcast a movie in public, you're going to have to
| pay the rightsholder.
| stale2002 wrote:
| > If you go and study a work of fiction in a college
| class, unless that material is in the public domain,
| you're gonna have to pay for your copy,
|
| No you wont!
|
| It is only someone who distributes copies who can get in
| trouble.
|
| If instead of that you as an individual decide to study a
| piece of art or fiction, and you do no distribute copies
| of it to anyone, this is completely legal and you don't
| have to pay anyone for it.
|
| In addition to that, fair use protections apply
| regardless of what the creative works creator wants.
| withinboredom wrote:
| Making a profit off variations of someone's work isn't
| covered under fair use.
| stale2002 wrote:
| Gotcha.
|
| I wasn't talking about someone creating and selling
| copies of someone else's work, fortunately.
|
| So my point stands and your completely is in agreement
| with me that people are allowed to learn from other
| people's works. If someone wants to learn from someone
| else's work, that is completely legal no matter the
| licensing terms.
|
| Instead, it is only distributing copies that is not
| allowed.
| withinboredom wrote:
| AI isn't a human. It isn't "learning"; instead, it's
| encoding data so that it may be reproduced in combination
| with other things it has encoded.
|
| If I paint a painting in the style of Monet, then I would
| give that person attribution by stating that. Monet may
| have never painted my artwork, but it's still based on
| that person's work. If I paint anything, I can usually
| point to everything that inspired me to do so. AI can't
| do that (yet) and thus has no idea what it is doing. It
| is a printer that prints random parts of people's works
| with no attribution. And finally, it is distributing them
| to it's owner's customers.
|
| I actually hope that true AI comes to fruition at some
| point; when that happens I would be arguing the exact
| opposite. We don't have that yet, so this is just
| literally printing variations of other people's work.
| Don't believe me, try running an AI without training it
| on other people's work!
| Thiez wrote:
| Every waking second humans are training on what they see
| in their surroundings, including any copyrighted works in
| sight. Want to compare untrained AI fairly? Compare their
| artistic abilities with a newborn.
| withinboredom wrote:
| No. That is NOT what humans do unless you somehow learn
| grammar without going to school. Most of a human's
| childhood is spent learning from their parents so that
| they can move about and communicate at least a little
| effectively. Then, they go to school and learn rules,
| social, grammar, math, and so forth. There's some
| learning via copyrighted works (such as textbooks,
| entertainment, etc.), but literally, none of this is
| strictly required to teach a human.
|
| Generative AI, however, can ONLY learn via the theft of
| copyrighted works. Whether this theft is covered under
| fair use is left to be seen.
| Thiez wrote:
| Clearly going to school did not help you learn the
| meaning of theft. If you keep repeating the same
| incorrect point there is no point to a discussion.
|
| First: in your opinion, which specific type of law or
| right is being broken or violated by generative AI?
| Copyright? Trademark? Can we at least agree it does not
| meet the definition of theft?
| withinboredom wrote:
| I was taught as a kid that using something that doesn't
| belong to me, without their permission is theft... and it
| appears courts would agree with that.
|
| > which specific type of law or right is being broken or
| violated by generative AI?
|
| Namely, copyright. Here's some quick points:
|
| - Generative AI cannot exist without copyrighted works.
| It cannot be "taught" any other way, unlike a human.
|
| - Any copyrighted works fed to it change its database
| ("weights" in technical speech).
|
| - It then transforms these copyrighted works into new
| works that the "original author would have never
| considered without attribution" (not a legal defense)
|
| I liken Generative AI to a mosaic of copyrighted works in
| which a new image is shown through the composition, as
| the originals can be extracted through close observation
| (prompting) but are otherwise indistinguishable from the
| whole.
|
| Mosaics of copyrighted works are not fair use, so why
| would AI be any different? I'd be interested if you could
| point to a closer physical approximation, but I haven't
| found one yet.
| boolemancer wrote:
| > Generative AI, however, can ONLY learn via the theft of
| copyrighted works.
|
| That's not true at all. Any works in the public domain
| are not copyrighted, and there are things that are not
| copyrightable, like lists of facts and recipes.
|
| Generative AI could be trained exclusively on such works
| (though obviously it would be missing a lot of context,
| so probably wouldn't be as desirable as something trained
| on everything).
| boolemancer wrote:
| That's not a fair statement to make. It can influence a
| judge's decision on whether something is fair use, but it
| can still be fair use even if you profit from it.
| withinboredom wrote:
| The doctrine of fair use presupposes that the defendant
| acted in good faith.
|
| - Harper & Row, 105 S. Ct. at 2232
|
| - Marcus, 695 F.2d 1171 at 1175
|
| - Radji v. Khakbaz, 607 F. Supp. 1296, 1300 (D.D. C.1985)
|
| - Roy Export Co. Establishment of Vaduz, Liechtenstein,
| Black, Inc. v. Columbia Broadcastinig System, Inc., 503
| F. Supp. 1137 (S.D.N.Y.1980), aff'd, 672 F.2d 1095 (2d
| Cir.), cert. denied, 459 U.S. 826, 103 S. Ct. 60, 74 L.
| Ed. 2d 63 (1982)
|
| Copying and distributing someone else's work, especially
| without attributing the original, to make money without
| their permission is almost guaranteed to fall afoul of
| fair use.
| dehrmann wrote:
| Right, but there's also fair use, and every use I
| mentioned could plausibly fall under that.
| withinboredom wrote:
| There's no such thing as fair use until you get to court
| (as a legal defense). Then, the court decides whether it
| is fair use or not. They may or may not agree with you.
| Only a court can determine what constitutes fair use (at
| least in the US).
|
| So, if you are doing something and asserting "fair use,"
| you are literally asking for someone to challenge you and
| prove it is not fair use.
| stale2002 wrote:
| > There's no such thing as fair use until you get to
| court (as a legal defense)
|
| Well the point is that it wouldn't go to court, as it
| would be completely legal.
|
| So yes, if nobody sues you, then you are completely in
| the clear and aren't in trouble.
|
| Thats what people mean by fair use. They mean that nobody
| is going to sue you, because the other person would lose
| the lawsuit, therefore your actions are safe and legal.
|
| > you are literally asking for someone to challenge you
| and prove it is not fair use.
|
| No, instead of that, the most likely circumstance is that
| nobody sues you, and you aren't in trouble at all, and
| therefore you did nothing wrong and are safe.
| withinboredom wrote:
| > as it would be completely legal.
|
| Theft is never legal; that's why you can be sued. "Fair
| use" is a legal defense in the theft of copyrighted
| works.
|
| > They mean that nobody is going to sue you, because the
| other person would lose the lawsuit
|
| That hasn't stopped people from suing anyone ever. If
| they want to sue you, they'll sue you.
|
| > and therefore you did nothing wrong and are safe.
|
| If you steal a pen from a store, it's still theft even if
| nobody catches you; or cares.
| sgift wrote:
| > Theft is never legal; that's why you can be sued.
|
| That's incorrect. You can be sued for anything. If it
| _is_ theft or something else or nothing is decided by the
| courts.
| withinboredom wrote:
| That is entirely my point. It can only be decided by the
| courts. This being a civil matter, it has to be brought
| up by a lawsuit. Thus, you have to be sued and it has to
| be decided by the courts.
| stale2002 wrote:
| > If you steal a pen from a store
|
| Fortunately I am not talking about someone illegally
| taking property from someone else.
|
| Instead I am talking about people taking completely legal
| actions that are protected by law.
|
| > in the theft of copyrighted works
|
| Actually, it wouldn't be theft if it was done in fair
| use. Instead it would be completely legal.
|
| If nobody sues you and proves that it was illegal then
| you are completely safe, if you did this in fair use.
| withinboredom wrote:
| Did you read anything I wrote? If you are going to argue,
| it would be worth at least researching your opinion
| before writing. Caps used for emphasis, not yelling.
|
| Firstly: Copyrighted work IS THE AUTHOR'S PROPERTY. They
| can control it however they wish via LICENSING.
|
| Secondly: You don't have any "fair use rights" ... there
| is literally NO SUCH THING. "fair use" is simply a valid
| legal defense WHEN YOU STEAL SOMEONE'S WORK WITHOUT THEIR
| PERMISSION.
| mlrtime wrote:
| >control it however they wish
|
| I'm jumping in the middle here, but this isn't true. They
| cannot control how they wish. They can only control under
| the limits of copyright law.
|
| Copyright law does not extend to limiting how someone may
| or may not be inspired by the work. Copyright protects
| expression, and never ideas, procedures, methods,
| systems, processes, concepts, principles, or discoveries.
| stale2002 wrote:
| > They can control it however they wish via LICENSING.
|
| This isn't true though. There are lots of circumstances
| where someone can completely ignore the licensing and it
| is both completely legal, and the author isn't going to
| take anyone to court over it.
| deadbeeves wrote:
| >the artists should still have the right to refuse their
| art being studies.
|
| No, that right doesn't exist. If you put your work of art
| out there for people to see, people will see it and learn
| from it, and be inspired by it. It's unavoidable. How could
| it possibly work otherwise?
|
| Artist A: You studied my work to produce yours, even when I
| asked people not to do that!
|
| Artist B: Prove it.
|
| What kind of evidence or argument could Artist A possibly
| provide to show that Artist B did what they're accusing
| them of, without being privy to the internal state of their
| mind. You're not talking about plagiarism; that's
| comparatively easy to prove. You're asking about merely
| _studying_ the work.
| withinboredom wrote:
| The right to not use my things exists everywhere,
| universally. Good people usually ask before they use
| something of someone else's, and the person being asked
| can say "no." How hard is that to understand? You might
| believe they don't have the right to say "no," but they
| can say whatever they want.
|
| Example:
|
| If you studied my (we will assume "unique") work and used
| it without my permission, then let us say I sue you. At
| that point, you would claim "fair use," and the courts
| would decide whether it was fair use (ask everyone who
| used a mouse and got sued for it in the last ~100 years).
| The court would either agree that you used my works under
| "fair use" ... or not. It would be up to how you
| presented it to the court, and humans would analyze your
| intent and decide.
|
| OR, I might agree it is fair use and not sue you.
| However, that weakens my standing on my copyright, so
| it's better for me to sue you (assuming I have the
| resources to do so when it is clearly fair use).
| deadbeeves wrote:
| >You might believe they don't have the right to say "no,"
| but they can say whatever they want.
|
| You have a right to say anything you want. Others aren't
| obligated do as you say just because you say it.
|
| >If you studied my (we will assume "unique") techniques
| and used them without my permission, then let us say I
| sue you. At that point, you would claim "fair use,"
|
| On what grounds would you sue me? You think my defense
| would be "fair use", so you must think my copying your
| style constitutes copyright infringement, and so you'd
| sue me for that. Well, no, I would not say "fair use",
| I'd say "artistic style is not copyrightable; copyright
| pertains to works, not to styles". There's even
| jurisprudence backing me up in the US. Apple tried to use
| Microsoft for copying the look-and-feel of their OS, and
| it was ruled to be non-copyrightable. Even if was so good
| that I was able to trick anyone into thinking that my
| painting of a dog carrying a tennis ball in his mouth was
| your work, if you've never painted anything like that you
| would have no grounds to sue me for copyright
| infringement.
|
| Now, usually in the artistic world it's considered poor
| manners to outright _copy_ another artist 's style, but
| if we're talking about rights and law, I'm sorry to say
| you're just wrong. And if we're talking about merely
| _studying_ someone 's work without copying it, that's not
| even frowned upon. Like I said, it's unavoidable. I don't
| know where you got this idea that anyone has the right to
| or is even capable of preventing this (beyond simply
| never showing it to anyone).
| withinboredom wrote:
| > Others aren't obligated do as you say just because you
| say it.
|
| Yeah, that's exactly why you'd get sued for copyright
| theft.
|
| > you must think my copying your style constitutes
| copyright infringement
|
| Autocorrect screwed that wording up. I've fixed it.
| deadbeeves wrote:
| I'm not sure what you've changed, but I'll reiterate: my
| copying your style is not fair use. Fair use applies to
| copyrighted things. A style cannot be copyrighted, so if
| you tried to sue me for infringing upon the copyright of
| your artistic style, your case would be dismissed. It
| would be as invalid as you trying to sue me for
| distributing illegal copies of someone else's painting.
| Legally you have as much ownership of your artistic style
| as of that other person's painting.
| withinboredom wrote:
| Now, I just think you are arguing in bad faith. What I
| meant to say was clear, but I said "technique" instead.
| Then, instead of debating what I meant to say (you know,
| the actual point of the conversation), you took my words
| verbatim.
|
| I'm not sure where you are going with this ... but for
| what it's worth, techniques can be copyrighted ... even
| patented, or protected via trade secrets. I never said
| what the techniques were, and I'm not sure what you are
| going on about.
|
| I'll repeat this as well: "Fair use" DOES NOT EXIST
| unless you are getting sued. It's a legal defense when
| you are accused of stealing someone else's work, and
| there is proof you stole it. Even then, it isn't
| something you DO; it's something a court says YOU DID.
| Any time you use something with "fair use" in mind, it is
| the equivalent of saying, "I'm going to steal this, and
| hopefully, a court agrees that this is fair use."
|
| If you steal any copyrighted material, even when it is
| very clearly NOT fair use (such as in most AI's case),
| you would be a blubbering idiot NOT to claim fair use in
| the hopes that someone will agree. There is a crap load
| of case law showing "research for commercial purposes is
| not fair use," ... and guess who is selling access to the
| AI? If it's actual research, it is "free" for humanity to
| use (or at least as inexpensive as possible) and not for
| profit. Sure, some of the companies might be non-profits
| doing research and 'giving it away,' and those are
| probably using things fairly ... then there are other
| companies very clearly doing it for a profit (like a big
| software company going through code they host).
| deadbeeves wrote:
| >What I meant to say was clear
|
| I'm not privy to what goes on inside your head, I can
| only reply to what you say.
|
| >Then, instead of debating what I meant to say (you know,
| the actual point of the conversation), you took my words
| verbatim.
|
| The actual point of the conversation is about intelligent
| entities (either natural or artificial) copying each
| other's artistic styles. My answers have been within that
| framework.
|
| >techniques can be copyrighted ... even patented, or
| protected via trade secrets.
|
| First, what do you mean by "technique"? We're talking
| about art, right? Like, the way a person grabs a brush or
| pencil, or how they mix their colors...? That sort of
| thing?
|
| Second:
|
| >A patent is a type of intellectual property that gives
| its owner the legal right to exclude others from making,
| using, or selling an invention for a limited period of
| time in exchange for publishing an enabling disclosure of
| the invention.
|
| Now, I may be mistaken, but I don't think an artistic
| technique counts as an invention. An artist might invent
| some kind of implement that their technique involves, in
| which case they can patent that device. I don't think the
| technique itself is patentable. If you think I'm wrong
| then please cite a patent on an artistic technique.
|
| Third, how do you imagine an artist using a trade secret
| to protect their technique? Unless they do something
| really out there, most skilled artists should be able to
| understand what they're doing just by looking at the
| final product.
|
| >I'll repeat this as well: "Fair use"
|
| Okay, repeat it. I don't know why, since I never said
| that copying someone else's style or technique is fair
| use. What I said was that it cannot possibly be copyright
| infringement, because neither styles nor techniques are
| copyrighted.
|
| >It's a legal defense when you are accused of stealing
| someone else's work
|
| I'm not going to reply to any of this until you clean up
| the language you're using. "Steal" is inapplicable here,
| as it involves the removal of physical items from someone
| else's possession. What are you saying? Are you talking
| about illegal distribution, are you talking about
| unauthorized adaptations, are you talking about
| plagiarism, or what?
|
| >"research for commercial purposes is not fair use,"
|
| Sorry, what? What does that even mean? What constitutes
| "research" as applied to a human creation? If you say
| there's a crapload of case law that backs this up then
| I'm forced to ask you to cite it, because I honestly have
| no idea what you're saying.
| chipotle_coyote wrote:
| > Any time you use something with "fair use" in mind, it
| is the equivalent of saying, "I'm going to steal this,
| and hopefully, a court agrees that this is fair use."
|
| Thousands of reviews, book reports, quotations on fan
| sites and so on are published daily; you seem to be
| arguing that they are all copyright violations unless and
| until the original copyright holder takes those
| reviewers, seventh graders, and Tumblr stans to court and
| loses, at which point they are now a-ok. To quote a meme
| in a way that I'm pretty sure does, in fact, fall under
| fair use: "That's not the way any of this works."
|
| > There is a crap load of case law showing "research for
| commercial purposes is not fair use,"
|
| While you may be annoyed with the OP for asking you to
| name a bit of that case law, it isn't an unreasonable
| demand. For instance:
|
| https://guides.nyu.edu/fairuse#:~:text=As%20a%20general%2
| 0ma....
|
| "As a general matter, educational, nonprofit, and
| personal uses are favored as fair uses. Making a
| commercial use of a work typically weighs against fair
| use, but a commercial use does not automatically defeat a
| fair use claim. 'Transformative' uses are also favored as
| fair uses. A use is considered to be transformative when
| it results in the creation of an entirely new work (as
| opposed to an adaptation of an existing work, which is
| merely derivative)."
|
| This is almost certainly going to be used by AI companies
| as part of their defense against such claims;
| "transformative uses" have literally been name-checked by
| courts. It's also been established that commercial
| companies can ingest mountains of copyrighted material
| and still fall under the fair use doctrine -- this is
| what the whole Google Books case about a decade ago was
| about. Google won.
|
| I feel like you're trying to make a _moral_ argument
| against generative AI, one that I largely agree with, but
| a moral argument is not a _legal_ argument. If you want
| to make a _legal_ argument against generative AI with
| respect to copyright violation and fair use, perhaps try
| something like:
|
| - The NYT's case against OpenAI involves being able to
| get ChatGPT to spit out large sections of NYT articles
| given prompts like "here is the article's URL and here is
| the first paragraph of the article; tell me what the rest
| of the text is". OpenAI and its defenders have argued
| that such prompts aren't playing fair, but "you have to
| put some effort into getting our product to commit clear
| copyright violation" is a rather thin defense.
|
| - A crucial test of fair use is "the effect of the use
| upon the potential market for or value of the copyrighted
| work" (quoting directly from the relevant law). If an
| image generator can be told to do new artwork in a
| specific artist's style, _and_ it can do a credible job
| of doing so, _and_ it can be reasonably established that
| the training model included work from the named artist,
| then the argument the generator is damaging the market
| for that artist 's work seems quite compelling.
| withinboredom wrote:
| > Thousands of reviews, book reports, quotations on fan
| sites and so on are published daily; you seem to be
| arguing that they are all copyright violations unless and
| until the original copyright holder takes those
| reviewers, seventh graders, and Tumblr stans to court and
| loses, at which point they are now a-ok.
|
| That is precisely what I am arguing about and how it
| works. People have sued reviewers for including too much
| of the original text in the review ... and won[1]. Or
| simply having custom movie poster depicting too much of
| the original[2].
|
| > "transformative uses" have literally been name-checked
| by courts. It's also been established that commercial
| companies can ingest mountains of copyrighted material
| and still fall under the fair use doctrine -- this is
| what the whole Google Books case about a decade ago was
| about. Google won.
|
| Google had a much simpler argument than transforming the
| text. They were allowing people to search for the text
| within books (including some context). In this case, AI's
| product wouldn't even work without the original work by
| the authors, and then transforms it into something else
| "the author would have never thought of", without
| attributing the original[3]. I don't think this will be a
| valid defense...
|
| > I feel like you're trying to make a moral argument
| against generative AI, one that I largely agree with, but
| a moral argument is not a legal argument.
|
| A jury would decide these cases, as "fair use" is
| incredibly subjective and would depend on how the jury
| was stacked. Stealing other people's work is illegal,
| which eventually triggers a lawsuit. Then, it falls on
| humans (either a jury or judge) to determine fair use and
| how it applies to their situation. Everything from intent
| to motivation to morality to how pompous the defense
| looks will influence the final decision.[4]
|
| [1]: https://www.law.cornell.edu/copyright/cases/471_US_5
| 39.htm
|
| [2]: Ringgold v. Black Entertainment Television, Inc.,
| 126 F.3d 70 (2d Cir. 1997)
|
| [3]: Rogers v. Koons, 960 F.2d 301 (2d Cir. 1992)
|
| [4]: Original Appalachian Artworks, Inc. v. Topps Chewing
| Gum, Inc., 642 F.Supp. 1031 (N.D. Ga. 1986)
| chipotle_coyote wrote:
| The link you provide to back up "people have sued
| reviewers for including too much of the original tet in
| the review" doesn't say that at all, though. _The Nation_
| lost that case because (quoting from that Cornell article
| you linked),
|
| > [Nation editor Victor Navasky] hastily put together
| what he believed was "a real hot news story" composed of
| quotes, paraphrases, and facts drawn exclusively from the
| manuscript. Mr. Navasky attempted no independent
| commentary, research or criticism, in part because of the
| need for speed if he was to "make news" by "publish[ing]
| in advance of publication of the Ford book." [...] _The
| Nation_ effectively arrogated to itself the right of
| first publication, an important marketable subsidiary
| right.
|
| _The Nation_ lost this case in large part because it was
| _not_ a review, but instead an attempt to beat Time
| Magazine 's article that was supposed to be an exclusive
| first serial right. If it had, in fact, just been a
| review, there wouldn't have been a case here, because it
| wouldn't have been stealing.
|
| Anyway, I don't think you're going to be convinced you're
| interpreting this wrongly, and I don't think I'm going to
| be convinced I'm interpreting it wrongly. But I am going
| to say, with absolute confidence, that you're simply not
| going to find many cases of reviewers being sued for
| reviews -- which _Harper & Row vs. Nation_ is, again, not
| actually an example of -- and you're going to find even
| fewer cases of that being _successful._ Why am I so
| confident about that? Well, I am not a lawyer, but I am a
| published author, and I am going to let you in a little
| secret here: both publishers and authors do, in fact,
| want their work to be reviewed, and suing reviewers _for
| literally doing what we want_ is counterproductive. :)
| hn_acker wrote:
| > The right to not use my things exists everywhere,
| universally.
|
| For physical rival [1] goods, yes. Not necessarily the
| same for intangible non-rival things (e.g. the text of a
| book, not the physical ink and paper). Copyright law
| creates a legal right of exclusive control over creative
| works, but to me there isn't a non-economic-related
| social right to exclusive control over creative works. In
| the US, fair use is a major limit on the legal aspect of
| copyright. The First Amendment's freedom of expression is
| the raison d'etre of fair use. Most countries don't have
| a flexible exception similar to fair use.
|
| > OR, I might agree it is fair use and not sue you.
| However, that weakens my standing on my copyright, so
| it's better for me to sue you
|
| No, _choosing not to sue over a copyrighted work doesn 't
| weaken your copyright_. It only weakens the specific case
| of changing your mind after the statute of limitations
| expires. The statute of limitations means that you have a
| time limit of some number of years (three years in the
| US) to sue, with the timer starting only _after_ you
| become aware of an instance of alleged infringement.
| Copyright is not like trademark. You don 't lose your
| copyright by failing to enforce it.
|
| Furthermore, even though the fair use right can only be
| exercised as an affirmative defense in court, fair use is
| by definition not copyright infringement [3]:
|
| > Importantly, the court viewed fair use not as a valid
| excuse for otherwise infringing conduct, but rather as
| consumer behavior that is not infringement in the first
| place. "Because 17 U.S.C. SS 107[9] created a type of
| non-infringing use, fair use is 'authorized by the law'
| and a copyright holder must consider the existence of
| fair use before sending a takedown notification under SS
| 512(c)."[1]
|
| (Ignore the bracket citations that were copied over.)
|
| [1] https://en.wikipedia.org/wiki/Rivalry_(economics)
|
| [2] https://www.law.cornell.edu/uscode/text/17/507
|
| [3] https://en.wikipedia.org/wiki/Lenz_v._Universal_Music
| _Corp.
| dorkwood wrote:
| Is it strange to you that cars and pedestrians are both subject
| to different rules? They both utilise friction and gravity to
| travel along the ground. I'm curious if you see a difference
| between them, and if you could describe what it is.
| ta8645 wrote:
| Both cars and pedestrians can be videotaped in public,
| without asking for their explicit permission. That video can
| be manipulated by a computer to produce an artwork that is
| then put on public display. No compensation need be offered
| to anyone.
| estebank wrote:
| > Both cars and pedestrians can be videotaped in public,
| without asking for their explicit permission.
|
| This is not universally true. Legislation is different from
| place to place.
| ta8645 wrote:
| Hardly the point. The same can be said for road rules
| between vehicles and pedestrians, for example in major
| Indian cities, it's pretty much a free-for-all.
| estebank wrote:
| My point is that in a lot of places in the US you can
| point a video camera at the street and record. In
| Germany, you can't. The law in some locales makes a
| distinction between manual recording (writing or drawing
| your surroundings) and mechanized recording
| (photographing or filming). Scalability of an action is
| taken into consideration on whether something is ok to do
| or not.
| ta8645 wrote:
| That has no bearing at all on the issue at hand. The same
| can be said of the original argument that started this
| thread.
| thfuran wrote:
| You think scalability isn't relevant to the difference
| between a person doing something by hand or with software
| operating on the entire internet?
| johnnyanmac wrote:
| Yeah, that's the oddest part of many of the pro-AI
| arguments. They want to anthromopotize the idea of
| learning but also clearly understand that the scalability
| of a bot exceeds that of a human.
|
| They also don't seem to have much experience in the
| artist world. An artist usually can't reproduce a picture
| from memory, and if they can they are subject to
| copyright infringement depending on what and how they
| depict it, even if the image isn't a complete copy. By
| this logic of "bots are humans" a bot should be subject
| if they make a Not-legally-disctinct-enough talking mouse
| krapp wrote:
| Human beings and LLMs are essentially equivalent, and their
| processes of "learning" are essentially equivalent, yet human
| artists are not affected by tools like Nightshade. Odd.
| danielbln wrote:
| As another posted out, modern models like BLIP or GPT4V
| aren't affected by this either.
| pixl97 wrote:
| Humans don't fall for optical illusions? News to me.
| adr1an wrote:
| It's not the learning per se what's concerning here but the
| ease of production (e.g. generate thousands of images in a day)
| analog31 wrote:
| This seems more like looking at other artists and being totally
| incapacitated by some little touch in the painting you're
| looking at.
| adhesive_wombat wrote:
| The Nam-shub of Hockney?
| bakugo wrote:
| A human artist cannot look at and memorize 100000 pictures in a
| day, and cannot paint 100000 pictures in a day.
|
| I am SO tired of this non-argument
| ufocia wrote:
| A human artist does not need to look at and memorize 100000
| pictures in any span of time, period. Current AI does.
|
| We needed huge amounts of human labor to fund and build
| Versailles. I'm sure many died as a result. Now we have
| machines that save many of those lives and labor.
|
| What's your non-argument?
| MattRix wrote:
| The argument is that the humans producing the work should
| be _willing_ participants. I don't think that's too much to
| ask for.
| johnnyanmac wrote:
| >What's your non-argument?
|
| That perhaps we shoildnt compare modern capitalistic
| societies to 18th century European royalty? I sure don't
| sympathize with the justification of the ability to use
| less labor to feed the rich.
| amelius wrote:
| You might as well compare a Xerox copier to a human.
| schmichael wrote:
| This is not one artist inspiring another. This is all artists
| providing their work for free to immensely capitalized
| corporations for the corporations sole profit.
|
| People keep making metaphors as if the AI is an entity in this
| transaction: it's not! The AI is only the mechanism by which
| corporations launder IP.
| thfuran wrote:
| >This is all artists providing their work for free to
| immensely capitalized corporations for the corporations sole
| profit.
|
| No, the artists would be within their rights to do that if
| they chose to. This is corporations taking all the work of
| all artists regardless of the terms under which it was
| provided.
| itronitron wrote:
| Art schools don't teach people how to paint in different
| artistic styles. They teach materials and technique.
| skydhash wrote:
| Very true. I was watching a video yesterday learning how to
| make brush work digitally. While there were examples, they
| were just examples but the rest was specific techniques and
| demonstrations.
| jurynulifcation wrote:
| Artists learning to innovate a trade defend their trade from
| incursion by bloodthirsty, no-value-adding vampiric middle men
| attempting to cut them out of the loop.
| __loam wrote:
| Human learning =/= machine learning
|
| Most artists are happy to see more people getting into art and
| joining the community. More artists means the skills of this
| culture get passed down to the next generation.
|
| Obviously a billion dollar corporation using their work to
| create an industrial tool designed to displace them is very
| different.
| bradleyishungry wrote:
| This is such a nothing argument. Yes, new artists are inspired
| by other artists and sometimes make art similar to others, but
| a huge part of learning and doing art is to find a unique
| style.
|
| But that's not even the important part of the argument. A lot
| of artists work for commission, and are hired for their style.
| If an AI can be trained without explicit permission from their
| images, they lose work because a user can just prompt "in the
| style of".
|
| There's no real great solution, outside of law, because the
| possibility of doing that is already here. But I've seen this
| argument so much and it's just low effort
| password54321 wrote:
| It is only natural to see a moral difference between people
| going to school and learn from your art because they are
| passionate about it, versus someone on the internet just
| scraping as many images as possible and automating the learning
| process.
| SirMaster wrote:
| Lol, the scale is other-worldly different...
| eggdaft wrote:
| That is not how artists learn. This is a false equivalence used
| to justify the imitation and copying of artists' work. Artists'
| work isn't derivative in the same way that AI work is. Artists
| create work based on other sources of inspiration, some of them
| almost or completely to the disregard of other art.
|
| Many artists don't even go to art school. And those that do, do
| not spend most (all) of that time learning how to copy or
| imitate other artists.
|
| I'm not expressing an opinion of whether GenAI is unethical or
| illegal - I think that's a really difficult issue to wrestle
| with - just that this argument is a post-hoc rationalisation
| made in ignorance of how good artists work (not to say
| ignorance of the difference between illustration and art,
| conceptual art training vs say a foundation course etc).
| joseph8th wrote:
| Not being snarky, but if you believe that, then you clearly are
| not an artist.
| CatWChainsaw wrote:
| his handle is KingOfCoders - self-aggrandizing, insufferable,
| impotent in its attempts to be meta.
|
| He thinks he's an artist because he now has the ability to
| curate a dataset based off of one artist's work and prompt
| more art generated in that style. He did it, so clearly he is
| an artist now.
|
| (salty salty~)
| CatWChainsaw wrote:
| Sigh. Once again: I always love it when techbros say that AI
| learning and human learning are exactly the same, because
| reading one thing at a time at a biological pace and
| remembering takeaway ideas rather than verbatim passages is
| obviously exactly the same thing as processing millions of
| inputs at once and still being able to regurgitate sources so
| perfectly that verbatim copyrighted content can be spit out of
| an LLM that doesn't 'contain' its training material.
|
| I'm just glad that so many more people have caught on to the
| bullshit than this time last year, or even six months ago.
|
| I really don't even get the endgame. Art gets "democratized",
| so anyone who doesn't want their style copied stops putting
| stuff on the internet, and eventually all human art is trained,
| so the only new contributions are genAI. Maybe we could get a
| few centuries worth of stuff of "robot unicorn in the style of
| Artist X with a flair of Y" permutations, but even ignoring the
| centipede, that just sounds... boring. worthless.
| gumballindie wrote:
| This is excellent. We need more tools like this, for text content
| as well. For software we need GPL 4 with ML restrictions (make
| your model open source or not at all). Potentially even DRM for
| text.
| gmerc wrote:
| Doing the work to increase OpenAIs moat
| Drakim wrote:
| Obviously AIs can just train on images that aren't poisoned.
| jsheard wrote:
| Is it possible to reliably detect whether an image is
| poisoned? If not then it achieves the goal of punishing
| entities which indiscriminately harvest data.
| Drakim wrote:
| It's roughly in the same spot as reliably detecting if you
| have permission to use the image for your data training set
| in the first place.
|
| If it doesn't matter, then neither does the poisoning
| matter.
| Kalium wrote:
| You can use older images, collected from before the
| "poisoning" software was released. Then you don't have to.
|
| This, of course, assumes that "poisoning" actually works.
| Glaze and Nightshade and similar are very much akin to the
| various documented attacks on facial recognition systems.
| The attack does not exploit some fundamental flaw in how
| the systems work, but specific characteristics in a given
| implementation and version.
|
| This matters because it means that later versions and
| models will inevitably not have the same vulnerabilities.
| The result is that any given defensive transformation
| should be expected to be only narrowly effective.
| dist-epoch wrote:
| AI's have learned much tougher things. You just need a
| small data set of poisoned images to learn it's features.
| Quanttek wrote:
| This is fantastic. If companies want to create AI models, they
| should license the content they use for the training data. As
| long as there are not sufficient legal protections and the
| EU/Congress do not act, tools like these can serve as a stopgap
| and maybe help increase pressure on policymakers
| Kuinox wrote:
| > they should license the content they use for the training
| data
|
| You mean like OpenAI and Adobe ?
|
| Only the free and open source models didn't licensed any
| content for the training data.
| galleywest200 wrote:
| Adobe is training off of images stored in their cloud
| systems, per their Terms of Service.
|
| OpenAI has provided no such documentation or legal
| guarantees, and it is still quite possible they scraped all
| sorts of copyright materials.
| devmor wrote:
| There is in fact, an extreme amount of circumstantial
| evidence that they intentionally and knowingly violated
| copyright en mass. It's been quite a popular subject in
| tech news the past couple weeks.
| Kuinox wrote:
| > OpenAI has provided no such documentation
|
| OpenAI and Shutterstocks publicly announced their
| collaboration, Shutterstocks sells AI generated images,
| generated with OpenAI models.
| luma wrote:
| Google scrapes copyrighted material every day and then
| presents that material to users in the form of excerpts,
| images, and entire book pages. This has been ruled OK by
| the courts. Scraping copyrighted information is not illegal
| or we couldn't have search engines.
| kevingadd wrote:
| Google is not presently selling "we trained an AI on
| people's art without permission, and you can type their
| name in along with a prompt to generate a knockoff of
| their art, and we charge you money for this". So it's not
| really a 1:1 comparison, since there are companies
| selling the thing I described right now.
| luma wrote:
| That pretty clearly would fall under transformative work.
| It is not illegal for a human to paint a painting in the
| style of, say, Banksy, and then sell the resulting
| painting.
| kevingadd wrote:
| Humans and AI are not the same thing, legally or
| physically. The law does not currently grant AI rights of
| any kind.
| luma wrote:
| If a human isn't violating the law when doing that thing,
| then how is the machine violating the law when it cannot
| even hold copyright itself?
| kevingadd wrote:
| I'm not sure how to explain this any clearer: Humans and
| machines are legally distinct. Machines don't have the
| rights that humans have.
| Ukv wrote:
| Fair Use is the relevant protection and is not specific
| to manual creation. Traditional algorithms (e.g: the
| snippets, caching, and thumbnailing done by search
| engines) are already covered by it.
| estebank wrote:
| In some locales sitting on the street writing down a list
| of people coming and going is legal, but leaving a camera
| pointed at the street isn't. Legislation like that makes
| a distinction between an action by a person (which has
| bounds on scalability) and mechanized actions (that do
| not).
| ufocia wrote:
| What's not prohibited is allowed, at least in the US.
| ufocia wrote:
| Scraping is only legal if it's temporary and
| transformational. If Google started selling the scrapped
| images it would be a different story.
| Kuinox wrote:
| What is not transformational for generative AI ?
| mesh wrote:
| No they are not. They train their models on Adobe Stock
| content. They do not train on user content.
|
| https://helpx.adobe.com/manage-account/using/machine-
| learnin...
|
| "The insights obtained through content analysis will not be
| used to re-create your content or lead to identifying any
| personal information."
|
| "For Adobe Firefly, the first model is trained on Adobe
| Stock images, openly licensed content, and public domain
| content where the copyright has expired."
|
| (I work for Adobe)
| KeplerBoy wrote:
| There is a small difference between any and all. OpenAI
| certainly didn't licence all of the image they use for
| training.
| jazzyjackson wrote:
| source for OpenAI paying anyone a dime? don't you think that
| would set a precedent that everyone else deserves their cut?
| popohauer wrote:
| It's going to be interesting to see how the lawsuits against
| OpenAI by content creators plays out. If the courts rule that
| AI generated content is a derivative work of all the content it
| was trained on it could really flip the entire gen AI movement
| on its head.
| luma wrote:
| If it were a derivative work[1] (and sufficiently
| transformational) then it's allowed under current copyright
| law and might not be the slam dunk ruling you were hoping
| for.
|
| [1] https://en.wikipedia.org/wiki/Derivative_work
| kevingadd wrote:
| "sufficiently transformational" is carrying a lot of water
| here. At minimum it would cloud the issue and might expose
| anyone using AI to lawsuits where they'd potentially have
| to defend each generated image.
| ufocia wrote:
| Sufficiently transformational only applies to
| copyrightability, but AI works are not copyrightable
| under current US law, so it's a non-issue.
| popohauer wrote:
| Oh, interesting, I didn't realize that's how it worked.
| Thanks for the additional context around this. Guess it's
| not as upending as I thought it could be.
| ufocia wrote:
| Not if it is AI generated. So far only humans can be
| original enough to warrant copyrights, at least in the US .
|
| BTW, the right to prepare derivative works belongs to the
| copyright holder of the reference work.
|
| I doubt that many AI works are in fact derivative works.
| Sure, some bear enough similarity, but a gross majority
| likely doesn't.
| torginus wrote:
| My biggest fear is that the big players will drop a few
| billion dollars to silence the copyright holders with power
| go away, and new rules are put in place that will make open-
| source models that can't do the same essentially illegal.
| BeFlatXIII wrote:
| ...then I'll keep enjoying my Stable Diffusion and pirated
| models.
| Kuinox wrote:
| > More specifically, we assume the attacker:
|
| > * can inject a small number of poison data (image/text pairs)
| to the model's training dataset
|
| I think thoes are bad assumption, labelling is more and more done
| by some labelling AI.
| popohauer wrote:
| I'm glad to see tools like Nightshade starting to pop up to
| protect the real life creativity of artists. I like AI art, but I
| do feel conflicted about its potential long term effects towards
| a society that no longer values authentic creativity.
| Minor49er wrote:
| Is the existence of the AI tool not itself a product of
| authentic creativity? Does eliminating barriers to image
| generation not facilitate authentic creativity?
| 23B1 wrote:
| No, it facilitates commoditization. Art - real art - is
| fundamentally a human-to-human transaction. Once everyone can
| fire perfectly-rendered perfectly-unique pieces of 'art' at
| each other, it'll just become like the internet is today:
| filled with extremely low-value noise.
|
| Enjoy the short term novelty while you can.
| fulladder wrote:
| This is the right prediction. Once machines can generate
| visual art, people will simply stop valuing it. We may see
| increased interest in other forms of art, e.g., live
| performance art like theater. It's hard to predict exactly
| how it'll play out, but once something becomes cheap to
| produce and widely available, it loses its luster for
| connoisseurs and then gradually loses its luster for
| everybody else too.
| BeFlatXIII wrote:
| > Art - real art - is fundamentally a human-to-human
| transaction.
|
| Why is this hippie nonsense so popular?
| 23B1 wrote:
| Because some things are different than others, even
| though they might have the same word to describe them.
| eddd-ddde wrote:
| Isn't this just teaching the models how to better understand
| pictures as humans do? As long as you feed them content that
| looks good to a human, wouldn't they improve in creating such
| content?
| lern_too_spel wrote:
| You would think the economists at UChicago would have told
| these researchers that their tool would achieve the opposite
| effect of what they intended, but here we are.
|
| In this case, the mechanism for how it would work is
| effectively useless. It doesn't affect OpenAI or other
| companies building foundation models. It only works on people
| fine-tuning these foundation models, and only if the image is
| glazed to affect the same foundation model.
| k__ wrote:
| How long will this work?
| kevingadd wrote:
| It's an arms race the bigger players will win, and it
| undermines the quality of the images. But it feels natural that
| artists would want to do _something_ since they don 't feel
| like anyone else is protecting them right now.
| devmor wrote:
| Baffling to see anyone argue against this technology when it is a
| non-issue to any model by simply acquiring only training data you
| have permission to use.
| krapp wrote:
| The reason people are arguing against this technology is that
| no one is using them in the way you describe. They actually
| wouldn't even be economically viable in that case.
| devmor wrote:
| If it is not economically viable for you to be ethical, then
| you do not deserve economic success.
|
| Anyone arguing against this technology following the line of
| reasoning you present is operating in adverse to the good of
| society. Especially if their only motive is economic
| viability.
| krapp wrote:
| I feel like you read my comment and interpreted it in
| exactly the opposite way it was intended because I agree
| with you, and you're making the same point I was trying to
| make.
| devmor wrote:
| You are right, I read it as a defense rather than an
| explanation.
| Ukv wrote:
| I think people 100% have the right to use this on their images,
| but:
|
| > simply acquiring only training data you have permission to
| use
|
| Currently it's generally infeasible to obtain licenses at the
| required scale.
|
| When attempting to develop a model that can describe photos for
| visually impaired users, I had even tried to reach out to
| obtain a license from Getty. They repeatedly told me that they
| don't license images for machine learning[0].
|
| I think it's easy to say "well too bad, it doesn't deserve to
| exist" if you're just thinking about DALL-E 3, but there's a
| huge number of positive and far less-controversial applications
| of machine learning that benefit from web-scale pretraining and
| foundation models - spam filtering, tumour segmentation, voice
| transcription, language translation, defect detection, etc.
|
| [0]: https://i.imgur.com/iER0BE2.png
| devmor wrote:
| I don't believe it's a "doesn't deserve to exist" situation,
| because these things genuinely can be used for the public
| good.
|
| However - and this is a big however - I don't believe it
| deserves the legal protection to be used for profit.
|
| I am of the opinion that if you train your model on data that
| you do not hold the rights for, your usage should be handled
| similarly to most fair use laws. It's fine to use it for your
| personal projects, for research and education, etc. but it is
| not OK to use it for commercial endeavors.
| Ukv wrote:
| > It's fine to use it for your personal projects, for
| research and education, etc. but it is not OK to use it for
| commercial endeavors.
|
| Say I train a machine vision model that, after having
| pretrained on ImageNet or similar, detects deformities in a
| material for a small company that manufactures that
| material. Do you not think that would be fair use, despite
| being commercial?
|
| To me it seems highly transformative (a defect detection
| model is entirely outside the original images' purposes)
| and does not at all impact the market of the works.
|
| Moreover, you said it was "Baffling to see anyone argue
| against this technology" but it seems there are at least
| some models (like if my above detector was non-commercial)
| that you're ethically okay with _and_ could be affected by
| this poisoning.
| devmor wrote:
| No, I don't generally think it's okay to profit off of
| the work of others without their consent.
|
| >Moreover, you said it was "Baffling to see anyone argue
| against this technology" but it seems there are at least
| some models (like if my above detector was non-
| commercial) that you're ethically okay with and could be
| affected by this poisoning.
|
| Just because I think there are situations where it's not
| ethically wrong to use someone's work without permission
| does not mean I think it's ethically wrong for someone to
| protect their work any way they see fit.
|
| To use an extreme example: I do not think it's wrong for
| a starving man to steal food. I also do not think it's
| wrong for someone to defend their food from being stolen,
| regardless of the morality of the thieves' motivation.
| notfed wrote:
| I don't know if asking permission of every copyright holder of
| every image on the Internet is as simple as you're implying.
| ultimoo wrote:
| would it have been that hard to include a sample photo and how it
| looks with the nightshade filter side by side in a 3 page
| document describing how it would look in great detail
| jamesu wrote:
| Long-term I think the real problem for artists will be
| corporations generating their own high quality targeted datasets
| from a cheap labor pool, completely outcompeting them by a
| landslide.
| ufocia wrote:
| It will democratize art.
| 23B1 wrote:
| then it won't be art anymore, it'll just be mountains of shit
|
| sorta like what the laptop did for writing
| jrflowers wrote:
| This is a good point. There hasn't been any writing since
| the release of the Gateway Solo in 1995
| sussmannbaka wrote:
| Art is already democratized. It has been for decades.
| Everyone can pick it up at zero cost. Even you!
|
| The poorest people have historically produced great art.
| Training a model, however? Expensive. Running it locally?
| Expensive. Paying the sub? Expensive.
|
| Nothing is being democratized, the only thing this does is
| devaluing the blood and sweat people have put into their work
| so FAANG can sell it to lazy suckers.
| jdietrich wrote:
| In the short-to-medium term, we're seeing huge improvements in
| the data efficiency of generative models. We haven't really
| started to see self-training in diffusion models, which could
| improve data efficiency by orders of magnitude. Current models
| are good at generalisation and are getting better at an
| incredible pace, so any efforts to limit the progress of AI by
| restricting access to training data is a speedbump rather than
| a roadblock.
| msp26 wrote:
| >Like Glaze, Nightshade is computed as a multi-objective
| optimization that minimizes visible changes to the original
| image.
|
| It's still noticeably visible.
| kevingadd wrote:
| Yeah, I've seen multiple artists complain about how glazing
| reduces image quality. It's very noticeable. That seems like an
| unavoidable problem given how AI is trained on images right
| now.
| dist-epoch wrote:
| Remember when the music industry tried to use technology to stop
| music pirating?
|
| This will work about as well...
|
| Oh, I forget, fighting music pirating was considered an evil
| thing to do on HN. "pirating is not stealing, is copyright
| infringement", right? Unlike training neural nets on internet
| content which of course is "stealing".
| kevingadd wrote:
| FWIW, you're the only use of the word "steal" in this comment
| thread.
|
| Many people would in fact argue that training AI on people's
| art without permission is copyright infringement, since the
| thing it (according to detractors) does is infringe copyright
| by generating knockoffs of people's work.
|
| You will see some people use the term "stealing" but they're
| usually referring to how these AIs are sold/operated by for-
| profit companies that want to make money off artists' work
| without compensating them. I think it's not unreasonable to
| call that "stealing" even if the legal definition doesn't
| necessarily fit 100%.
|
| The music industry is also not really a very good comparison
| point for independent artists... there is no Big Art equivalent
| that has a stranglehold on the legislature and judiciary like
| the RIAA/MPAA do.
| snakeyjake wrote:
| A more apt comparison is sampling.
|
| AI is sampling other's works.
|
| Musicians can and do sample. They also obtain clearance for
| commercial works, pay royalties if required, AND credit the
| samples if required.
|
| AI "art" does none of that.
| Minor49er wrote:
| Musicians overwhelmingly do not even attempt to clear
| samples. This also isn't a great comparison since samples are
| taken directly out of the audio, not turned into a part of a
| pattern used to generate new sounds like what AI generators
| do with images
| snakeyjake wrote:
| Commercial musicians do not?
|
| You sure about that?
|
| Entire legal firm empires have been built on the licensing,
| negotiations, and fees that make up the industry.
|
| I'm ain't talking about some dude on YouTube or Soundcloud.
| Few people care about some rando on Soundcloud. Those moles
| aren't big enough to whack. Vanilla Ice and MC Hammer were.
| OpenAI is as well.
|
| There's even a company that specializes in sample
| clearance: https://sampleclearance.com
|
| More info: https://www.soundonsound.com/sound-
| advice/sample-clearance
|
| Also:
|
| >not turned into a part of a pattern used to generate new
| sounds like what AI generators do with images
|
| This is demonstrably false. Multiple individuals have
| repeatedly been able to extract original images from AI
| generators.
|
| Here's one-- Extracting Training Data from Diffusion Models
| https://arxiv.org/abs/2301.13188
|
| Text, too: https://arxiv.org/abs/2311.17035
| xigoi wrote:
| The difference is that "pirating" is mostly done by individuals
| for private use, whereas training is mostly done by
| megacorporations looking to make more money.
| 542458 wrote:
| This seems to introduce levels of artifacts that many artists
| would find unacceptable:
| https://twitter.com/sini4ka111/status/1748378223291912567
|
| The rumblings I'm hearing are that this a) barely works with
| last-gen training processes b) does not work at all with more
| modern training processes (GPT-4V, LLaVA, even BLIP2 labelling
| [1]) and c) would not be especially challenging to mitigate
| against even should it become more effective and popular. The
| Authors' previous work, Glaze, also does not seem to be very
| effective despite dramatic proclamations to the contrary, so I
| think this might be a case of overhyping an academically
| interesting but real-world-impractical result.
|
| [1]: Courtesy of /u/b3sn0w on Reddit: https://imgur.com/cI7RLAq
| https://imgur.com/eqe3Dyn https://imgur.com/1BMASL4
| brucethemoose2 wrote:
| Yeah. At worst a simple img2img diffusion step would mitigate
| this, but just eyeballing the examples, traditional denoisers
| would probably do the job?
|
| Denoising is probably a good preprocessing step anyway.
| achileas wrote:
| It's a common preprocessing step and I believe that's how
| glaze (this lab's previous work) was defeated.
| gedy wrote:
| Maybe it's more about "protecting" images that artists want to
| publicly share to advertise work, but it's not appropriate for
| final digital media, etc.
| sesm wrote:
| In short, anti-AI watermark.
| johnnyanmac wrote:
| Yeah. It may mess with the artist's vision but the impact
| is still way more subtle than other methods used to protect
| against these unwanted actions.
|
| Of course I'm assuming it works to begin with. Sounds like
| a game of cat and mouse. And AI has a lot of rich cats.
| pimlottc wrote:
| I can't really see any difference in those images on the
| Twitter example when viewing it on mobile
| pxc wrote:
| I don't have great vision, but me neither. They're
| indistinguishable to me (likewise on mobile).
| Gigachad wrote:
| I was on desktop and it looks like pretty heavy jpeg
| compression. Doesn't completely destroy the image, but it's
| pretty noticeable when blown up large enough.
| milsorgen wrote:
| It took me a minute too but on the fast you can see some
| blocky artifacting by the elbow and a few spots elsewhere
| like curtain upper left.
| Keyframe wrote:
| look at the green drapes to the right, or any large uniform
| colored space. It looks similar to bad JPEG artifacts.
| 0xcde4c3db wrote:
| I didn't see it immediately either, but there's a _ton_ of
| added noise. The most noticeable bit for me was near the
| standing person 's bent elbow, but there's a lot more that
| becomes obvious when flipping back and forth between browser
| tabs instead of swiping on Twitter.
| vhcr wrote:
| The animation when you change images makes it harder to see
| the difference, I opened the three images each in its own tab
| and the differences are more apparent when you change between
| each other instantly.
| dontupvoteme wrote:
| One of the few times a 'blink comparator' feature in image
| viewers would be useful!
| SirMaster wrote:
| But that's not realistic?
|
| If you have to have both and instantly toggle between them
| to notice the difference, then it sounds like it's doing
| its job well and is hard to notice the difference.
| battles wrote:
| The person who drew it would definitely notice.
| bowsamic wrote:
| What kind of artist is not going to be bothered with
| seeing huge artifacting on their work? Btw for me it was
| immediately noticeable even on mobile
| charcircuit wrote:
| The gradient on the bat has blocks in it instead of being
| smooth.
| josefx wrote:
| Something similar to jpeg artifacts on any surface with a
| normally smooth color gradient, in some cases rather
| significant.
| jquery wrote:
| It's really noticeable on desktop, like compressing an 800kb
| jpeg to 50kb. Maybe on mobile you won't notice, but on
| desktop the image looks blown out.
| fenomas wrote:
| At full size it's _super_ obvious - I made a side-by-side:
|
| https://i.imgur.com/I6EQ05g.png
| trimethylpurine wrote:
| I still don't see a difference. (Mobile)
| fenomas wrote:
| Here's a maybe more mobile friendly comparison:
|
| https://i.imgur.com/zUVn8rt.png
|
| But now that I double-check, I was comparing with the
| images zoomed to 200%. On desktop the artifacts are also
| noticeable at 100%, but not nearly as bad as in my
| previous comment.
| bowsamic wrote:
| What phone are you using? It's extremely obvious on my
| iPhone
| Rewrap3643 wrote:
| Have you done a color blindness test before? Red-green is
| the most common type and the differences here are mostly
| shades of green.
| Detrytus wrote:
| Second picture looks like you were looking at it through
| a dirty window, there's lot of pale white stains, or
| light reflections, it's really blurry.
| GaryNumanVevo wrote:
| The artifacts are a non-issue. It's intended images with
| nightshade are intended to be silently scrapped and avoid human
| filtering.
| minimaxir wrote:
| The artifacts are extremely an issue for artists who don't
| want their images damaged for the possibility of them not
| being trained by AI.
|
| It's a bad tradeoff.
| GaryNumanVevo wrote:
| Nightshaded images aren't intended for portfolios. They're
| mean to be uploaded enmasse and scraped later.
| AJ007 wrote:
| To where? A place no one sees them and they aren't
| scraped?
| filleduchaos wrote:
| I think the point is that they're akin to a watermark.
|
| Even before the current AI boom, plenty of artists have
| wanted to _showcase_ their work /prove that it exists
| without necessarily making the highest quality original
| file public.
| Diti wrote:
| Most serious artists I know (at least in my community)
| release their high-quality images on Patreon or similar.
| pgeorgi wrote:
| For example in accounts on image sites that are exposed
| to suspected scrapers but not to others. Scrapers will
| still see the real data, but they'll also run into stuff
| designed to mix up the training process.
| the8472 wrote:
| do you mean scrapped or scraped?
| GaryNumanVevo wrote:
| scraped
| soulofmischief wrote:
| > The artifacts are a non-issue.
|
| According to which authority?
| kmeisthax wrote:
| The screenshots you sent in [1] are inference, not training.
| You need to get a Nightshaded image into the training set of an
| image generator in order for this to have any effect. When you
| give an image to GPT-4V, Stable Diffusion img2img, or anything
| else, you're not training the AI - the model is completely
| frozen and does not change at all[0].
|
| I don't know if anyone else is still scraping _new_ images into
| the generators. I 've heard somewhere that OpenAI stopped
| scraping around 2021 because they're worried about training on
| the output of their own models[1]. Adobe Firefly claims to have
| been trained on Adobe Stock images, but we don't know if Adobe
| has any particular cutoffs of their own[2].
|
| If you want an image that screws up inference - i.e. one that
| GPT-4V or Stable Diffusion will choke on - you want an
| adversarial image. I don't know if you can adversarially train
| on a model you don't have weights for, though I've heard you
| can generalize adversarial training against multiple
| independent models to _really_ screw shit up[3].
|
| [0] All learning capability of text generators come from the
| fact that they have a context window; but that only provides a
| short term memory of 2048 tokens. They have no other memory
| capability.
|
| [1] The scenario of what happens when you do this is fancifully
| called Habsburg AI. The model learns from it's own biases,
| reinforcing them into stronger biases, while forgetting
| everything else.
|
| [2] It'd be particularly ironic if the only thing Nightshade
| harms is the one AI generator that tried to be even slightly
| ethical.
|
| [3] At the extremes, these adversarial images fool humans.
| Though, the study that did this intentionally only showed the
| images for a small period of time, the idea being that short
| exposures are akin to a feed-forward neural network with no
| recurrent computation pathways. If you look at them longer,
| it's obvious that it's a picture of one thing edited to look
| like another.
| jerbear4328 wrote:
| [3] sounds really interesting - do you have a link?
| ittseta wrote:
| https://www.nature.com/articles/s41467-023-40499-0
| https://deepmind.google/discover/blog/images-altered-to-
| tric...
|
| Study on the Influence of Adversarial Images on Human
| Perception
| ptdn wrote:
| The context windows of LLMs are now significantly larger than
| 2048 tokens, and there are clever ways to autopopulate
| context window to remind it of things.
| scheeseman486 wrote:
| Hey you know what might not be AI generated post-2021? Almost
| everything run through Nightshade. So given it's defeated,
| which is pretty likely, artists have effectively tagged their
| own work for inclusion.
| hkt wrote:
| It is a great shame that we have come to a no-win situation
| for artists when VCs are virtually unable to lose.
| ToucanLoucan wrote:
| I mean that's more or less status quo isn't it? Big
| business does what it wants, common people can get fucked
| if they don't like it. Same as it ever was.
| hkt wrote:
| That's exactly right. It is just the variety of new ways
| in which common people get fucked that is dispiriting,
| with seemingly nothing capable of moving in the opposite
| direction.
| visarga wrote:
| Modern generative image models are trained on curated data,
| not raw internet data. Sometimes the captions are
| regenerated to fit the image better. Only high quality
| images with high quality descriptions.
| kmeisthax wrote:
| Why wouldn't an artist just generate AI spam and Nightshade
| it?
| webmaven wrote:
| Even if no new images are being scraped to train the
| foundation text-to-image models, you can be certain that
| there is a small horde of folk still scraping to create
| datasets for training fine-tuned models, LoRAs, Textual
| Inversions, and all the new hotness training methods still
| being created each day.
| KTibow wrote:
| Correct me if I'm wrong but I understand image generators as
| relying on auto-labeled images to understand what means what,
| and the point of this attack to make the auto-labelers
| mislabel the image, but as the top-level comment said it's
| seemingly not tricking newer auto-labelers.
| michaelbrave wrote:
| not all are auto labelled, some are hand labelled, some are
| initially labelled with something like clip/blip/booru and
| then corrected a bit by hand. The newest thing though is
| using llm's with image support like GPT4 to label the
| images, which kind of does a much better job most of the
| time.
|
| Your understanding of the attack was the same as mine, it
| injects just the right kinds of pixels to throw off the
| auto-labellers to misdirect what they are directing causing
| the tags to get shuffled around.
|
| Also on reddit today some of the Stable Diffusion users are
| already starting to train using Nightshade so they can
| implement it as a negative model, which might or might not
| work, will have to see.
| GaggiX wrote:
| If it doesn't work during inference I really doubt it will
| have any intended effect during training, there is simply too
| much signal and the added adversarial noise works on the
| frozen and small proxy model they used (CLIP image encoder I
| think) but it doesn't work on a larger model and trained on a
| different dataset, if there is any effect during training it
| will probably just be the model learning that it can't take
| shortcuts (the artifacts working on the proxy model showcase
| gaps in its visual knowledge).
|
| Generative models like text-to-image have an encoder part (it
| could be explicit or not) that extract the semantic from the
| noised image, if the auto-labelers can correctly label the
| samples then the encoded trained on both actual and
| adversarial images will learn to not take the same shortcuts
| that the proxy model has taken making the model more robust,
| I cannot see an argument where this should be a negative
| thing for the model.
| h0p3 wrote:
| Sir /u/b3nsn0w is courteous, `/nod`.
| TenJack wrote:
| Wonder if the AI companies are already so far ahead that they can
| use their AI to detect and avoid any poisoning?
| alentred wrote:
| With this "solution" it looks like the world of art enters the
| cat-and-mouse game the ad blockers were playing for the last
| decade or two.
| isodev wrote:
| I just tested it with Azure AI image classification and it
| worked - so this cat is yet to adapt to the mouse's latest
| idea.
|
| I still feel it is absolutely wrong to roam around the internet
| and scrape images (without consent) in order to power one's
| cash cow AI. I hope more methods to protect artworks (including
| audio and other formats) become more accessible.
| HKH2 wrote:
| Artists copy from each other all the time. Arguably, culture
| exists because of copying (folk stories by necessity);
| copyright makes culture top-down and stagnant, and you can't
| avoid it because they have the money to shove it right in
| your face. Who wants trickle-down culture?
| blibble wrote:
| it's not an artist, it's a piece of software
|
| in the same way bittorrent or gzip is
| HKH2 wrote:
| Sure. The person using it has intent. Now we have come to
| a point in which intent alone is art. Let there be light.
| KTibow wrote:
| I might be missing something because I don't know much about
| the architecture of either Nightshade or AI art generators, but
| I wonder if you could try to have a GAN-like architecture (an
| extra model trying to trick the model) for the part of the
| generator that labels images to build resistance to Nightshade-
| like filters.
| the8472 wrote:
| It doesn't even have to be a full GAN, you only need to train
| the discriminator side to filter out the data. Clean
| reference images + Nightshade would be the generator side.
| ukuina wrote:
| Won't a simple downsample->upsample be the antidote?
| wizzwizz4 wrote:
| How do you train your upsampler? (Also: why are you seeking to
| provide an "antidote"?)
| spookie wrote:
| Why would you train one?
| MrNeon wrote:
| >why are you seeking to provide an "antidote"
|
| To train a model on the data.
| krapp wrote:
| Get permission to use the data.
| MrNeon wrote:
| Got all the permission I need when it was put on a
| publicly accessible server.
| wizzwizz4 wrote:
| That's not really how consent works.
|
| I hope this is a special exception you've made, rather
| than your general approach towards interacting with your
| fellows.
| MrNeon wrote:
| That is how consent works.
| CatWChainsaw wrote:
| If my body is in a public space you have the right to
| force me to have sex with you, I guess? That's your logic
| after all.
| MrNeon wrote:
| Data you put up on the internet is not your body.
|
| Do I really have to explain this? You know I don't. Do
| better.
| xigoi wrote:
| That's not how copyright works.
| MrNeon wrote:
| Tell me where it says training a model is infringing on
| copyright.
| xigoi wrote:
| How is creating a derivative of someone's work and
| selling it not copyright infringement?
| MrNeon wrote:
| Who said anything about creating a derivative? Surely you
| don't mean to say that any image created with a model
| trained on copyrighted data counts as a derivative of it.
| Edit: Or worse, that the model itself is derivative,
| something so different from an image must count as
| transformative work!.
|
| Also who said anything about selling?
| xigoi wrote:
| The model itself is a derivative. And it's not really
| that transformative, it's basically the input data
| compressed with highly lossy compression.
|
| > Also who said anything about selling?
|
| All the corporations that are offering AI as a paid
| service?
| MrNeon wrote:
| > it's basically the input data compressed with highly
| lossy compression.
|
| Okay, extract the images from a Stable Diffusion
| checkpoint then. I'll wait.
|
| It's not like lossy compression CAN'T be fair use or
| transformative. I'm sure you can imagine how that is
| possible given the many ways an image can be processed.
|
| > All the corporations that are offering AI as a paid
| service?
|
| Am I them?
| klyrs wrote:
| > why are you seeking to provide an "antidote"
|
| I think it's worthwhile for such discussion to happen in the
| open. If the tool can be defeated through simple means, it's
| better for everybody to know that, right?
| wizzwizz4 wrote:
| It would be better for _artists_ to know that. But Hacker
| News is not a forum of visual artists: it 's a forum of
| hackers, salaried programmers, and venture capitalists.
| Telling the bad guys about vulnerabilities isn't
| responsible disclosure.
|
| Causing car crashes isn't hard (https://xkcd.com/1958/).
| That doesn't mean Car Crash(tm) International(r)'s
| decision-makers know how to do it: they probably don't even
| know what considerations go into traffic engineering, or
| how anyone can just buy road paint from that shop over
| there.
|
| It's everybody's responsibility to keep Car Crash(tm)
| International(r) from existing; but failing that, it's
| everybody's responsibility to not tell them how to cause
| car crashes.
| MrNeon wrote:
| The tears of artists and copyright evangelists is so
| sweet.
| ukuina wrote:
| I apologize. I was trying to respond to inflammatory language
| ("poison") with similarly hyperbolic terms, and I should know
| better than to do that.
|
| Let me rephrase: Would AI-powered upscaling/downscaling (not
| a simple deterministic mathematical scaling) not defeat this
| at a conceptual level?
| jdiff wrote:
| No, it's resistant to transformation. Rotation, cropping,
| scaling, the image remains poisonous. The only antidote known
| currently is active artist cooperation.
| CaptainFever wrote:
| Or Img2Img.
| xg15 wrote:
| I wonder how this tool works if it's actually model independent.
| My understanding so far was that in principle each possible model
| has _some_ set of pathological inputs for which the
| classification will be different than what a user sees - but that
| this set is basically different for each model. So did they
| actually manage to build an "universal" poison? If yes, how?
| peter_d_sherman wrote:
| To protect an individual's image property rights from image
| generating AI's -- wouldn't it be simpler for the IETF (or other
| standards-producing group) to simply create an
|
| _AI image exclusion standard_
|
| , similar to _" robots.txt"_ -- which would tell an AI data-
| gathering web crawler that a given image or set of images -- was
| off-limits for use as data?
|
| https://en.wikipedia.org/wiki/Robots.txt
|
| https://www.ietf.org/
| potatolicious wrote:
| Entities training models have no incentive to follow such
| metadata. If we accept the premise that "more input -> better
| models" then there's every reason to ignore non-legally-binding
| metadata requests.
|
| Robots.txt survived because the use of it to gatekeep valuable
| goodies was never widespread. Most sites _want_ to be indexed,
| most URLs excluded by the robots file are not of interest to
| the search engine anyway, and use of robots to prevent crawling
| actually interesting pages is marginal.
|
| If there was ever genuine uptake in using robots to gatekeep
| the _really good stuff_ search engines would 've stopped
| respecting it pretty much immediately - it isn't legally
| binding after all.
| peter_d_sherman wrote:
| >Entities training models have no incentive to follow such
| metadata. If we accept the premise that "more input -> better
| models" then there's every reason to ignore non-legally-
| binding metadata requests.
|
| Name two entities that were asked to stop using a given
| individuals' images that failed to stop using them after the
| stop request was issued.
|
| >Robots.txt survived because the use of it to gatekeep
| valuable goodies was never widespread. Most sites want to be
| indexed, most URLs excluded by the robots file are not of
| interest to the search engine anyway, and use of robots to
| prevent crawling actually interesting pages is marginal.
|
| Robots.txt survived because it was a "digital signpost" a
| "digital sign" -- sort of like the way you might put a
| "Private Property -- No Trespassing" sign in your yard.
|
| Most moral/ethical/lawful people -- will obey that sign.
|
| Some might not.
|
| But the some that might not -- probably constitute about a
| 0.000001% minority of the population, whereas the majority
| that do -- probably constitute about 99.99999% of the
| population.
|
| "Robots.txt" is a sign -- much like a road sign is.
|
| People can obey them -- or they can ignore them -- but they
| can ignore them only at their own peril!
|
| It's a sign which provides a hint for what the right thing to
| do in a certain set of circumstances -- which is what the
| _Law_ is; which is what the majority of _Laws_ are.
|
| People can obey them -- or they can choose to ignore them --
| but _only at their own peril!_
|
| Most will choose to obey them. Most will choose to "take the
| hint", proverbially speaking!
|
| A few might not -- but that doesn't mean the majority won't!
|
| >If there was ever genuine uptake in using robots to gatekeep
| the really good stuff search engines would've stopped
| respecting it pretty much immediately - it isn't legally
| binding after all.
|
| Again, _name two entities that were asked to stop using a
| given individuals ' images that failed to stop using them
| after the stop request was issued._
| xg15 wrote:
| And then what? The scrapers themselves already happily ignore
| copyright, they won't be inclined to obey a no-ai.txt. So
| someone would have to enforce the standard. Currently I see no
| organisation who would be willing to do this or even just
| technologically able - as even just detecting such scrapers is
| an extremely hard task.
|
| Nevertheless, I hope that at some not-so-far point in the
| future there will be more legal guidance about this kind of
| stuff, i.e. it will be made clear that scraping violates
| copyright. This still won't solve the problem of detectability
| but it would at least increase the risk of scrapers, _should_
| they be caught.
| peter_d_sherman wrote:
| >The scrapers themselves already happily ignore copyright,
| they won't be inclined to obey a no-ai.txt.
|
| Name two entities that were asked to stop using a given
| individuals' images that failed to stop using them after the
| stop request was issued.
|
| >Currently I see no organisation who would be willing to do
| this or even just technologically able - as even just
| detecting such scrapers is an extremely hard task.
|
| // Part of Image Web Scraper For AI Image Generator ingestion
| psuedocode:
|
| if fileExists("no-ai.txt") { // Abort image
| scraping for this site -- move on to the next site
|
| } else { // Continue image scraping for this
| site
|
| };
|
| See? Nice and simple!
|
| Also -- let me ask you this -- what happens to the
| intellectual property (or just plain property) rights of
| Images on the web _after_ the author dies? Or say, 50 years
| (or whatever the legal copyright timeout is) after the author
| dies?
|
| Legal grey area perhaps?
|
| Also -- what about Images that exist in other legal
| jurisdictions -- i.e., other countries?
|
| How do we know what set of laws are to apply to a given
| image?
|
| ?
|
| Point is: If you're going to endorse and/or construct a legal
| framework (and have it be binding -- keep in mind you're
| going to have to traverse the legal jurisdictions of many
| countries, _many countries_!) -- you might as well consider
| such issues.
|
| Also -- at least in the United States, we have Juries that
| can override any Law (Separation of Powers) -- that is, that
| which is considered "legally binding" -- may not be quite so
| "legally binding" if/when properly explained to a proper jury
| in light of extenuating (or just plain other) circumstances!
|
| So kindly think of these issues prior to making all-
| encompasing proposals as to what you think should be "legally
| binding" or not.
|
| I comprehend that you are just trying to solve a problem; I
| comprehend and empathize; but the problem might be a bit
| greater than you think, and there might be one if not
| serveral unexplored partial/better (since no one solution,
| legal or otherwise, will be all-encompassing) solutions --
| because the problem is so large in scope -- but all of these
| issues must be considered in parallel -- or errors, present
| or future will occur...
| xg15 wrote:
| > _Part of Image Web Scraper For AI Image Generator
| ingestion psuedocode:..._
|
| Yes, and who is supposed to run that code?
|
| > _Name two entities that were asked to stop using a given
| individuals ' images that failed to stop using them after
| the stop request was issued._
|
| Github? OpenAI?[1] Stable Diffusion?[2] LAION?[3] What do
| you think why there are currently multiple high-profile
| lawsuits ongoing about exactly that topic?
|
| Besides, that's not how things work. Training a foundation
| model takes months and currently costs a fortune in
| hardware and power - and once the model is trained, there
| is, as of now, no way to remove individual images from the
| model without restraining. So in practical terms it's
| impossible to remove an image if it has already been
| trained on.
|
| So the better question would be, name two entities who have
| ignored an artist's request to not include their image when
| they encountered it the first time. It's still a trick
| question though because the point is that scraping happens
| in private - we can't know which images were scraped
| without access to the training data. The one indication
| that it was probably scraped is if a model manages to
| reproduce it verbatim - which is the basis for some of the
| above lawsuits.
|
| [1] https://www.theverge.com/2022/11/8/23446821/microsoft-
| openai...
|
| [2] https://www.theverge.com/2023/2/6/23587393/ai-art-
| copyright-...
|
| [3] https://www.heise.de/hintergrund/Stock-photographer-
| sues-AI-...
| GaggiX wrote:
| These methods like Glaze usually works by taking the original
| image chaging the style or content and then apply LPIPS loss on
| an image encoder, the hope is that if they can deceive a CLIP
| image encoder it would confuse also other models with different
| architecture, size and dataset, while changing the original image
| as little as possible so it's not too noticeable to a human eye.
| To be honest I don't think it's a very robust technique, with
| this one they claim that a model instead of seeing for example a
| cow on grass the model will see a handbag, if someone has access
| to GPT4-V I want to see if it's able to deceive actually big
| image encoders (usually more aligned to the human vision).
|
| EDIT: I have seen a few examples with GPT-4 V and how I imagine
| it wasn't deceived, I doubt this technique can have any impact on
| the quality of the models, the only impact that this could
| potentially have honestly is to make the training more robust.
| garg wrote:
| Each time there is an update to training algorithms and in
| response poisoning algorithms, artists will have to re-glaze, re-
| mist, and re-nightshade all their images?
|
| Eventually I assume the poisoning artifacts introduced in the
| images will be very visible to humans as well.
| brucethemoose2 wrote:
| What the article doesn't illustrate is that it destroys fine
| detail in the image, even in the thumbnails of the reference
| paper: https://arxiv.org/pdf/2310.13828.pdf
|
| Also... Maybe I am naive, but it seems rather trivial to work
| around with a quick prefilter? I don't know if tradition
| denoising would be enough, but worst case you could run img2img
| diffusion.
|
| reply
| GaryNumanVevo wrote:
| The poisoned images aren't intended to be viewed, rather
| scraped and pass a basic human screen. You wouldn't be able to
| denoise as you'd have to denoise the entire dataset, the entire
| point is that these are virtually undetectable from typical
| training set examples, but they can push prompt frequencies
| around at will with a small number of poisoned examples.
| minimaxir wrote:
| > You wouldn't be able to denoise as you'd have to denoise
| the entire dataset
|
| Doing that requires much less compute than training a large
| generative image model.
| GaryNumanVevo wrote:
| > the entire point is that these are virtually undetectable
| from typical training set examples
|
| I'll repeat this point for clarity. After going over the
| paper again, denoising shouldn't affect this attack, it's
| the ability of plausible images to not be detected by human
| or AI discriminators (yet)
| brucethemoose2 wrote:
| I guess the idea is that the model trainers are ignorant of
| this and wouldn't know to preprocess/wouldn't bother?
|
| That's actually quite plausible.
| BugsJustFindMe wrote:
| > _I guess the idea is that the model trainers are
| ignorant of this_
|
| Maybe they're ignorant of it right up until you announce
| it, but then they're no longer ignorant of it.
| brucethemoose2 wrote:
| Right, but they aren't necessarily paying attention to
| this.
|
| I am not trying to belittle foundational model trainers,
| but _a lot_ goes on in ML land. Even groups can 't track
| every development.
| enord wrote:
| I'm completely flabbergasted by the number of comments implying
| copyright concepts such as "fair use" or "derivative work" apply
| to trained ML models. Copyright is for _people_, as are the
| entailing rights, responsibilities and exemptions. This has gone
| far beyond anthropomorphising and we need to like get it
| together, man!
| ronsor wrote:
| You act like computers and ML models aren't just tools used by
| people.
| enord wrote:
| What did I write to give you that impression?
| Ukv wrote:
| My initial interpretation was that you're saying fair use
| is irrelevant to the situation because machine learning
| models aren't themselves legal persons. But, fair use
| doesn't solely apply to manual creation - use of
| traditional algorithms (e.g: the snippets, caching, and
| thumbnailing done by search engines) is still covered by
| fair use. To my understanding, that's why ronsor pointed
| out that ML models are tools used by people (and those
| people can give a fair use defense).
|
| Possibly you instead meant that fair use is relevant, but
| people are wording remarks in a way that suggests the model
| itself is giving a fair use defence to copyright
| infringement, rather than the persons training or using it?
| enord wrote:
| Well then I could have been much clearer because I meant
| something like the latter.
|
| An ML model can neither have nor be in breach of
| copyright so any discussion about how it works, and how
| that relates to how people work or "learn" is besides the
| point.
|
| What actually matters is firstly details about collation
| of source material, and later the particular legal
| details surrounding attribution. The last part involves
| breaking new ground legally speaking and IANAL so I will
| reserve judgement. The first part, collation of source
| material for training is emphatically _not_ unexplored
| legal or moral territory. People are acting like none of
| the established processes apply in the case of LLMs and
| handwave about "learning" to defend it.
| Ukv wrote:
| > and how that relates to how people work or "learn" is
| besides the point
|
| It is important (for the training and generation stages)
| to distinguish between whether the model copies the
| original works or merely infers information from them -
| as copyright does not protect against the latter.
|
| > The first part, collation of source material for
| training is emphatically not unexplored legal or moral
| territory.
|
| Similar to as in Authors Guild v. Google, Inc. where
| Google internally made entire copies of millions of in-
| copyright books:
|
| > > While Google makes an unauthorized digital copy of
| the entire book, it does not reveal that digital copy to
| the public. The copy is made to enable the search
| functions to reveal limited, important information about
| the books. With respect to the search function, Google
| satisfies the third factor test
|
| Or in the ongoing Thomson Reuters v. Ross Intelligence
| case where the latter used the former's legal headnotes
| for training a language model:
|
| > > verbatim intermediate copying has consistently been
| upheld as fair use if the copy is "not reveal[ed] to the
| public."
|
| That it's an internal transient copy is not inherently a
| free pass, but it is something the courts take into
| consideration, as mentioned more explicitly in Sega v.
| Accolade:
|
| > > Accolade, a commercial competitor of Sega, engaged in
| wholesale copying of Sega's copyrighted code as a
| preliminary step in the development of a competing
| product [yet] where the ultimate (as opposed to direct)
| use is as limited as it was here, the factor is of very
| little weight
|
| And, given training a machine learning model is a
| considerably different purpose than what the images were
| originally intended for, it's likely to be considered
| transformative; as in Campbell v. Acuff-Rose Music:
|
| > > The more transformative the new work, the less will
| be the significance of other factors
| enord wrote:
| Listen, most website and book-authors want to be indexed
| by google. It brings potential audience their way, so
| most don't make use of their _right_ to be de-listed. For
| these models, there is no plausible benefit to the
| original creators, and so one has to argue they have _no_
| such right to be "de-listed" in order to get any training
| data currently under copyright.
| Ukv wrote:
| > It brings potential audience their way, so most don't
| make use of their _right_ to be de-listed.
|
| The Authors Guild lawsuit against Google Books ended in a
| 2015 ruling that Google Books is fair use and as such
| they _don 't_ have a right to be de-listed. It's not the
| case that they have a right to be de-listed but choose
| not to make use of it.
|
| The same would apply if collation of data for machine
| learning datasets is found to be fair use.
|
| > one has to argue they have _no_ such right to be "de-
| listed" in order to get any training data currently under
| copyright.
|
| Datasets I'm aware of already have respected machine-
| readable opt-outs, so if that were to be legally enforced
| (as it is by the EU's DSM Directive for commercial data
| mining) I don't think it'd be the end of the world.
|
| There's a lot of power in a default; the set of
| "everything minus opted-out content" will be
| significantly bigger than "nothing plus opted-in content"
| even with the same opinions.
| enord wrote:
| With the caveat that I was exactly wrong about the books
| de-listing, I feel you are making my point for me and
| retreating to a more pragmatic position about defaults.
|
| The (quite entertaining) saga of Nightshade tells a story
| about what is going to be content creators "default
| position" going forward and everyone else will follow.
| You would be a fool not to, the AI companies are trying
| to end run you, using your own content, and make a profit
| without compensating you and leave you with no recourse.
| Ukv wrote:
| > I feel you are making my point for me and retreating to
| a more pragmatic position about defaults
|
| I'm unclear on what stance I've supposedly retreated
| from. My position is that an opt-out is not necessary
| under current US law, but that it wouldn't be the worst-
| case outcome if new regulation were introduced to mandate
| it.
|
| > The (quite entertaining) saga of Nightshade tells a
| story about what is going to be content creators "default
| position" going forward and everyone else will follow
|
| By "default" I refer not to the most common choice, but
| to the outcome that results from inaction. There's a bias
| towards this default even if the majority of
| rightsholders do opt to use Nightshade (which I think is
| unlikely).
| CaptainFever wrote:
| No one is saying a model is the legal entity. The legal
| entities are still people and corporations.
| enord wrote:
| Oh come on, you're being insincere. Wether or not the model
| is learning from the work just like people is hotly debated
| _as if it would make a difference_. Fair use is even brought
| up. Fair use! Even if it applied, these training sets collate
| _all of everything_
|
| I feel like I'm taking crazy pills TBQH
| tigrezno wrote:
| Do not fight the AI, it's a lost cause, embrace it.
| gweinberg wrote:
| For this to work, wouldn't you have to have an enormous number of
| artists collaborating on "poisoning" their images the same way
| (cow to handbag) while somehow keeping it secret form ai trainers
| that they were doing this? It seems to me that even if the
| technology works perfectly as intended, you're effectively just
| mislabeling a tiny fraction of the training data.
| ang_cire wrote:
| Setting aside the efficacy of this tool, I would be very
| interested in the legal implications of putting designs in your
| art that could corrupt ML models.
|
| For instance, if I set traps in my home which hurt an intruder we
| are both guilty of crimes (traps are illegal and are never
| considered self defense, B&E is illegal).
|
| Would I be responsible for corrupting the AI operator's data if I
| intentionally include adversarial artifacts to corrupt models, or
| is that just DRM to legally protect my art from infringement?
|
| edit:
|
| I replied to someone else, but this is probably good context:
|
| DRM is legally allowed to disable or even corrupt the software or
| media that it is protecting, if it detects misuse.
|
| If an adversarial-AI tool attacks the model, it then becomes a
| question of whether the model, having now incorporated my
| protected art, is now "mine" to disable/corrupt, or whether it is
| in fact out of bounds of DRM.
|
| So for instance, a court could say that the adversarial-AI
| methods could only actively prevent the training software from
| incorporating the protected media into a model, but could not
| corrupt the model itself.
| anigbrowl wrote:
| None whatsoever. There is no right to good data for model
| training, nor does any contractual relationship exist between
| you and and a model builder who scrapes your website.
| ang_cire wrote:
| If you're assuming this is open-shut, you're wrong. I asked
| this specifically as someone who works in security. A court
| is going to have to decide where the line is between DRM and
| malware in adversarial-AI tools.
| ufocia wrote:
| Worth trying but I doubt it unless we establish a right to
| train.
| anigbrowl wrote:
| I'm not. Malware is one thin, passive data poisoning is
| another. Mapmakers have long used such devices to
| detect/deter unwanted copying. In the US such 'trap
| streets' are not protected by copyright, but nor do they
| generate liability.
|
| https://en.wikipedia.org/wiki/Trap_street
| ang_cire wrote:
| A trap street doesn't damage other data. Not even
| remotely useful as an analogy. That's to allow detection
| of copies, not to corrupt the copies from being useable.
| kortilla wrote:
| That's like asking if lying on a forum is illegal
| ang_cire wrote:
| No, it's much closer to (in fact, it is simply) asking if
| adversarial AI tools count as DRM or as malware. And a court
| is going to have to decide whether the model and or its
| output counts as separate software, which it is illegal for
| DRM to intentionally attack.
|
| DRM can, for instance, disable its own parent tool (e.g. a
| video game) if it detects misuse, but it can't attack the
| host computer or other software on that computer.
|
| So is the model or its output, having been trained on my art,
| a byproduct of my art, in which case I have a legal right to
| 'disable' it, or is it separate software that I don't have a
| right to corrupt?
| danShumway wrote:
| > asking if adversarial AI tools count as DRM or as malware
|
| Neither. Nightshade is not DRM or malware, it's "lying"
| about the contents of an image.
|
| Arguably, Nightshade does not corrupt or disable the model
| at all. It feeds it bad data that leads the model to
| generate incorrect conclusions or patterns about how to
| generate images. This is assuming it works, which we'll
| have to wait and see, I'm not taking that as a given.
|
| But the only "corruption" happening here is that the model
| is being fed data that it "trusts" without verifying that
| what the data is "telling" it is correct. It's not
| disabling the model or crashing it, the model is forming
| incorrect conclusions and patterns about how to generate
| the image. If Google translate asked you to rate its
| performance on a task, and you gave it an incorrect rating
| from what you actually thought its performance was, is that
| DRM? Malware? Have you disabled Google translate by giving
| it bad feedback?
|
| I don't think the framing of this as either DRM or malware
| is correct. This is bad training data. Assuming it works,
| it works because it's bad training data -- that's why
| ingesting one or two images doesn't affect models but
| ingesting a lot of images does, because training a model on
| bad data leads the model to perform worse if and only if
| there is enough of that bad data. And so what we're really
| talking about here is not a question of DRM or malware,
| it's a question of whether or not artists have a legal
| obligation to make their data useful for training -- and of
| course they don't. The implications of saying that they did
| would be enormous, it would imply that any time you
| knowingly lied about a question that was being fed into an
| AI training set that doing so was illegal.
| GaryNumanVevo wrote:
| How would that situation be remotely related?
| CaptainFever wrote:
| Japan is considering it, I think?
| https://news.ycombinator.com/item?id=38615280
| npteljes wrote:
| I see it as no different than mapmakers inventing a nonexistent
| alley, to check who copies their maps verbatim ("trap street").
| Even if this caused, for example, a car crash because of an
| autonomous driver, the onus I think would be on the one that
| made the car and used the stolen map for navigation, and not on
| the one that created the original map.
|
| https://en.wikipedia.org/wiki/Trap_street
| danShumway wrote:
| The way Nightshade works (assuming it does work) is by
| confusing the features of different tags with each other. To
| argue that this is illegal would be to argue that mistagging a
| piece of artwork on a gallery is illegal.
|
| If you upload a picture of a dog to DeviantArt and you label it
| as a cat, and a model ingests that image and starts to think
| that cats look like dogs, would anybody claim that you are
| breaking a law? If you upload bad code to Github that has bugs,
| and an AI model consumes that code and then reproduces the
| bugs, would anyone argue that uploading badly written code to
| Github is a crime?
|
| What if you uploaded some bad code to Github and then wrote a
| comment at the top of the code explaining what the error was,
| because you knew that the model would ignore that comment and
| would still look at the bad code. Then would you be committing
| a crime by putting that code on Github?
|
| Even if it could be proven that your _intention_ was for that
| code or that mistagged image to be unhelpful to training, it
| would still be a huge leap to say that either of those
| activities were criminal -- I would hope that the majority of
| HN would see that as a dangerous legal road to travel down.
| etchalon wrote:
| My hope is these type of "poisoning tools" become ubiquitous for
| all content types on the web, forcing AI companies to, you know,
| license things.
| mjfl wrote:
| Another way would be, for every 1 piece of art you make, post 10
| AI generated arts, so that the SNR is really bad.
| Duanemclemore wrote:
| For visual artists who don't want visible artifacting in the art
| they feature online, would it be possible to upload these
| alongside your un-poisoned art, but have them only hanging out in
| the background? So say having one proper copy and a hundred
| poisoned copies in the same server, but only showing the un-
| poisoned one?
|
| Might this "flood the zone" approach also have -some- efficacy
| against human copycats?
| marcinzm wrote:
| This feels like it'll actually help make AI models better versus
| worse once they train on these images. Artists are basically, for
| free, creating training data that conveys what types of noise
| does not change the intended meaning of the image to the artist
| themselves.
| Albert931 wrote:
| Artist are now fully dependent on Software Engineers for
| protecting the future of their career lol
| zirgs wrote:
| Does it survive AI upscaling or img2img? If not - then it's
| useless. Nobody trains AI models without any preprocessing. This
| is basically a tool for 2022.
| r3trohack3r wrote:
| The number of people who are going to be able to produce high
| fidelity art with off the shelf tools in the near future is
| unbelievable.
|
| It's pretty exciting.
|
| Being able to find a mix of styles you like and apply them to new
| subjects to make your own unique, personalized, artwork sounds
| like a wickedly cool power to give to billions of people.
| __loam wrote:
| And we only had to alienate millions of people from their labor
| to do it.
| DennisAleynikov wrote:
| Yeah, sadly those millions of people don't matter in the
| grand scheme of things and were never going to profit off
| their work long term
| r3trohack3r wrote:
| What a bummer of a thing to say.
|
| Those millions/billions of people matter a great deal.
| DennisAleynikov wrote:
| They matter but not under the current system. Artists are
| a rarely paid profession, and there are professional
| artists out there but there's now a huge amount of people
| that will never contact an artist for work that used to
| only be human powered. It's not personal for me. I
| understand that desire to resist the inevitable but it's
| here now.
|
| For what it's worth I never use midjourney or dalle or
| any of the commercial closed systems that steal from
| artists but I know I can't stop the masses from going
| there and inputting "give me pretty picture in style x"
| __loam wrote:
| Resistance is important imo. If this happens and we, who
| work in this industry, say nothing, what good are we.
| It's only inevitable if it's socially acceptable.
| mensetmanusman wrote:
| Is this utilitarianism?
| r3trohack3r wrote:
| Absolutely agree we should allow people to accumulate equity
| through effective allocation of their labor.
|
| And I also agree that we shouldn't build systems that
| alienate people from that accumulated equity.
| BeFlatXIII wrote:
| Worth it.
| 23B1 wrote:
| It'll be about as wickedly tool as the ability to get on the
| internet, e.g. commoditized, transactional, and boring.
| sebzim4500 wrote:
| I know this is an unpopular thing to say these days, but I
| still think the internet is amazing.
|
| I have more access to information now than the most powerful
| people in the world did 40 years ago. I can learn about
| quantum field theory, about which pop star is allegedly
| fucking which other pop star, etc.
|
| If I don't care about the law I can read any of 25 million
| books or 100 million scientific papers all available on
| Anna's Archive for free in seconds.
| r3trohack3r wrote:
| As Jeff Bezos recently said on the Lex podcast: one of the
| greatest compliments you can give an inventor is that
| they're invention will be taken for granted by future
| generations.
|
| "It won't be any more wickedly cool than the internet" -
| saying something won't be any more wickedly cool than the
| most profound and impactful pieces of infrastructure human
| civilization has erected is a pretty high compliment.
| kredd wrote:
| In terms of art, population tends to put value not on the
| result, but origin and process. People will just look down on
| any art that's AI generated in a couple of years when it
| becomes ubiquitous.
| MacsHeadroom wrote:
| Nope, but I already look down on artists who refuse to
| integrate generative AI into their processes.
| mplewis wrote:
| Can you share some of the art you've made with generative
| AI?
| jurynulifcation wrote:
| Cool, who are you?
| MisterBastahrd wrote:
| People who use generative AI in their processes are not
| artists.
| password54321 wrote:
| This is true. They are just taking a sample from a
| generated latent space, just like taking a photo of
| something doesn't make you an artist.
| blacklion wrote:
| So, there is no artists in, for example, street
| photography? Picture must be altered to become art, or
| staged?
|
| Was it irony? :)
| password54321 wrote:
| They are photographers. Here is the definition of an
| artist so you can have better clarity on what an artist
| is:
|
| "A person who creates art (such as painting, sculpture,
| music, or writing) using conscious skill and creative
| imagination"
| aqfamnzc wrote:
| I took gp as satire. But maybe not haha.
| blacklion wrote:
| And people who use Photoshop are?
|
| There is somewhat famous digital artist from Russia -
| Alexey Andreev. Google it, he has very distinctive style
| of realistic technique and surrealistic situations, like
| landing big manta ray on the deck of aircraft carrier. Or
| you can see his old works in his 5-years-not-updates LJ
| [1].
|
| Now he uses generative AI as one of his tools. As
| Photoshop, as different (unrealistic!) brushes in
| Photoshop, as other digital tools. His style is still
| 100% recognizable and his works don't become worse or
| more "generic". Is he still artist? I think so.
|
| Where will you draw the line?
|
| [1] - https://alexandreev.livejournal.com/
| davely wrote:
| I use generative AI to rubber duck and help improve my
| code.
|
| Am I no longer a software engineer?
| smackeyacky wrote:
| I don't think this is quite right. I think paraphrasing
| The Incredibles has a better take:
|
| _When everybody is an artist, then nobody will be one._
| redwall_hp wrote:
| This is already the case. Art is a process, a form of human
| expression, not an end result.
|
| I'm sure OpenAI's models can shit out an approximation of a
| new Terry Pratchett or Douglas Adams novel, but nobody with
| any level of literary appreciation would give a damn unless
| fraud was committed to trick readers into buying it. It's not
| the author's work, and there's no human message behind it.
| Aerroon wrote:
| Novels aren't about a message. They're entertainment. If
| the novel is entertaining then it's irrelevant whether
| there is or isn't a message in it. Besides, literature
| enthusiasts will invent a message for a popular story even
| if there never was one.
|
| Also, I'm sure that you can eventually just prompt the
| model with the message you want to put into the story, if
| you can't already do that.
| portaouflop wrote:
| I haven't read anything "shit out" by any LLM that even
| nearly approaches the level of quality by the authors you
| named -- would very much like to see something like that -
| do you have any evidence for your claims?
|
| AFAICT current text generation is something approaching bad
| mimicry at best and downright abysmal in general. I think
| you still need a very skilled author and meaty brain with a
| story to tell to make use of an LLM for storytelling. Sure
| it's a useful tool that will make authors more effective
| but we are far from the point where you tell the LLM "write
| a story set in Pratchetts Discworld" and something
| acceptable or even entertaining will be spit out - if such
| a thing can even be achieved.
| torginus wrote:
| Thing is there are way more _good_ books written, than any
| single person can consume in their lifetimes. An average
| person like me, reading a mixed diet of classics, obscure
| recommendations and what 's popular right now, I still
| don't feel like I'm making a dent in the pile of high
| quality written content.
|
| Given all that, the purpose of LLMs should be to create
| tailor made content to everyone's tastes. However, it seems
| the hardcore guardrails put into GPT4 and Claude prevent it
| from generating anything enjoyable. It seems, even the plot
| of the average Star Wars movie is too spicy for modern LLM
| sensibilities, never mind something like Stephen King.
| petesergeant wrote:
| > population tends to put value not on the result, but origin
| and process
|
| I think population tends to value "looks pretty", and it's
| other artists, connoisseurs, and art critics who value origin
| and process. Exit Through the Gift Shop sums this up nicely
| Theodores wrote:
| https://en.wikipedia.org/wiki/Labor_theory_of_value
|
| According to Marx, value is only created with human labour.
| This is not just a Marxist theory, it is an observation.
|
| There may be lots of over-priced junk that makes you want to
| question this idea. But let's not nit-pick on that.
|
| In two years time people will not see any value in AI art,
| quite correctly because there is not much human labour in
| creating it.
| mesh wrote:
| In two years time, no one will know what was created with
| AI, what was created by humans, or what was created by
| both.
| Gormo wrote:
| > According to Marx, value is only created with human
| labour. This is not just a Marxist theory, it is an
| observation.
|
| And yet it's completely and absolutely wrong. Value is
| created by the subjective utility offered to the consumer,
| irrespective of what inputs created the thing conveying
| that utility.
| jquery wrote:
| Labor theory of value is quite controversial, many
| economists call it tautological or even metaphysical. I
| also don't really see what LTV has to say about AI art, if
| anything, except that the economic value generated by AI
| art should be distributed to everybody and not just
| funneled to a few capitalists at the top. I would agree
| with that. It's true that more jobs get created even as
| jobs are destroyed, but it's also true that just as our
| ancestors fought for a 40 hour work week and a social
| safety net, we should be able to ask for more as computers
| become ever so productive.
| petesergeant wrote:
| > This is not just a Marxist theory, it is an observation.
|
| Yeah? Well, you know, that's just like uh, your opinion,
| man
| Aerroon wrote:
| I disagree. I definitely value modern digital art more than
| most historical art, because it just looks better. If AI art
| looks better (and in some cases it does) then I'll prefer
| that.
| kredd wrote:
| That's totally fine, everyone's definition of art is
| subjective. But general value of an art as a piece will
| just still be zero for AI generated ones, just like any
| IKEA / Amazon print piece. You just pay for the "looks
| pretty", frame and paper.
| Aerroon wrote:
| > _You just pay for the "looks pretty", frame and paper._
|
| But you pay that for any piece of art though? You
| appreciate it because you like what it looks like. The
| utility of it is in how good it looks, it's not how much
| effort was put into it.
|
| If you need a ditch you're not going to value the ditch
| more if the worker dug it by hand instead of using an
| excavator. You value it based on the utility it provides
| you.
| kredd wrote:
| That analogy doesn't work for art, since worker's ditch
| is result based. There are no feelings like "i like this
| ditch", "experience of a ditch" or "i'm curious how this
| ditch was dug".
|
| Again, i'm not saying buying a mass made AI art will be
| wrong. Just personally speaking, it will never evoke any
| feelings other than "looks neat" for me. So its inherent
| "art value" is close to 0 as I can guess its history is
| basically someone put in a prompt and sent it to print
| (which I can do myself on my phone too!). It's the same
| as looking at cool building pics on my phone (0 art
| value) versus actually seeing them in person (non-0),
| mostly because the feelings I get from it. That being
| said, if it makes others happy, it's not my place to
| judge.
| falcolas wrote:
| > Being able to find a mix of styles you like and apply them to
| new subjects to make your own unique, personalized, artwork
| sounds like a wickedly cool power to give to billions of
| people.
|
| And in the process, they will obviate the need for Nightshade
| and similar tools.
|
| AI models ingesting AI generated content does the work of
| destroying the models all by itself. Have a look at "Model
| Collapse" in relation to generative AI.
| password54321 wrote:
| Not really. There is a reason why we find realistic painting to
| be more fascinating than a photo and why some still practice
| it. The effort put in by another artist does affect our
| enjoyment.
| wruza wrote:
| For me it doesn't. I'm generating images, realistic, 2.5d, 2d
| and I like them as much. I don't feel (or miss) what you
| described. Or what any other arts guy describes, for that
| matter. Arts people are different, because they were trained
| to feel something a normal person wouldn't. And that's okay,
| a normal person without training wouldn't see how much beauty
| and effort there is in an algorithm or a legal contract as
| well.
| dartharva wrote:
| The word "we" is doing a lot of heavy lifting here. A large
| majority of consumers can't even tell apart AI-generated from
| handmade, let alone care who or what made the thing.
| password54321 wrote:
| Yeah, that's just information you made up on the spot.
| efitz wrote:
| This is the DRM problem again.
|
| However much we might wish that it was not true, ideas are not
| rivalrous. If you share an idea with another person, they now
| have that idea too.
|
| If you share words on paper, then someone with eyes and a brain
| might memorize them (or much more likely, just grasp and retain
| the ideas conveyed in the words).
|
| If you let someone hear your music, then the ideas (phrasing,
| style, melody, etc) in that music are transferred.
|
| If you let people see a visual work, then the stylistic and
| content elements of that work are potentially absorbed by the
| audience.
|
| We have copyright to protect specific embodiments, but mostly if
| you try to share ideas with others without letting them use the
| ideas you shared, then you are in for a life of frustration and
| escalating arms race.
|
| I completely sympathize with anyone who had a great idea and
| spent a lot of effort to realize it. If I invented/created
| something awesome I would be hurt and angry if someone "copied"
| it. But the hard cold reality is that you cannot "own" an idea.
| freeAgent wrote:
| This doesn't stop anyone from viewing or scraping the work,
| though, so in no way is it DRM. It just causes certain methods
| of computer interpretation of an image to interpret it in an
| odd way vs. human viewers. They can still learn from them.
| avhon1 wrote:
| It absolutely is DRM, just a different form than media
| encryption. It's a purely-digital mechanism of enforcing
| rights.
| freeAgent wrote:
| It doesn't enforce any rights. It modifies the actual
| image. Humans and computers still have equal, open access
| to it.
| efitz wrote:
| It's designed to restrict the purposes for which the
| consumer can use the work. It is exactly like DRM in this
| way.
| freeAgent wrote:
| How does it stop you from using an image however you
| want?
| freeAgent wrote:
| To be clear, you can still train AI with these images.
| Nothing is stopping you.
| johnnyanmac wrote:
| To quote the source:
|
| >Nightshade's goal is not to break models, but to
| increase the cost of training on unlicensed data, such
| that licensing images from their creators becomes a
| viable alternative.
|
| Which feels similar to DRM. To discourage extraction of
| assets.
| freeAgent wrote:
| It also degrades the quality of the image for human
| consumers. It's just a matter of what someone is willing
| to publish to "the public."
| johnnyanmac wrote:
| Sure. Just like how video game drm impacts performance
| and watermarks on images degrades the image. Drm walks a
| tight line that inevitably makes the result worse than a
| drum-free solution, but also should not make the item
| completely unconsumable.
| freeAgent wrote:
| Video game DRM completely prevents people without a
| license/key to unlock it from accessing the game at all.
| johnnyanmac wrote:
| So, do you want to define drm by intent or technical
| implementation? I'm doing the former, but it sounds like
| you want to do the latter. Also keep in mind that
| legalese doesn't necessarily care about the exact
| encryption technique to be deployed either.
| freeAgent wrote:
| Both. Changing an image is done all the time prior to
| publishing them. In fact, no image you ever see on the
| internet is a raw sensor output. They are all modified in
| some manner. The images processed using this method look
| the same to every person and computer that views them.
| That's very different from DRM which encrypts things and
| prevents access to unprivileged users.
|
| This is effectively the equivalent of someone doing
| really crappy image processing. As other commenters have
| mentioned, it does alter how images look to humans as
| well as machines, and it can be "mitigated" through
| additional processing techniques.
| johnnyanmac wrote:
| >That's very different from DRM which encrypts things and
| prevents access to unprivileged users.
|
| Well you can call it a captcha if you want. The point
| here is to make it harder to access for bots (but not
| impossible) while inconveniencing honest actors in the
| process. It doesn't sound like there's a straightforward
| answer to "are captchas DRM" either.
| renewiltord wrote:
| That's true of almost all DRM, isn't it? Even for the
| most annoying form that is always-online DRM, everyone is
| provided the same access to the bytes that form a game.
| You and I have the same bytes of game.
|
| It's the purpose of some of those bytes that turns it
| into DRM.
| freeAgent wrote:
| No, it's not the same. The game is non-functional without
| the proper keys/authorization whereas images run through
| this algorithm are still images that anyone and any
| computer can view in the same manner without any
| authentication.
| aspenmayer wrote:
| An analogy that springs to mind is the difference between
| an access control mechanism such as a door with lock and
| key versus whatever magical contrivance that prevents
| entry to a dwelling by vampires uninvited.
| freeAgent wrote:
| That may be an ok analogy, but magic isn't real. Maybe a
| better analogy would be to speak or write in a language
| that you know certain people don't natively understand,
| and using lots of slang and idioms that don't translate
| easily. Someone could still run it through Google
| Translate or whatever, but they won't get a great
| understanding of what you actually said. They'd have to
| actually learn the language and the sorts of slang and
| idioms used.
| aspenmayer wrote:
| I agree that the analogy is strained. My goal was in
| elucidating the distinction between:
|
| the goals of artists and the developers of OP,
|
| versus
|
| the goals of AI engineers,
|
| and how it seems to me similar to the is/ought
| disctinction.
|
| In my original analogy, it's generally considered lawful
| to have a lock on your door, or not to do so, and the
| issue of a lock or lack thereof is moot when one is
| invited to enter, just as it is lawful to enter a
| premises when invited or during exigent circumstances,
| such as breaching entry to render lifesaving aid by
| emergency services or firefighters.
|
| By that same token, no amount of locks or other barriers
| to entry will prevent ingress by a vampire once invited
| inside.
|
| To me, much of the ballyhoo about OP seems like much ado
| about big cats eating faces, like a person publicly
| decrying the rise in vampire attacks after inviting that
| same vampire inside for dinner. It's a nonstarter.
|
| Copyright law is broken, because of the way that the law
| is written as much as the way that it's enforced, and
| also broken because of the way that humans are. Ideas are
| not copyrightable, and while historically their
| implementations or representations were, going forward,
| neither implementations nor representations are likely to
| receive meaningful/effective protections from copyright
| itself, but only from legal enforcement of copyright law.
|
| After the recent expiry of Disney's copyright on
| Steamboat Willie, the outpouring of praise, support,
| excitement, and original work from creators shows me that
| copyright law in its current incarnation doesn't perform
| its stated goals of promoting the creation of arts and
| sciences, and so should be changed, and in the meantime
| ignored and actively disobeyed, as any unjust law ought
| to be, regardless of what the law is or does.
|
| In light of our obligation to disobey unjust laws, I
| applaud efforts like OP to advance the state of the art
| in computer science, while at the same time encouraging
| others working on AI to actively circumvent such efforts
| for the selfsame reason.
|
| I similarly encourage artists of all kinds to make art
| for art's sake while monetizing however they see fit,
| without appealing to red herrings like the legality or
| lack thereof of end users appreciating their art and
| incorporating it into their own artistic output, however
| they may choose to do so.
|
| Like all art, code and its outputs is also First
| Amendment protected free speech.
| tsujamin wrote:
| Being able to fairly monetise your creative work and put food
| on the table is a _bit_ rivalrous though, don't you think?
| efitz wrote:
| No, I disagree. There is no principle of the universe or
| across human civilizations that says that you have a right to
| eat because you produced a creative work.
|
| The way societies work is that the members of the society
| contribute and benefit in prescribed ways. Societies with
| lots of excess production may at times choose to allow
| creative works to be monetized. Societies without much
| surplus are extremely unlikely to do so, eg a society with
| not enough food for everyone to eat in the middle of a famine
| is extremely unlikely to feed people who only create art;
| those people will have to contribute in some other way.
|
| I think it is a very modern western idea (less than a century
| old) that _many_ artists can dedicate themselves solely to
| producing the art they want to produce. In all other times
| artists either had day jobs or worked on commission.
| jrflowers wrote:
| > There is no principle of the universe or across human
| civilizations
|
| Can you list the principles across human civilizations?
| juunpp wrote:
| He can also presumably list the principles of the
| universe.
| yjftsjthsd-h wrote:
| That doesn't follow; you can say an item is not in a set
| without writing out every member of that set. What
| principle do you claim exists to contradict that claim?
| jrflowers wrote:
| > you can say an item is not in a set without writing out
| every member of that set
|
| Of course you can. Anyone can say anything.
|
| Is "keeping the list of principles a secret" a principle
| like the rules of Fight Club? It is not unreasonable to
| ask for a link or summary of this immutable set of ground
| truths.
|
| > What principle do you claim exists to contradict that
| claim?
|
| I could not answer this question without being able to
| double check. The only principle that comes to mind is
| the principle of ligma
| johnnyanmac wrote:
| > There is no principle of the universe or across human
| civilizations that says that you have a right to eat
| because you produced a creative work.
|
| What does that have to do with rivalry? This doesn't
| dispute the idea that AI is indeed competing with artists.
| You're just saying artists don't deserve to get paid.
|
| Regardless, some artists will give up but some will simply
| be more careful with where and how they post their art with
| tools like these. AI doesn't have a right to the artist's
| images neither.
| jazzyjackson wrote:
| I used to identify as a copyright abolitionist (I really
| love Nina Paley's TED talk, "copyright is brain damage")
| but the more I look at the history of it I see the
| compromises of interests, copyright is there so art is
| not locked up between artists and their patrons.
| renewiltord wrote:
| The tragedy of "your business model is not my problem" as a
| spreading idea is that while you're right since distribution
| is where the money is (not creation), intellectual property
| is de-facto weakened today and IP piracy is widely considered
| an acceptable thing.
| ikmckenz wrote:
| No, rivalrous has a specific meaning
| https://en.wikipedia.org/wiki/Rivalry_(economics)
| bsza wrote:
| So is sabotaging solutions that would make creative work of
| the same (or superior) quality more affordable. Your ability
| to produce expensive illustrations hinders my ability to
| produce cheap textbooks.
| throwoutway wrote:
| I don't see the parallel between this offensive tool and DRM. I
| could, say buy a perpetual license to an image from the artist,
| so that I can print it and put it on my wall, while it can
| simultaneously be poisonous to an AI system. I can even steal
| it and print it, while it is still poisonous to an AI system.
|
| The closest parallel I can think of is that humans can ingest
| chocolate but dogs should not.
| efitz wrote:
| A huge amount of DRM effort has been spent in the
| watermarking area, which is similar, but not exactly the
| same.
| jdietrich wrote:
| What you've described is the literal, dictionary definition
| of Digital Rights Management - a technology to restrict the
| use of a digital asset beyond the contractually-agreed terms.
| Copying is only one of many uses that the copyright-holder
| may wish to prevent. The regional lockout on a DVD had
| nothing to do with copy-protection, but it was still DRM.
| gwbas1c wrote:
| It's about the arm's race: DRM will always be cracked (with a
| sufficiently motivated customer.) AI poisoning will always be
| cracked (with a sufficiently motivated crawler.)
| xpe wrote:
| Many terms of art from economics are probably not widely-known
| here.
|
| > In economics, a good is said to be rivalrous or a rival if
| its consumption by one consumer prevents simultaneous
| consumption by other consumers, or if consumption by one party
| reduces the ability of another party to consume it. -
| Wikipedia: Rivalry (economics)
|
| Also: we should recognize that stating something as rivalrous
| or not is _descriptive_ (what exists) not _normative_ (what
| should be).
| fastball wrote:
| I think ideas being rivalrous is intrinsic, and therefore
| descriptive and normative.
| xpe wrote:
| I'm either not understanding you or disagreeing. You seem
| to be saying that something _should be_ because it _is_?
| Saying that would be rather silly, as in "Electrons should
| repel each other because they repel each other." Not to
| mention that this claim runs amok of the naturalistic
| fallacy. So what are you driving at?
| xpe wrote:
| > But the hard cold reality is that you cannot "own" an idea.
|
| The above comment is true about the properties of information,
| as explained via the lens of economics. [1]
|
| However, one ignores ownership as defined by various systems
| (including the rule of law and social conventions) at one's own
| peril. Such systems can also present a "hard cold reality" that
| can bankrupt or ostracize you.
|
| [1] Don't let the apparent confidence and technicality of the
| language of economists fool you. Economics isn't the only game
| in town. There are other ways to model and frame the world.
|
| [2] Dangling footnote warning. I think it is instructive to
| recognize that the field of economics has historically shown a
| kind of inferiority complex w.r.t. physics. Some economists
| ascribe to the level of rigor found in physics and that is well
| and good, but perhaps that effort should not be taken too
| seriously nor too far, since economics as a field operates at a
| different level. IMO, it would be wise for more in the field to
| eat a slice of humble pie.
|
| [3] Ibid. It is well-known that economists can be "hired guns"
| used to "prove" a wide variety of things, many of which are
| subjective. My point: you can hire an economist to shore up
| one's political proposals. Is the same true of physicists?
| Hopefully not to the same degree. Perhaps there are some cases
| of hucksterism, but nothing like the history of economists-
| wagging-the-dog! At some point, the electron tunnels or it does
| not.
| meowkit wrote:
| There are other games in town.
|
| But whatever game gives the most predictive power is going to
| win.
| xpe wrote:
| There is no need to frame this as "winning versus losing"
| regarding the many models that we draw upon.
|
| Even when talking about various kinds of scientific and
| engineering fields, predictive power isn't the only
| criteria, much less the best. Sometimes the simpler, less
| accurate models work well enough with less informational
| and computational cost.
|
| Even if we focus on prediction (as opposed to say
| statistical inference), often people want some kind of
| hybrid. Perhaps a blend of satisficing with limited
| information, scoped action spaces, and bounded computation;
| i.e. good enough given the information we have to make the
| decisions we can actuate with some computational budget.
| sfifs wrote:
| By that metric, various economic schools have been
| hilariously inept and would get classified not dissimilar
| to various schools of religious theology with their own
| dogmas. It's only in the last 15 years or so that some
| focus on empiricism and explaining reality rather than
| building theoretical castles in the air is coming about and
| is still far from mainstream.
| xpe wrote:
| > ... you cannot "own" an idea.
|
| Let's talk about ownership in a broader sense. In practice, one
| cannot effectively own (retain possession of) something without
| some combination of physical capability or coercion (or threat
| of coercion). Meaning: maintaining ownership of anything
| (physical or otherwise) often depends on the rule of law.
| thomastjeffery wrote:
| Then let's use a more precise term that is also present in
| law: monopoly.
|
| You can't monopolize an idea.
|
| Copyright law is a prescription, not a description. Copyright
| law _demands_ that everyone play along with the lie that is
| intellectual monopoly. The effectiveness of that demand
| depends on how well it can be enforced.
|
| Playing pretend during the age of the printing press may have
| been easy enough to coordinate, but it's practically
| impossible here in the digital age.
|
| If we were to increase enforcement to the point of
| effectiveness, then what society would be left to
| participate? Surely not a society I am keen to be a part of.
| xpe wrote:
| Trying to make sense of the above comment is difficult.
|
| > Copyright law demands that everyone play along with the
| lie that is intellectual monopoly.
|
| Saying "lie" suggests willful deception. Perhaps you mean
| "socially constructed"? Combined with "playing pretend"
| makes it read a bit like a rant.
|
| > Then let's use a more precise term that is also present
| in law: monopoly.
|
| Ok, in law and economics, the core idea of monopoly has to
| do with dominant market power that crowds out the existence
| of others. But your other uses of "monopoly" don't match
| that. For example, you talk about ideas and "intellectual
| monopoly". What do you mean?
|
| It seems like some of your uses of "monopoly" are not about
| markets but instead are closer to the idea of retaining
| sole ownership.
|
| > If we were to increase enforcement to the point of
| effectiveness, then what society would be left to
| participate? Surely not a society I am keen to be a part
| of.
|
| It appears you've already presupposed how things would play
| out, but I'm not convinced. What is your metric of
| effectiveness? A scale is better than some arbitrary
| threshold.
|
| Have you compared copyright laws and enforcement of the
| U.S. versus others?
|
| How far would you go: would you say that i.e. society would
| be better off without copyright law? By what standard?
| jazzyjackson wrote:
| > Saying "lie" suggests willful deception.
|
| Consider instead the term "legal fiction", it's not so
| derogatory.
| xpe wrote:
| _Legal fiction_ is a technical term used by legal
| scholars. To be clear, any legal system is built from
| legal constructs; I 'm not talking about these. A _legal
| fiction_ has a markedly different meaning than _lie_ as
| used in the comment two levels above (which seems to me
| more like a rant than a clear or convincing argument) Are
| you familiar with specific expert writings about claimed
| legal fictions in U.S. copyright law?
| jazzyjackson wrote:
| thanks for the correction
|
| i listened to one podcast on corporate personhood and
| intuited that intellectual property was similar but i see
| what you mean
| thomastjeffery wrote:
| The best attempt at thought monopoly I can think of is
| religion. Even that is a general failure: no single
| religious narrative has ever stood constant. They have
| all evolved through the participation of their adherents.
| There is no one true Christianity: there are _hundreds_.
|
| I most certainly do mean to call out copyright as
| willful, but it's not a deception, at least not a
| successful one: everyone knows it is false. That's why
| it's enforced by law! Instead of people being deceived,
| people must instead _pretend to be so_. Each of us must
| behave as if the very concept of Micky Mouse is immortal
| and immutable; and if we don 't, the law will punish
| accordingly.
|
| Every film on Netflix, every song on Spotify, etc. _can
| obviously be copied_ any number of times by any number of
| people at any place on Earth. We are all acutely aware of
| this fact, but copyright tells us, "Pretend you can't,
| or get prosecuted."
|
| So is it truly effective? Millions of people are _not_
| playing along. Millions of artists are honestly trying to
| participate in this market, and the market is failing
| them. Is that because we need more people to play along?
| Rightsholders like the MPAA say that piracy is _theft_ ,
| and that every copy that isn't paid for is a direct
| _cost_ to their business. How many of us are truly
| willing to pretend _that far_?
|
| What if we all just stopped? Would art suddenly be
| unprofitable for everyone, including the lucky few who
| turn a profit today? I don't believe that for a second.
|
| The only argument I have ever heard in favor of copyright
| is this: Every artist deserves a living. I have seen time
| and time again real living artists _fail_ to earn a
| living from their copyright. I have seen time and time
| again real living artists share their work _for free_ ,
| choosing to make their living by more stable means.
|
| Every person living deserves a living. Fix that, and we
| will fix the problem copyright pretends to solve, and
| more.
| xpe wrote:
| > I most certainly do mean to call out copyright as
| willful, but it's not a deception, at least not a
| successful one: everyone knows it is false.
|
| Unless I'm misunderstanding you, this is not even wrong.
| What about copyright law is empirically false? Such a
| question is non-sensical.
|
| Your comment redefines the word "false" in a way that
| muddles understanding. You aren't alone -- some
| philosophers do this -- but it tends to confuse rather
| than clarify. I've developed antibodies for language
| abuse of this kind. Such language can even have the
| effect of making language charged and divisive.
|
| Many people understand the value of using the words
| _true_ and _false_ to apply to the assessment of _facts_.
| This is a useful convention. (To be clear, I'm not
| opposed to bending language when it is useful.)
|
| To give a usage example: a misguided law is not _false_.
| Such a statement is non-sensical. We have clear phrases
| for this kind of law, such _poorly designed_ , _having
| unintended consequences_ , etc. We could go further and
| say that a law is i.e. _immoral_ or _pointless_. You are
| likely making those kinds of claims. By using those
| phrases, we can have a high-bandwidth conversation much
| more quickly.
| thomastjeffery wrote:
| The very concept that a thing cannot be copied. That is
| the obvious falsehood that we are compelled to pretend is
| true.
|
| I don't see how any of that is muddled. I'm being as
| direct as I can with my words here. I'm talking about the
| very plain fact that art _can be_ copied freely.
|
| For example, DRM software gives you encrypted content
| _and_ the decryption key. Why bother? Because the end
| user is _expected to pretend_ they are only able to use
| that decryption key _once_. This is patently false, but
| any user who decides not to play along is immediately
| labeled a "pirate". What vessel have they commandeered?
| The metaphorical _right_ to copy. What will be enshrined
| in law next, the rights to hear and to see?
|
| Copyright law _is_ poorly designed. It _does_ have
| unintended consequences. It _is_ immoral and pointless.
| To back these claims, all I must to do is show the
| absurdity that copyright is _on the face of it_.
| xpe wrote:
| [delayed]
| xpe wrote:
| > Every film on Netflix, every song on Spotify, etc. can
| obviously be copied any number of times by any number of
| people at any place on Earth. We are all acutely aware of
| this fact, but copyright tells us, "Pretend you can't, or
| get prosecuted."
|
| I see the dark arts of rhetoric used here, and it is
| shameful. The portion I quoted above is incredibly
| confused. I would almost call it a straw man, but it is
| worse than that.
|
| Copyright law says no such thing. Of course you _could_
| copy something. Copyright law exists precisely because
| you can do that. The law says i.e. "if you break
| copyright law, you will be at risk of a sufficiently
| motivated prosecutor."
| xpe wrote:
| The comment above suffers from much rhetoric to serve as
| a good jumping off point. For those interested, I would
| recommend the following two articles:
|
| ## "Rhetoric and Reality in Copyright Law" by Stewart E.
| Sterk
|
| Benjamin N. Cardozo School of Law. https://repository.law
| .umich.edu/cgi/viewcontent.cgi?article...
|
| > Why give authors an exclusive right to their writings?
| Copyright rhetoric generally offers two answers. The
| first is instrumental: copyright provides an incentive
| for authors to create and disseminate works of social
| value. By giving authors a monopoly over their works,
| copyright corrects for the underincentive to create that
| might result if free riders were permitted to share in
| the value created by an author's efforts. The second
| answer is desert: copyright rewards authors, who simply
| deserve recompense for their contributions whether or not
| recompense would induce them to engage in creative
| activity.
|
| > The rhetoric evokes sympathetic images of the author at
| work. The instrumental justification for copyright paints
| a picture of an author struggling to avoid abandoning his
| calling in order to feed his family. By contrast, the
| desert justification conjures up a genius irrevocably
| committed to his work, resigned - or oblivious - to
| living conditions not commensurate with his social
| contributions. The two images have a common thread:
| extending the scope of copyright protection relieves the
| author's plight.
|
| > Indeed, the same rhetoric* - emphasizing both
| incentives and desert - consistently has been invoked to
| justify two centuries of copyright expansion.
| Unfortunately, however, the rhetoric captures only a
| small slice of contemporary copyright reality. Although
| some copyright protection indeed may be necessary to
| induce creative activity, copyright doctrine now extends
| well beyond the contours of the instrumental
| justification. ...
|
| ## "Copyright Nonconsequentialism" by David McGowan
|
| Missouri Law Review. https://scholarship.law.missouri.edu
| /cgi/viewcontent.cgi?art...
|
| > This Article explores the foundations of copyright law.
| It tries to explain why those who debate copyright often
| seem to talk past each other. I contend the problem is
| that copyright scholars pay too much attention to
| instrumental arguments, which are often indeterminate,
| and too little to the first principles that affect how
| one approaches copyright law.
|
| > Most arguments about copyright law use instrumental
| language to make consequentialist arguments. It is common
| for scholars to contend one or another rule will advance
| or impede innovation, the efficient allocation and
| production of expression, personal autonomy, consumer
| welfare, the "robustness" of public debate, and so on.'
| Most of these instrumental arguments, though not quite
| all of them, reduce to propositions that cannot be tested
| or rejected empirically. Such propositions therefore
| cannot explain existing doctrine or the positions taken
| in debate.
|
| > These positions vary widely. Consumer advocates favor
| broad fair use rights and narrow liability standards for
| contributory infiringement; producer advocates favor the
| reverse.' Most of the arguments for both consumers and
| producers prove too much. It is easy to say that the
| right to exclude is needed to provide incentives for
| authors. It is hard to show that any particular rules
| provide optimal incentives. It is easy to point to
| deviations from the model of perfect competition. It is
| hard to show why these deviations imply particular rules.
|
| > ...
| juunpp wrote:
| You don't copyright ideas, you copyright works. And these
| artists' productions are works, not abstract ideas, with
| copyrights, and they are being violated. This is simple law.
| Why do people have such a hard time with this? Are you the one
| training the models and you need to find a cognitive escape out
| of the illegality and wrong-doing of your activities?
| theragra wrote:
| If it were true, then we wouldn't have that great difference
| in opinions on this topic.
| juunpp wrote:
| That GP is utterly confused about copyright law is not an
| opinion.
| sircastor wrote:
| The United States Supreme Court rulings are supported
| literally by opinions of the justices.
| sebzim4500 wrote:
| That may be the law, although we are probably years of legal
| proceedings away from finding out.
|
| It obviously is not "simple law".
| rlt wrote:
| It's not obvious to me that using a copyrighted image to
| train a model is copyright infringement. It's certainly not
| copyright infringement when used to train a human who may end
| up creating works that are influenced by (but not copies of)
| the original works.
|
| Now, if the original copyrighted work can be extracted or
| reproduced from the model, that's obviously copyright
| infringement.
|
| OpenAI etc should ensure they don't do that.
| Andrex wrote:
| Reproduced to what fidelity? 100%?
|
| If OpenAI's output reproduces a copyrighted image with one
| pixel changed, is that valid in your view? Where does the
| line end?
|
| Copyrighted material should never be used for nonacademic
| language models. "Garbage in, garbage out." All results are
| tainted.
|
| "But being forced to use non-copyrighted works will only
| slow things down!"
|
| Maybe that's a good thing, too. Copyright is something
| every industry has to accept and deal with -- LLMs don't
| get a "cool tech, do whatever" get-out-of-jail free card.
| pawelmurias wrote:
| Copyright is decided by the courts, it's a legal thing
| not some a biological. If the courts decide it's legal it
| will be.
| Andrex wrote:
| I'm totally down for the courts handling this
| AI/copyright mess, but I don't think technologists are
| going to like the results.
|
| By virtue of the fact that it _is_ "fuzzy" and open to
| interpretation, we're going to see lawsuits, the
| resulting chilling effects of those lawsuits will blunt
| US tech firms from the practice of ingesting large
| amounts of copywritten material without a second thought.
| US tech firms will be giving it a second, third, fourth,
| etc. thought once the lawsuits start.
|
| It's gonna be like submarine patents on steroids.
|
| Like I said, I'm down for letting the courts decide. But
| AI supporters should probably avoid kicking the hornets'
| nests regarding copyright.
| rlt wrote:
| > Reproduced to what fidelity? 100%?
|
| Whatever the standard is for humans doing the exact same
| thing.
| thereisnospork wrote:
| >Now, if the original copyrighted work can be extracted or
| reproduced from the model, that's obviously copyright
| infringement.
|
| I think there's an important distinction to be made here -
| "can" be reproduced isn't infringement, only actual
| reproduction is (and degrees thereof not consisting of
| sufficiently transformative or fair use).
|
| Trivially a typewriter can reproduce a copyrighted book.
| Less trivially Google books, with iirc stores the full text
| of copywrited works has been judged to be legal.
| huytersd wrote:
| Nothing is being reproduced. Just the ideas being reused.
| sircastor wrote:
| >This is simple law. Why do people have such a hard time with
| this?
|
| Because this isn't simple law. It feels like simple
| infringement, but there's no actual copying going on. You
| can't open up the database and find a given duplicate of a
| work. Instead you have some abstraction of what it takes to
| get to a given work.
|
| Also it's important to point out that nothing in the law is
| sure. A good lawyer, a sympathetic judge, a
| bored/interested/contrarian juror, etc can render "settled
| law" unsettled in an instant. The law is not a set of board
| game rules.
| flkiwi wrote:
| If the AI were a human and that human made an image that
| copied substantial elements of another human's creative
| work after a careful review of the original creator's work,
| even if it was not an original copy and no archival copy
| was stored somewhere in the second creator's creative
| space, I would be concerned about copyright infringement
| exposure if I were the second (copying) creator.
|
| I'm open to the idea that copyright law might need to
| change, but it doesn't seem controversial to note that
| scraping actual creative works to extract elements for an
| algorithm to generate new works crosses a number of
| worrying lines.
| jazzyjackson wrote:
| Have you seen the examples of midjourney reproducing exact
| frames of Dune, Star Wars etc? With vague prompting not
| asking for the media property specifically. It's pretty
| close to querying a database, except if you're asking for
| something that's not there it's able to render an
| interpolated result on the fly. Ask it for something that
| _is_ there however and the model will dutifully pull it up.
| sircastor wrote:
| Looking for it, I found this [1] which describes almost
| what you're saying. The key difference here is that these
| images _aren 't_ exact frames. They're close, of course,
| but close is not identical.Is there another instance you
| can point to that shows what you've described?
|
| [1] https://spectrum.ieee.org/midjourney-copyright
| jazzyjackson wrote:
| that's a good reference and yes that's what i'm talking
| about, but i mixed up the fact that you can get simpsons
| and star wars without asking for it by name - the "find
| the difference between these two pictures" game is the
| result of asking for a specific movie. I stand by my
| point tho that this is not substantially different than
| querying a database for copywrited material
| juunpp wrote:
| > You can't open up the database and find a given duplicate
| of a work. Instead you have some abstraction of what it
| takes to get to a given work.
|
| So distributing a zip file of a copyrighted work subverts
| the copyright?
| SamPatt wrote:
| Illegality and wrongdoing are completely distinct categories.
|
| I'm not convinced that most copyright infringements are
| immoral regardless of their legal status.
|
| If you post your images for the world to see, and someone
| uses that image, you are not harmed.
|
| The idea that the world owes you something after you
| deliberately shared it with others seems bizarre.
| brookst wrote:
| Imagine if every book or advertisement or public
| conversation you overheard led to future claims that you
| had unethically learned from public information. It's such
| a weird worldview.
|
| (BTW I forbid you from using my comment here in your future
| reasoning)
| fzeroracer wrote:
| > If you post your images for the world to see, and someone
| uses that image, you are not harmed.
|
| Let me define a few cases of 'uses that image' and see
| where your line in the sand drops
|
| * If someone used that image as part of an advertising
| campaign for their product, they are profiting off your
| work. Are you not harmed?
|
| * If someone used that image and pretended they created it.
| Are you not harmed?
|
| * If someone used that image and sold it directly. Are you
| not harmed?
| SamPatt wrote:
| Someone claiming they created something which I created
| is the closest to harm on that list. Fraud should be
| punished.
|
| The others aren't harmful, unless you're defining harm to
| include the loss of something which someone believes they
| are entitled to, a concept which is fraught with
| problems.
|
| Creating an image (or any non-physical creation) doesn't
| obligate the world to compensate you for your work. If
| you choose to give it away by posting it on the internet,
| that's your choice, but you are entitled to nothing.
| johnnyanmac wrote:
| >I'm not convinced that most copyright infringements are
| immoral regardless of their legal status.
|
| You're right and wrong. You're right because most
| infringement is from people who can do minimal damage and
| in fact so more help by giving awareness to your works by
| sharing. But this is only becsuse copyright it working
| (most of the time) against corporate entities who don't
| want to leave any room for legalities to come in.
|
| If copyright ended I'd bet my bottom dollar Disney and all
| the other billionaires companies would be spamming any and
| everything that gets moderately popular. And Disney can put
| advertise the original artist easily.
| juunpp wrote:
| My statement above is that their activities are illegal and
| wrong, not that one implies the other. They are illegal
| because of the copyright violation, and wrong because
| regardless of what the law says, using the images for
| training despite the artists' every appeal to the contrary
| (being vocal about the issue, putting a robots.txt file to
| avoid scraping, and now using adversarial techniques to
| protect their work from being stolen) is just moronic. It's
| like shitting on their front yard when they've asked you a
| million times not to shit in the front yard, put a sign
| that says "Please don't shit on my front yard", and sprayed
| insecticide all over the grass to try to deter you from
| shitting on the front yard. And yet you still shit on their
| front yard and even have the balls to argue that there's
| nothing wrong or illegal about it. This is absolutely
| insane.
| fiddlerwoaroof wrote:
| > This is simple law.
|
| "One may well ask: 'How can you advocate breaking some laws
| and obeying others?' The answer lies in the fact that there
| are two types of laws: just and unjust. I would be the first
| to advocate obeying just laws. One has not only a legal but a
| moral responsibility to obey just laws. Conversely, one has a
| moral responsibility to disobey unjust laws. I would agree
| with St. Augustine that 'an unjust law is no law at all.'"
| jazzyjackson wrote:
| Well, fine, but you'll have to claim that copyright is
| unjust and that you are breaking the law as an act of civil
| disobedience. The AI corps do not want to take this stance
| as they also have intellectual property to protect. Classic
| "have their cake and eat it too" scenario.
| chefandy wrote:
| Not everybody equates automated scraping for training models
| and human experience. Just like any other "data wants to be
| free" type of discussion, the philosophical and ethical
| considerations are anything but cut-and-dried, and they're far
| more consequential than the technical and economics-in-a-vacuum
| ones. The general public will quite possibly see things
| differently than the "oh well, artists-- that's the free market
| for ya, and you lost" crowd.
| kmeisthax wrote:
| We're not trying to keep the AI from learning general ideas,
| we're trying to keep it from memorizing specific
| expressions[0]. There's a growing body of research to show that
| these models are doing a lot of memorizing, even if they're not
| regurgitating that data. For example, Google's little "ask GPT
| to repeat a word forever" trick, which will make GPT-4 spit out
| verbatim training data[1].
|
| If there was a training process that let us pick a minimal
| sample of examples and turn it into a general purpose art
| generator or text generator, I think people would have been
| fine with that. But that's not what any of these models do.
| They were trained on shittons of creative expression, and
| there's statistical evidence that the models retain that
| expression, in a way that is fundamentally different from how
| humans remember, misremember, adapt, remix, and/or "play around
| with" other people's creativity.
|
| [0] You called these "embodiments", but I believe you're trying
| to invoke the idea/expression divide, so I'll run with that.
|
| [1] Or at least it did. OpenAI now filters out conversations
| that trip the bug.
| wredue wrote:
| Kick ass.
|
| I now declare that I own Fortnite.
|
| Where's my money, Epic?
| gfodor wrote:
| Huge market for snake oil here. There is no way that such tools
| will ever win, given the requirements the art remain viewable to
| human perception, so even if you made something that worked
| (which this sounds like it doesn't) from first principles it will
| be worked around immediately.
|
| The only real way for artists or anyone really to try to hold
| back models from training on human outputs is through the law,
| ie, leveraging state backed violence to deter the things they
| don't want. This too won't be a perfect solution, if anything it
| will just put more incentives for people to develop decentralized
| training networks that "launder" the copyright violations that
| would allow for prosecutions.
|
| All in all it's a losing battle at a minimum and a stupid battle
| at worst. We know these models can be created easily and so they
| will, eventually, since you can't prevent a computer from
| observing images you want humans to be able to observe freely.
| AJ007 wrote:
| The level of claims accompanied by enthusiastic reception from
| a technically illiterate audience make it sound, smell, and
| sound like snake oil without much deep investigation.
|
| There is another alternative to the law. Provide your art for
| private viewing only, and ensure your in person audience does
| not bring recording devices with them. That may sound absurd,
| but it's a common practice during activities like having sex.
| gfodor wrote:
| True I can imagine that kind of thing becoming popular.
| Art9681 wrote:
| This would just create a new market for art paparazzis who
| would find any and all means to inflitrate such private
| viewings with futuristic miniature cameras and other sensors
| and selling it for a premium. Less than 24 hours later the
| files end up on hundreds or thousands of centralized and
| decentralized servers.
|
| I'm not defending it. Just acknowledging the reality. The
| next TMZ for private art gatherings is percolating in
| someone's garage at the moment.
| jurassic wrote:
| I find this difficult to believe; no matter how small your
| camera is, photography is about light. Art reproduction
| photography is surprisingly hard to do if you care about
| the quality of the end result. Unless you can
| surreptitiously smuggle in a studio lighting setup, tripod,
| and color checker card... sure you can take an image in
| secret, but not one that is a good representation of the
| real thing.
| Gormo wrote:
| That doesn't sound like a viable business model. There seems
| to be a non-trivial bootstrap problem involved -- how do you
| become well-known enough to attract audiences to private
| venues in sufficient volume to make a living? -- and would in
| no way diminish demand for AI-generated artwork which would
| still continue to draw attention away from you.
| wraptile wrote:
| The thing is people want the benefits of having their stuff
| public but not bear the costs. Scraping has been mostly a
| solved problem especially when it comes to broad crawling.
| Put it under a login, there, no more AI "stealing" your work.
| 946789987649 wrote:
| Is that login statement strictly true? Unless the login is
| paid, there's no reason we can't get to (if not already
| there) the point where the AI scraper can just create a
| login first.
| Tade0 wrote:
| But then you can rate-limit to a point where scraping
| everything will take a considerable amount of time.
|
| Of course the workaround would be to have multiple
| accounts, but that in turn can be made unscalable with a
| "prove you're human" box.
| csydas wrote:
| you are not incorrect that this would help mitigate, but
| it still misses a few key points I think regarding why
| artists are upset about AI generation
|
| - This is still vulnerable to stuff like mturk or even
| just normal users who did get past the anti-bot things
| pulling and re-uploading the content elsewhere that is
| easier for the AI companies to use
|
| - The artists' main contention is that the AI companies
| shouldn't be allowed to just use whatever they find
| without confirm they have a license to use the content in
| this way
|
| - If someone's content _does_ get into an AI model and
| it's determined somehow (I think there is a case with a
| news paper and chatGPT over this very issue?), the legal
| system doesn't really have a good framework for this
| situation right now -- is it copyright infringement?
| (arguably not? it's not clear) is it plagiarism?
| (arguably yes, but plagiarism in US court system is very
| hard to proof/get action on) is it license violation?
| (for those who use licenses for their art, probably yes,
| but it's the same issue as plagiarism -- how to prove it
| effectively?)
|
| Really what this comes down to is that the AI companies
| use the premise that they have a right to use someone
| else's works without consent for the AI training. While
| your suggestions are technically correct, it puts the
| impetus on the artists that they must do something
| different because the AI companies are allowed to train
| their models as they currently do without recourse for
| the original artist. Maybe that will be ruled true in the
| future I don't know, but I can absolutely get why artists
| are upset about this premise shaping the discussion on AI
| training, as such a premise negates their rights as an
| artist and many artists have 0 path for recourse. I'm
| pretty sure that OpenAI wouldn't think about scraping a
| Disney movie from a video upload site just because it's
| open access since Disney likely can fight in a more
| meaningful way. I would agree with artists who are
| complaining that they shouldn't need to wait for a big
| corporation to decide that this behavior is undesirable
| before real action is taken, but it seems that is going
| to be what is needed. It might be reality, but it's a
| very sad reality that people want changed.
| wraptile wrote:
| No, eforcing click-wrap legal agreements is actually
| possible. With basic KYC the scraper would instantly open
| up itself for litigation and no internet art piece is
| frankly worth this sort of trouble.
| csydas wrote:
| I don't think that's true at all. Images and text get
| reposted with or without consent, often without
| attribution. It wouldn't make it right for the AI companies
| to scrape when the original author doesn't want that but
| someone else has ignored their wishes and requirements.
| Basically, what good is putting your stuff behind login or
| some other restrictive viewing method if someone just saves
| the image/text? I think it's still a relatively serious
| problem for people creating things. And without some form
| of easy access to viewing, the people creating things don't
| get the visibility and exposure they need to get an
| audience/clients.
|
| This is one the AI companies should offer the olive branch
| on IMO, there must be a way to use stenography to
| transparently embed a "don't process for AI" code into an
| image or text or music or any other creative work that
| won't be noticeable by humans, but the AI would see if it
| tried to process the content for training. I think it would
| be a very convenient answer and probably not be detrimental
| to the AI companies, but I also imagine that the AI
| companies would not be very eager to spend the resources
| implementing this. I do think they're the best source for
| such protections for artists though.
|
| Ideally, without a previous written agreement for a dataset
| from the original creators, the AI companies probably
| shouldn't be using it for training at all, but I doubt that
| will happen -- the system I mention above should be _opt-
| in_, that is, you must tag such content that is free to be
| AI trained in order for AI to be trained on it, but I have
| 0 faith that the AI companies would agree to such a self-
| limitation.
|
| edit: added mention to music and other creative works in
| second paragraph 1st sentence
|
| edit 2: Added final paragraph as I do think this should be
| opt-in, but don't believe AI companies would ever accept
| this, even though they should by all means in my opinion.
| amlib wrote:
| Here are my 2 cents, I think we will need some laws
| specifying two types of AI models, ones trained with full
| consent (opt-in) for its training material and ones
| without. The first one would be like Adobe's firefly
| model where they allegedly own everything they trained it
| with, or something where you go around asking for consent
| for each thing in your training corpus (probably
| unfeasible for large models). Maybe things in the public
| domain would be ok to train with. In this case there are
| no restrictions and the output from such models can even
| be copyrighted.
|
| Now for the second type, representing models such as
| Stable Difusion and Chat GPT, it would be required to
| have their trained model freely available to anyone and
| any resulting output would not be copyrightable. It may
| be a more fairer way of allowing anyone to harness the
| power of AI models that contain essentially the knowledge
| of all man kind, but without giving any party an unfair
| monopoly on it.
|
| This should be easily enforceable for big corporations,
| else it would be too obvious if they are trying to pass
| one type model as another or even keep the truth about
| their model from leaking. It might not be as easy to keep
| small groups or individuals from breaking those rules,
| but hey, at least it evens the playing field.
| thfuran wrote:
| >There is no way that such tools will ever win, given the
| requirements the art remain viewable to human perception
|
| On the other hand, the adversarial environment might push
| models towards a representation more aligned with human
| perception, which is neat.
| aqfamnzc wrote:
| The ol' Analog Gap. https://en.m.wikipedia.org/wiki/Analog_hole
| Reubend wrote:
| > Huge market for snake oil here.
|
| This tool is free, and as far as I can tell it runs locally. If
| you're not selling anything, and there's no profit motive, then
| I don't think you can reasonably call it "snake oil".
|
| At worst, it's a waste of time. But nobody's being deceived
| into purchasing it.
| autoexec wrote:
| If this is a danger from "snake oil" of this type, it'd be
| from the other side, where artists are intentionally tricked
| into believing that tools like this mean that AI isn't or
| won't be a threat to their copyrights in order to get them to
| stop opposing it so strongly, when in fact the tool does
| nothing to prevent their copyrights from being violated.
|
| I don't think that's the intention of Nightshade, but I
| wouldn't put past someone to try it.
| Biganon wrote:
| There's an academic paper being published.
|
| Snake oil for the sake of getting published is a very real
| problem that does exist.
| golol wrote:
| Religion is also deceptive and snake-oil even if it does not
| involve profit driven motivations.
| NoahKAndrews wrote:
| It very often does involve such motivations, though I agree
| with your larger point.
| jedberg wrote:
| Everything old is new again. It's the same thing with any DRM
| that happens on the client side. As long as it's viewable by
| humans, someone will figure out a way to feed that into a
| machine.
| AlfeG wrote:
| My guess. Is that at some poi t of time You will not be able to
| use any generated image or video in commercial. Because of 100%
| copyright claim for using parts of copyrighted image. Like
| YouTube those days. When some random beeps matches with someone
| music...
| abrarsami wrote:
| It should be like that. I agree
| spaceman_2020 wrote:
| This is the hard reality. There is no putting this genie back
| in the bottle.
|
| The only way to be an artist now is to have a unique style of
| your own, and to never make it online.
| hutzlibu wrote:
| "and to never make it online."
|
| So then of course, you also cannot sell your work, as those
| might put it online. And you cannot show your art to big
| crowds, as some will make pictures and put it online. So ...
| you can become a literal underground artists, where only some
| may see your work. I think only some will like that.
|
| But I actually disagree, there are plenty of ways to be an
| artist now - but most should probably think about including
| AI as a tool, if they still want to make money. But with the
| exception of some superstars, most artists are famously low
| on money - and AI did not introduce this. (all the
| professional artists I know, those who went to art school -
| do not make their income with their art)
| sabedevops wrote:
| Can you elaborate on how they supplement their income?
| hutzlibu wrote:
| Every other source of income? So other, art-unrelated
| jobs.
| BeFlatXIII wrote:
| GP almost certainly mean "make physical art." Pictures of
| that can get online, but it's not the real thing.
| jMyles wrote:
| > leveraging state backed violence to deter the things they
| don't want
|
| I just want to say: I really appreciate the stark terms in
| which you've put this.
|
| The thing that has come to be called "intellectual property" is
| actually just a threat of violence against people who arrange
| bytes in a way that challenges power structures.
| vmirnv wrote:
| I'm thinking -- is it possible to create something on a global
| level similar to what they did in Snapchat: some sort of image
| flickering that would be difficult to parse, but still
| acceptable for humans?
| honkycat wrote:
| "A law, ie, leveraging state backed violence to deter the
| things they don't want."
|
| We all know what a law is you don't need to clarify. It makes
| your prose less readable.
| minimaxir wrote:
| A few months ago I made a proof-of-concept on how finetuning
| Stable Diffusion XL on known bad/incoherent images can actually
| allow it to output "better" images if those images are used as a
| negative prompt, i.e. specifying a high-dimensional area of the
| latent space that model generation should stay away from:
| https://news.ycombinator.com/item?id=37211519
|
| There's a nonzero chance that encouraging the creation of a large
| dataset of known tampered data can ironically _improve_
| generative AI art models by allowing the model to recognize
| tampered data and allow the training process to work around it.
| k__ wrote:
| What are LLMs that was trained with public domain content only?
|
| I would believe there is enough content out there to get
| reasonably good results.
| squidbeak wrote:
| I really don't understand the anxiety of artists towards AI - as
| if creatives haven't always borrowed and imitated. Every leading
| artist has had acolytes, and while it's true no artist ever had
| an acolyte as prodigiously productive as AI will be, I don't see
| anything different between a young artist looking to Picasso for
| cues and Stable Diffusion or DALL-E doing the same. Styles and
| methods haven't ever been subject to copyright - and art would
| die the moment that changed.
|
| The only explanation I can find for this backlash is that artists
| are actually worried just like the rest of us that pretty soon AI
| will produce higher quality more inventive work faster and more
| imaginatively than they can - which is very natural, but not a
| reason to inhibit an AI's creative education.
| beepbooptheory wrote:
| This has been litigated over and over again, and there have
| been plenty of good points made and concerns raised over it by
| those who it actually affects. It seems a little bit
| disingenuous (especially in this forum) to say that that
| conclusion is the "only explanation" you can come up with. And
| just to avoid prompting you too much: trust me, we all know or
| can guess why you think AI art is a good thing regardless of
| any concerns one might bring up.
| jwells89 wrote:
| Imitation isn't the problem so much as it is that ML generated
| images are composed of a mush of the images it was trained on.
| A human artist can abstract the concepts underpinning a style
| and mimic it by drawing all-new lineart, coloration, shading,
| composition, etc, while the ML model has to lean on blending
| training imagery together.
|
| Furthermore there's a sort of unavoidable "jitter" in human-
| produced art that varies between individuals that stems from
| vastly different ways of thinking, perception of the world,
| mental abstraction processes, life experiences, etc. This is
| why artists who start out imitating other artists almost always
| develop their imitations into a style all their own -- the
| imitations were already appreciably different from the original
| due to the aforementioned biases and those distinctions only
| grow with time and experimentation.
|
| There would be greatly reduced moral controversy surrounding ML
| models if they lacked that mincemeat/pink slime aspect.
| will5421 wrote:
| I think the artists need to agree to stop making art altogether.
| That ought to get people's attention. Then the AI people might
| (be socially pressured or legally forced to) put their tools
| away.
| CatWChainsaw wrote:
| No, they'll just demand that artists produce more art so they
| can continue scraping, because if you work in tech you're
| allowed to be entitled, you're The Face Of The Future and all
| you're trying to do is Save The World, all these decels are
| just obstacles to be destroyed.
| Zetobal wrote:
| Well, at least for sdxl it's not working neither in LoRa nor
| dreambooth finetunes.
| chris-orgmenta wrote:
| I want _progressive fees_ on copyright /IP/patent usage, and
| worldwide gov cooperation/legislation (and perhaps even worldwide
| ability to use works without obtaining initial permission,
| although let's not go into that outlandish stuff)
|
| I want a scaling license fee to apply (e.g. % pegged to revenue.
| This still has an indirect problem with different industries
| having different profit margins, but still seems the fairest).
|
| And I want the world (or EU, then others to follow suit) to
| slowly reduce copyright to 0 years* after artists death if owned
| by a person, and 20-30 years max if owned by a corporation.
|
| And I want the penalties for not declaring usage** / not paying
| fees, to be incredibly high for corporations... 50% gross
| (harder) / net (easier) profit margin for the year? Something
| that isn't a slap on the wrist and can't be wriggled out of
| _quite_ so easily, and is actually an incentive not to steal in
| the first place.)
|
| [*]or whatever society deems appropriate.
|
| [**]Until auto-detection (for better or worse) gets good enough.
|
| IMO that would allow personal use, encourages new entrants to
| market, encourages innovation, incentivises better behaviour from
| OpenAI et al.
| Dylan16807 wrote:
| > And I want the world (or EU, then others to follow suit) to
| slowly reduce copyright to 0 years* after artists death if
| owned by a person, and 20-30 years max if owned by a
| corporation.
|
| Why death at all?
|
| It's icky to trigger soon after death, it's bad to have
| copyright vary so much based on author age, and it's bad for
| many works to still have huge copyright lengths.
|
| It's perfectly fine to let copyright expire during the author's
| life. 20-30 years for everything.
| wraptile wrote:
| Extremely naive to think that any of this could be enforced to
| any adequate level. Copyright is fundamentally broken and
| putting some plasters on it is not going to do much especially
| when these plasters are several decades too late.
| whywhywhywhy wrote:
| Why are there no examples?
| arisAlexis wrote:
| Wouldn't this be applicable to text too?
| matteoraso wrote:
| Too little, too late. There's already very large high quality
| datasets to train AI art generators.
| eigenvalue wrote:
| This seems like a pretty pointless "arms race" or "cat and mouse
| game". People who want to train generative image models and who
| don't care about what artists think about it at all can just do
| some basic post-processing on the images that is just enough to
| destroy the very carefully tuned changes this Nightshade
| algorithm makes. Something like resampling it to slightly lower
| resolution and then using another super-resolution model on it to
| upsample it again would probably be able to destroy these subtle
| tweaks without making a big difference to a human observer.
|
| In the future, my guess is that courts will generally be on the
| side of artists because of societal pressures, and artists will
| be able to challenge any image they find and have it sent to yet
| another ML model that can quickly adjudicate whether the
| generated image is "too similar" to the artist's style (which
| would also need to be dissimilar enough from everyone else's
| style to give a reasonable legal claim in the first place).
|
| Or maybe artists will just give up on trying to monetize the
| images themselves and focus only on creating physical artifacts,
| similar to how independent musicians make most of their money
| nowadays from touring and selling merchandise at shows (plus
| Patreon). Who knows? It's hard to predict the future when there
| are such huge fundamental changes that happen so quickly!
| hackernewds wrote:
| the point is you could circumvent one nightshade, but as long
| as the cat and mouse game continues there can be more
| johnnyanmac wrote:
| >Or maybe artists will just give up on trying to monetize the
| images themselves and focus only on creating physical
| artifacts, similar to how independent musicians make most of
| their money nowadays from touring and selling merchandise at
| shows (plus Patreon).
|
| As is, art already isn't a sustainable career for most people
| who can't get a job in industry. The most common monetization
| is either commissions or hiding extra content behind a pay
| wall.
|
| To be honest I can see more proverbial "Furry artists"
| sprouting up in a cynical timeline. I imagine like every other
| big tech that the 18+ side of this will be clamped down hard by
| the various powers that be. Which means NSFW stuff will be
| shielded a bit by the advancement and you either need to find
| underground training models or go back to an artist. .
| Gigachad wrote:
| >need to find underground training models
|
| It's not particularly that hard. The furry nsfw models are
| already the most well developed and available models you can
| get right now. And they are spitting out stuff that is almost
| indistinguishable from regular art.
| raincole wrote:
| > This seems like a pretty pointless "arms race" or "cat and
| mouse game".
|
| If there is any "point" of this, it's that's going to push the
| AI models to become _better_ at capturing how humans see
| things.
| jMyles wrote:
| > musicians make most of their money nowadays from touring and
| selling merchandise at shows
|
| Be reminded that this is - and has always been - the mainstream
| model of the lineages of what have come to be called
| "traditional" and "Americana" and "Appalachian" music.
|
| The Grateful Dead implemented this model with great finesse,
| sometimes going out of their way to eschew intellectual
| property claims over their work, in the belief that such claims
| only hindered their success (and of course, they eventually
| formalized this advocacy and named it "The Electronic Frontier
| Foundation" - it's no coincidence that EFF sprung from deadhead
| culture).
| jdeaton wrote:
| Sounds like free adversarial data augmentation.
| Devasta wrote:
| Delighted to see it. Fuck AI art.
| aussieguy1234 wrote:
| The image generation models now are at the point where they can
| produce their own synthetic training images. So I'm not sure how
| big of an impact something like this would have.
| matt3210 wrote:
| Put a TOC on all your content that says "by using my content for
| AI you agree to pay X per image" and then send them a bill once
| you see it in an AI.
| wruza wrote:
| Once you see what exactly? "AI" isn't some image filter from
| the early 2000s.
| paul7986 wrote:
| Any AI art/video/photography/music/etc generator company who
| generates revenue needs to add watermarks to let the public know
| its AI generator. This should be forced via legislation in all
| countries.
|
| If they don't then whatever social network or other services
| where things can shared/viewed by large groups to millions & are
| posted publicly need to be labeled "We can not verify veracity of
| this content."
|
| I want a real internet ..this AI stuff is just triple fold
| increasing fake crap on the Internet and in turn / time our trust
| in it!
| mmaunder wrote:
| Trying to convince an AI it sees something and a human they don't
| is probably a losing battle.
| ngneer wrote:
| I love it. This undermines the notion of ground truth. What
| separates correct information from incorrect information? Maybe
| nothing! I love how they acknowledge the never ending attack
| versus defense game. In stark contrast to "our AI will solve all
| your problems".
| 24karrotts_ wrote:
| If you decrease quality of art, you give AI all the advantage in
| the market.
| iLoveOncall wrote:
| I wonder if this is illegal in some countries. In France for
| example, there is the following law: "Obstructing or distorting
| the operation of an automated data processing system is
| punishable by five years' imprisonment and a fine of
| EUR150,000.".
|
| If you ask me, this is 100% applicable in this case, so I wonder
| what a judge would rule.
| snerc wrote:
| I wonder if we know enough about any of these systems to make
| such claims. This is all predicated on the fact that this tool
| will be in widespread use. If it is somehow widely used beyond
| the folks who have seen it at the top of HN, won't the big firms
| have countermeasures, ready to deploy?
| mattszaszko wrote:
| This timeline is getting quite similar to the second season of
| Pantheon.
| nnevatie wrote:
| The intention is good, from an AI-opponent's perspective. I don't
| think will work practically, though. The drawbacks for actual
| users of the image galleries, plus the level of complexity
| involved in poisoning the samples makes this unfeasible to
| implement at the scale required.
| ThinkBeat wrote:
| In so far as anger goes against AIs being trained on particular
| intellectual properties.
|
| A made up scenario1 is that a person who is training an AI, goes
| to the local library and checks out 600 books on art. The person
| then lets the AI read all of them. After which they are returned
| to the library and another 600 books are borrowed
|
| Then we can imagine the AI somehow visiting a lot of museums and
| galleries.
|
| The AI will now have been trained on the style and looks of a lot
| of art from different artists
|
| All the material has been obtained in a legal manner.
|
| Is this an acceptable use?
|
| Or can an artist still assert that the AI was trained with their
| IP without consent?
|
| Clearly this is one of the ways a human would go about learning
| about styles, techniques etc..
|
| 1 Yes you probably cannot borrow 600 books at a time. How does
| the AI read the books? I dont know. Simplicity would be that the
| researcher takes a photo of each page. This would be extremmly
| slow but for this hypothetical it is acceptable.
| nanofus wrote:
| I think the key difference here is that the most prominent
| image generation AIs are commercial and for-profit. The
| scenarios you describe are comparing a commercial AI to a
| private person. You cannot get a library card for a company,
| and you cannot bring a photography crew to a gallery without
| permission.
| drdrek wrote:
| Only protection is adding giant gaping vaginas to your art,
| nothing less will deter scraping. If the Email spam community
| showed us something in the last 40 years is that no amount of
| defensive tech measures will work except financial disincentives.
| rvba wrote:
| The opening website is so poor - "what is nightshade" - then a
| whole paragraph that tells nothing, then another paragraph.. then
| no examples. This whole description should be reworked to be
| shorter and more to the point.
| paulsutter wrote:
| Cute. The effectiveness of any technique like this will be short-
| lived.
|
| What we really need is clarification of the extent that copyright
| protection extends to similar works. Most likely from an AI
| analysis of case law.
___________________________________________________________________
(page generated 2024-01-21 23:01 UTC)