[HN Gopher] Getty Images v. Stability AI - Complaint
       ___________________________________________________________________
        
       Getty Images v. Stability AI - Complaint
        
       Author : toss1
       Score  : 188 points
       Date   : 2023-02-05 19:58 UTC (3 hours ago)
        
 (HTM) web link (copyrightlately.com)
 (TXT) w3m dump (copyrightlately.com)
        
       | 29athrowaway wrote:
       | Getty Images is forced to do it.
       | 
       | Building a collection of stock images is the predecessor to
       | generating images from text.
        
       | kgwgk wrote:
       | "Making matters worse, Stability AI has caused the Stable
       | Diffusion model to incorporate a modified version of the Getty
       | Images' watermark to bizarre or grotesque synthetic imagery that
       | tarnishes Getty Images' hard-earned reputation"
       | 
       | Not sure about their reputation but they have a point with the
       | bizarre/grotesque thing.
        
         | [deleted]
        
         | [deleted]
        
       | NoZebra120vClip wrote:
       | Nevermind Getty, I wonder how contributors who use Creative
       | Commons licenses will feel. Anyone who's uploaded to Flickr, or
       | Wikimedia Commons, or even YouTube could be victimized by AI
       | generation.
       | 
       | The AI will launder the content just like GitHub's CoPilot, and
        | attribution will be impossible on the other end. Creative
        | Commons licenses are not PD: they often require attribution
        | (CC-BY), prohibit commercial usage (CC-NC), require that
        | derivatives be licensed the same way (CC-SA), or even prohibit
        | derivatives outright (CC-ND). All of those requirements are
        | going to be stomped into dust by generative AI.
       | 
       | And those licensors won't be big enough to sue anyone.
        
         | celestialcheese wrote:
         | This is a strange take. Folks who choose to use a CC license do
         | so because they want their work in the public and to be "Use(d)
         | & Remix(ed)" https://creativecommons.org/use-remix/. The
         | creative commons is very clearly designed to encourage reuse,
         | remix and sharing, and the practice of suing over not following
         | terms to the letter is a gross warping of the license that's
         | happened over the last few years with copyright trolls. The
         | Creative Commons organization has explicitly called this out.
         | [1]
         | 
          | But I would argue that Stable Diffusion, with the open-
          | sourcing of its model weights and its use of the LAION
          | dataset (which is released under CC-BY 4.0), would likely
          | meet both the letter and the intent of the license.
          | https://wiki.creativecommons.org/wiki/CC_Attribution-ShareAl...
         | 
         | 1 - https://creativecommons.org/2022/02/08/copyleft-trolls/
        
           | NoZebra120vClip wrote:
            | Are you trying to tell us here that if someone refuses to
            | conform to the terms of my Creative Commons license, such
            | as attribution, I would be wrong to sue them over
            | copyright violation? Folks who use specific CC licenses
            | want the licensees to abide by those terms, and we can
            | legally enforce that compliance. Content creators are not
            | copyright trolls, so please do not tar them all with the
            | same brush.
        
             | LegitShady wrote:
             | you could sue them but there are basically no damages, so
             | it wouldn't matter.
        
             | celestialcheese wrote:
             | Of course not all content creators are trolls. The jump to
             | sue and the assumption that all of this is somehow
             | victimizing everyone who has released content in the CC
             | license is troll-like thinking.
             | 
             | Like I said before, I believe there's a strong argument
             | that Stability AI / LAION's use of CC-{BY/SA/ND} is likely
             | allowed under the terms of all CC licenses due to the works
             | being shared without alteration and with attribution,
             | released under CC-BY-SA 4.0 (LAION) and the Stable
             | Diffusion model being released under a permissive license
             | (CreativeML Open RAIL-M).
             | 
              | The real question is whether the images generated by
              | the models need to provide attribution to every single
              | weight involved in generating that image. That's a lot
              | more complicated and unclear, but it quickly gets into
              | questions like "Should artistic style be
              | copyrightable?" and "What amount of source material is
              | required to constitute a copyrighted work?". But as of
              | right now, I don't see how any of this violates the
              | letter or intent of CC 4.0.
        
       | wizzwizz4 wrote:
       | Direct link: https://copyrightlately.com/wp-
       | content/uploads/2023/02/Getty...
        
       | extheat wrote:
       | Copyrighted content on the internet isn't a free for all for
       | people to train models with. No matter the good intentions, if
       | Stability AI didn't take reasonable steps to remove copyrighted
       | data from the training set then IMO they have an uphill battle to
       | prove the case to a jury (if it gets there).
        
         | visarga wrote:
          | Good. How about training on variations of real images?
          | Variations should not be copyrightable since they contain
          | no human input, and they should be sufficiently different
          | from the originals. So the trained model can't possibly
          | reproduce any original exactly, because it hasn't seen one.
        
       | celestialcheese wrote:
       | I wonder if Getty and Stability AI were ever in negotiation for
       | licensing their work and this lawsuit is fallout.
       | 
       | Getty announced in Oct they partnered with BRIA (?) to provide
       | generative AI tools using their licensed images [1], and
       | Shutterstock announced a partnership with OpenAI [2].
       | 
       | So it's clear these rights holders are OK with generative AI, as
       | long as they continue to extract their pound of flesh. The
       | language around "protecting artists" is horseshit - if you're a
       | creative and you see Disney, Getty, etc getting behind your
       | cause, you should look _very_ carefully around and make sure
       | you're not the one being screwed.
       | 
       | 1 - https://newsroom.gettyimages.com/en/getty-images/bria-
       | partne... 2 -
       | https://www.theverge.com/2022/10/25/23422359/shutterstock-ai...
        
         | madeofpalk wrote:
         | It's not hypocritical to be upset at someone else stealing your
         | content to sell it, just because you sell it yourself. That's
         | probably a part of why you're upset - because you don't get the
         | money.
         | 
          | My understanding is that _in general_, artists aren't
          | necessarily against generative AI, but their complaint is
          | (partly) around a complete lack of consent to being a part
          | of these training models.
        
         | jalev wrote:
         | From the article it might be read as "Stability AI didn't even
         | bother attempting to reach out"
         | 
         | > Rather than attempt to negotiate a license with Getty Images
         | for the use of its content, and even though the terms of use of
         | Getty Images' websites expressly prohibit unauthorized
         | reproduction of content for commercial purposes such as those
         | undertaken by Stability Al, Stability AI has copied at least 12
         | million copyrighted images from Getty Images' websites, along
         | with associated text and metadata, in order to train its Stable
         | Diffusion model.
        
         | rosywoozlechan wrote:
         | > as they continue to extract their pound of flesh
         | 
          | Being compensated for the use of their content is maybe a
          | more accurate and friendlier way to phrase this, but I
          | guess not as edgy.
        
           | Aeolun wrote:
           | Is it truly their content? It's private people uploading
           | these pictures to these sites right?
        
             | notahacker wrote:
             | Private people who agreed that Getty would manage their
             | distribution and compensate them when their image was used
             | by one of Getty's clients, yes.
             | 
             | So yeah, not only is it very much Getty's content to
             | distribute, but Stability AI is absolutely screwing over
             | thousands of little guys by not paying royalties too...
        
           | hobobaggins wrote:
           | It's actually a fairly common idiom in English, dating prior
           | to Shakespeare:
           | 
           | https://en.wikipedia.org/wiki/Pound_of_Flesh
        
           | [deleted]
        
         | ninth_ant wrote:
          | I don't feel like it's even slightly contradictory for
          | Getty to simultaneously be OK with licensing their content
          | to someone for AI modeling while not being OK with a random
          | unaffiliated company using their IP. It feels like pretty
          | straightforward copyright management.
         | 
         | "Protecting artists" is not something they claimed as if they
         | were opposed to AI usage at all. In their press release they
         | said:
         | 
         | > It is Getty Images' position that Stability AI unlawfully
         | copied and processed millions of images protected by copyright
         | and the associated metadata owned or represented by Getty
         | Images absent a license to benefit Stability AI's commercial
         | interests and to the detriment of the content creators.
         | 
         | It's clear from this that the issue was not that Stability is
         | an AI company, but that it's unlicensed. Getting an exclusive
         | license to the images is specifically what they pay
         | contributors for. Having those copyrights infringed by a
         | competitor makes the content less valuable to Getty, and
         | disincentivizes Getty from paying for new content in the
         | future.
         | 
          | So yeah, it's plausible that this behaviour from Stability
          | could harm content creators. Not because it's AI, but
          | because it's just run-of-the-mill unauthorized usage.
        
       | jbenjoseph wrote:
       | I don't see this working for Getty but it will be interesting to
       | see. Israel and the EU have preemptively determined that the use
       | of copyrighted data to train ML models is almost always fair use,
       | and I believe the Authors Guild case in the USA also sets strong
       | precedent there. That really leaves the UK, and they are likely
       | to go with everyone else.
        
       | QuantumGood wrote:
        | A problem is that visual artists could be paid a lower
        | licensing fee for the use of their art to train an AI than
        | they would earn in per-image fees from end users.
        
       | KKKKkkkk1 wrote:
       | Let's say that a human painter learned to paint by reproducing
       | copyrighted works. Would the copyright owners have a claim on
       | that painter's work after he finished learning?
        
         | LegitShady wrote:
         | but they're not humans, so it doesn't matter.
        
         | visarga wrote:
          | Historically, apprentices had to work for years in their
          | master's shop, and their work was passed off as belonging
          | to their master.
        
           | ChrisMarshallNY wrote:
           | There's a couple of Mona Lisas. I think that these were done
           | by Da Vinci's apprentices.
        
         | j-bos wrote:
          | This argument has been falling flat for a while (to me!),
          | and now I think I can articulate why. This isn't a human;
          | until AI systems are granted personhood, this is a tool,
          | regardless of how it works under the hood. So for me the
          | question is: would the user (trainer) be allowed to view
          | all the source material? Yes? Then would the user/trainer
          | be allowed to produce content based off the source data?
          | Seems like yes, as long as the actual minimally-modified or
          | unmodified sources are not "copied" into the model.
          | 
          | Of course copyright is an abstract legal tool, so no
          | argument is worth anything until it's codified into
          | law/precedent.
        
         | vesinisa wrote:
          | While the algorithm can simulate human learning, surely its
          | outputs are not _original_ copyrightable works. Originality
          | seems to require human action by definition, and it is what
          | distinguishes inspired works from copied ones.
          | 
          | If the outputs are not original, AFAIK they must then be
          | derivative. Stable Diffusion could claim a fair use
          | exemption. But fair use too is just meant to protect
          | creativity, again a manifestly human activity.
         | 
         | I don't know which way I lean, but I sure know the courts will
         | soon have to make some very interesting rulings that will have
         | monumental importance.
         | 
         | Maybe A.I. generated "art" is an entirely new class of work and
         | lawmakers just need to rethink copyright for them.
        
       | braingenious wrote:
        | A lot of people are saying "clearly a violation of copyright"
        | and throwing around the term "derivative work" with the
        | confidence of a seasoned copyright lawyer, but ctrl+f shows
        | only _one single_ reference to the phrase _transformative
        | work_ in this comment thread.
        | 
        | It might be much more complicated than it appears on the
        | surface! For example, look up Richard Prince!
       | 
       | https://www.cnn.com/2015/05/27/living/richard-prince-instagr...
        
         | [deleted]
        
       | nickelpro wrote:
       | This is largely addressed in Authors Guild v Google, I'll be
       | curious to see if the Delaware court believes differently.
        
       | gamegoblin wrote:
       | Not related to this lawsuit, but related to AI art and copyright
       | in general. What do any HN lawyers make of this?
       | https://twitter.com/heyBarsee/status/1621411036426280961
       | 
       | Basically it's an AI tool that takes a copyrighted photo, and
       | produces an AI-produced photo that is "conceptually identical"
       | but not actually identical.
       | 
       | That is, given a photo of "an asian man lying in a grass field
       | surrounded by a semicircle of driftwood and crows", it would
       | produce a photo that had all of the same concepts, but just
       | slightly different execution on each of them.
       | 
       | The man's face is still asian but clearly a different person. The
       | driftwood is still in a semicircle, but the individual pieces are
       | all different. The crows are still there, but arranged slightly
       | differently. The grass is still grass, but no blades are the
       | same.
       | 
       | So it's essentially cloning the idea/concept/vibes of the target
       | image, but none of the actual "implementation details".
       | 
       | Does anyone have any intuition of the legal outlook on this?
       | 
       | On one hand, nothing is stopping me from seeing the copyrighted
       | photo and then recruiting a similar looking model, setting up a
       | photoshoot in a field with some stuffed crows, etc. I could
       | replicate what the AI is doing. It would be work, but I could do
       | it. The AI is just automating this.
       | 
       | On the other hand, the _actual stated intention_ of this tool is
       | to get around copyright. Seems sketchy.
        
         | jefftk wrote:
         | I doubt this works. For example, here's a case where someone
         | did essentially the same thing manually and lost:
         | https://en.wikipedia.org/wiki/Temple_Island_Collections_Ltd_...
        
         | ronsor wrote:
         | Using img2img to "royalty free"-ize an image is clearly
         | copyright infringement, and the output is a clear derivative
         | work of the original.
         | 
         | If you described the original image (or made CLIP do it) and
         | fed the description to txt2img generation, then you'd probably
         | be fine.
        
           | gamegoblin wrote:
            | Seems like it's a matter of time (or maybe it's already
            | done and I haven't seen it yet) before you get optimizers
            | that take an image and produce a seed+prompt that yields
            | a super conceptually similar image.
            | 
            | I think being able to selectively optimize for "magic"
            | random seeds in the diffusion algorithm will be kind of
            | critically important here. Different seeds can produce
            | very different images given the same prompt.
            | 
            | What if an optimizer can find seed+prompt combos that
            | clone just as well as img2img?
        
             | GaggiX wrote:
             | If you want to clone an image (for some reason) just encode
             | it in latent space and retrieve it using a deterministic
             | sampler such as DDIM. Or you can simply right-click and
             | copy.
             | 
             | If you instead don't want to clone an image, you can just
             | extract the CLIP image embeddings from it and use them to
             | condition a generative model like Dalle 2, Midjourney or
             | Karlo (open source). The CLIP embeddings extract really
             | well the semantic meaning of an image.
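The embedding-comparison idea described above can be sketched with toy vectors. This is an illustrative assumption, not real CLIP output: actual CLIP embeddings are 512- or 768-dimensional vectors produced by a trained encoder, and "semantic closeness" between images is typically measured as cosine similarity between their embeddings.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy stand-ins for CLIP image embeddings (hypothetical values; real
# vectors come from a trained encoder, not a random generator).
rng = np.random.default_rng(0)
original = rng.normal(size=512)
# A "conceptually similar" image: the original embedding plus small noise.
similar = original + 0.1 * rng.normal(size=512)
# An unrelated image: an independent random vector.
unrelated = rng.normal(size=512)

# The similar image scores much closer to the original than the
# unrelated one does.
print(cosine_similarity(original, similar) >
      cosine_similarity(original, unrelated))  # True in this toy setup
```

A generator conditioned on the `original` embedding would, by construction, aim for outputs whose own embeddings score high on this measure.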
        
               | gamegoblin wrote:
               | Going image -> vector space -> image _feels_ very much
               | like compression for the purpose of copyright. Like a
               | lower quality JPEG of some image still has the same
               | copyright properties of the original.
               | 
               | Something about going image -> prompt -> image _feels_
               | like it  "subverts" this somehow, even if the prompt is
               | hyper-optimized to recreate the original image.
               | 
               | Obviously, this is just my feel/impression of it, the
               | real test is how a jury feels about it.
               | 
                | The next few years will be really interesting in
                | exposing just how massive a gray area this is.
        
               | Ensorceled wrote:
               | > Something about going image -> prompt -> image feels
               | like it "subverts" this somehow,
               | 
               | That's only because you understand the algorithm in the
               | first example.
               | 
               | At a jury trial most, if not all, of the jurors will find
               | both methods equally opaque and, I believe, will treat
               | them as equivalent.
        
               | int_19h wrote:
               | I think the jurors would be aware (or at least acceptive)
               | of the notion that describing a picture using words does
               | not create a derived work. That is something a good
               | lawyer could make a show of, even to the point of having
               | an artist draw a picture from such a description.
               | 
               | The vectors are much more opaque because there's no
               | straightforward human equivalent.
        
               | oldgradstudent wrote:
               | > I think the jurors would be aware (or at least
               | acceptive) of the notion that describing a picture using
               | words does not create a derived work.
               | 
                | Describing to a human.
                | 
                | Describing it to Stable Diffusion is very different.
                | If you ask for a starry night, you get van Gogh's
                | Starry Night, sometimes with the original frame.
                | 
                | A prompt can easily trigger it to make a derivative
                | work from something in its training set, and it often
                | does.
                | 
                | Any competent expert will tell that to the jury and
                | easily demonstrate it again and again.
        
               | Ensorceled wrote:
               | My point is that if there is a prompt that results in a
               | picture that is a nearly identical copy, the average jury
               | member is going to think "yep, that's a copy".
               | 
               | Trying to explain how that "isn't really a copy" by
               | explaining AI concepts isn't going to win the day, not
               | when they can SEE the copy.
        
               | oldgradstudent wrote:
               | > That's only because you understand the algorithm in the
               | first example.
               | 
               | I can start with van Gogh's Starry Night, get the prompt
               | "starry night, van Gogh" and get Starry Night back.
               | 
               | I'm using Starry Night as an example because Stable
               | Diffusion consistently reproduces it, sometimes with the
               | original frame, even with vaguely related prompts.
               | 
               | I'd say the jury will be making the right decision.
               | Especially if the original image was part of the training
               | set.
        
               | joxel wrote:
               | This is a great example of how dumb our legal system is.
        
         | belorn wrote:
          | With cases like this I am reminded of the Pirate Bay case,
          | in that there are two ways people can be found guilty of
          | copyright infringement. One can prove that a copy has been
          | made, or one can convince a judge that the opposite story
          | provided by the defendant is not believable.
          | 
          | After that case there have been multiple theories on how to
          | evade copyright law, which all seem like they would equally
          | fail at convincing a judge. One of my favorites is the
          | method used by Freenet, which takes a file, first encrypts
          | it, and then splits it into very small parts. Those parts
          | are so small that multiple files will share identical parts
          | with each other, so it is impossible to know for sure which
          | file a person is downloading by just looking at the parts.
          | In a different channel they also provide a recipe for how
          | to reconstruct the file, and recipes by themselves are not
          | enough as evidence to prove a download.
          | 
          | Sounds perfect, until one would have to try to convince a
          | judge that no copying has occurred.
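The encrypt-then-split scheme described above can be sketched in a few lines. This is a toy illustration, not Freenet's actual CHK design: the block size, the XOR "encryption", and the recipe format are all invented for demonstration, and the keystream here is not cryptographically secure.

```python
import hashlib

BLOCK_SIZE = 4  # tiny for illustration; real systems use far larger blocks

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream from a key (illustrative only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def split_file(data: bytes, key: bytes):
    """Encrypt, then split into fixed-size blocks; return (blocks, recipe)."""
    ks = keystream(key, len(data))
    encrypted = bytes(a ^ b for a, b in zip(data, ks))
    blocks = [encrypted[i:i + BLOCK_SIZE]
              for i in range(0, len(encrypted), BLOCK_SIZE)]
    # The separately-distributed "recipe": key, length, and block order.
    recipe = {"key": key, "length": len(data),
              "order": list(range(len(blocks)))}
    return blocks, recipe

def reconstruct(blocks, recipe):
    """Reassemble and decrypt; the blocks alone are meaningless noise."""
    encrypted = b"".join(blocks[i] for i in recipe["order"])
    ks = keystream(recipe["key"], recipe["length"])
    return bytes(a ^ b for a, b in zip(encrypted, ks))

blocks, recipe = split_file(b"hello world", b"secret")
assert reconstruct(blocks, recipe) == b"hello world"
```

The legal theory sketched in the comment is that each block, being an indistinguishable fragment of ciphertext, proves nothing; only the recipe ties them back to the work.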
        
         | dahart wrote:
         | I was also tempted to quote the expression/idea dichotomy, but
         | looking at the examples, you're totally right: the example you
         | showed is aiming to land directly on the line between legal and
         | illegal, it is legitimately hard to reason about, it is
         | different than borrowing either the idea or the expression, and
         | it absolutely is sketchy (and will probably be tested in court
         | if it gets much more attention).
         | 
         | The problem here is that it does _more_ than just clone the
         | idea /concept/vibes, it really does tread into copying the
         | implementation details. It matches lighting & composition, it
         | matches subject and color, it can mimic the equipment used &
         | props. People have done this manually, and been sued for it.
         | Mostly it happens when an unknown artist steals the style of a
         | specific well-known, best-selling artist. But now we've built a
         | machine to near-copy anything in any style, with the intent of
         | borrowing as much of the expression as legally possible, which
         | seems like it probably can't end well from a legal perspective.
         | And because the technology for building these kind of machines
         | is essentially public knowledge now, it's hard to imagine this
         | won't be a problem from now on.
        
           | toss1 wrote:
            | Yup. Or even more clearly with example [0]. Given an
            | input request for an image of a person named "Ann Graham
            | Lotz", it returned the exact image in the training set,
            | slightly degraded.
           | 
           | MIT Tech Review reports research with hundreds of similar
           | results [1]. "The researchers, from Google, DeepMind, UC
           | Berkeley, ETH Zurich, and Princeton, got their results by
           | prompting Stable Diffusion and Google's Imagen with captions
           | for images, such as a person's name, many times. Then they
           | analyzed whether any of the images they generated matched
           | original images in the model's database. The group managed to
           | extract over 100 replicas of images in the AI's training set.
           | "
           | 
           | [0] https://news.yahoo.com/researchers-prove-ai-art-
           | generators-2...
           | 
           | [1] https://www.technologyreview.com/2023/02/03/1067786/ai-
           | model...
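The matching step in that research, i.e. checking whether a generated image near-duplicates a training image, can be approximated with a perceptual hash. This sketch uses a simple average-hash on random arrays standing in for grayscale images; it is an assumption for illustration, as the paper's actual membership test is more sophisticated than this.

```python
import numpy as np

def average_hash(img: np.ndarray, hash_size: int = 8) -> np.ndarray:
    """Downscale by block-averaging, then threshold at the mean (aHash)."""
    h, w = img.shape
    bh, bw = h // hash_size, w // hash_size
    small = img[:bh * hash_size, :bw * hash_size]
    small = small.reshape(hash_size, bh, hash_size, bw).mean(axis=(1, 3))
    return (small > small.mean()).flatten()

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    """Number of differing bits between two hashes."""
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(1)
training_image = rng.random((64, 64))  # stand-in for a training-set image
# A near-replica, like an extracted "memorized" output: tiny perturbation.
generated_copy = np.clip(
    training_image + 0.02 * rng.normal(size=(64, 64)), 0, 1)
unrelated = rng.random((64, 64))

# A small Hamming distance between the 64-bit hashes flags a likely copy;
# unrelated images land near half the bits apart.
print(hamming(average_hash(training_image), average_hash(generated_copy)))
print(hamming(average_hash(training_image), average_hash(unrelated)))
```

In the study's setup, a pipeline like this would be run over many generations per caption, flagging any output whose distance to a training image falls under a threshold.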
        
         | bee_rider wrote:
         | Not a lawyer but I'm under the impression that one of the
         | things they do in law school is spend a bunch of time
         | constructing increasingly ridiculous hypotheticals, to work out
         | the specifics of their arguments.
         | 
         | I think AI is overhyped in general, but as a tool to rapidly
         | instantiate absurd hypotheticals it is really impressive. This
         | is cool and good, IMO.
        
         | visarga wrote:
         | I thought copyright covers just expression, not the ideas. If
         | the model replicates the idea in a different way why should
         | there be an infringement?
        
           | rhizome wrote:
           | The "idea" at issue isn't like "a picture of a tree," though.
           | It's "a picture of a tree as conceived by Ansel Adams and
           | photographed with [technical details]."
        
             | dxbydt wrote:
             | "a picture of One WTC as conceived by Ansel Adams and
             | photographed with a Hasselblad" is still an idea, because
             | One WTC opened in 2014 & Mr. Adams died thirty years ago in
             | 1984.
        
         | jcranmer wrote:
         | Not a lawyer, but given that it takes the original picture as
         | an input, there's a very strong claim that it's a tool to
         | create a derivative work. At which point, it's time to ask the
         | question if this is going to be fair use...
         | 
          | Okay, but when you're _advertising_ your product as a way
          | to "get around copyright", you're going to so thoroughly
          | torpedo your credibility before the judge that there's no
          | point trying to analyze the fair use factors--the judge is
          | going to do whatever it takes to make them come out not in
          | your favor.
        
           | rhizome wrote:
           | That tagline certainly seems like it's gonna make their
           | "preponderance of evidence" an uphill battle!
        
           | shagie wrote:
           | > Not a lawyer, but given that it takes the original picture
           | as an input, there's a very strong claim that it's a tool to
           | create a derivative work.
           | 
           | If I take an image and then create a thumbnail of it would
           | that be an infringing derivative work?
           | 
           | https://www.eff.org/deeplinks/2006/02/perfect-10-v-google-
           | mo... and https://web.archive.org/web/20060813093554/https://
           | www.eff.o...
        
             | jcranmer wrote:
             | Thumbnails are _absolutely_ infringing... but they are fair
             | use in most cases!
             | 
             | Edit: on second thought, they might not even be derivative
             | works (as a derivative work requires the same spark of
             | creativity necessary for a copyrightable work), just
             | outright reproduction. But the point that they are fair use
             | still stands.
        
             | wizzwizz4 wrote:
              | It would unambiguously be a _derivative_ work. (IANAL,
              | but I'm willing to assert this.) I wouldn't think it
              | would be infringing: I'm not particularly familiar with
              | US law, but it seems like a classic case of fair use.
             | 
             | Note: the case you linked was appealed, and that ruling was
             | reversed.
             | https://www.eff.org/deeplinks/2007/05/p10-v-google-public-
             | in...
        
               | shagie wrote:
               | Yep... and from the link:
               | 
               | > Fortunately, the Court wasn't buying it. It rejected
               | Perfect 10's theory and found that until Perfect 10 gave
               | Google actual knowledge of specific infringements (e.g.
               | specific URLs for infringing images), Google had no duty
               | to act and could not be liable. It also held that Google
               | could not "supervise or control" the third-party websites
               | linked to from its search results, something most people
               | (except apparently Perfect 10) probably already knew. The
               | rule provides strong guidelines for future development
               | and avoids the kind of uncertainty that could chill
               | start-ups trying to get the next great innovation off the
               | ground.
        
         | [deleted]
        
         | kmeisthax wrote:
         | In the law you don't have "implementation details", you have
          | ideas and expressions. Ideas are patentable, expressions
          | are copyrightable, and _never the twain shall meet_. The
          | exact boundary of those two things is defined by the merger
          | doctrine and the thin copyright doctrine[0]; but for our
          | purposes we just need to know that if your AI art generator
          | spits out something that's "substantially similar" to the
          | original, it's infringing, even if it's not exactly the
          | same. The law anticipated tracing.
         | 
         | What you are proposing is that you just "wash off" the
         | expression from the idea and regenerate a new image from that
         | idea. Great, except this isn't how AI art generators work. They
         | aren't breaking down images into their core ideas, _because
         | those only exist in our human minds_ [1]. They're finding
         | patterns of pixels that happen to match the text prompt well
         | enough; and often times that includes _the original image
         | itself_. Overfitting is a huge problem with conditional U-Net
         | models and Google even released a paper detailing a way to find
         | and extract memorized images out of an art generator.
         | 
         | So what will likely happen is that the art generator will just
         | copy the image, or make one that's close enough that a judge
         | would say that it's a copy.
         | 
         | [0] If an expression is fundamentally wrapped up in an
         | uncopyrightable idea and can't be expressed any other way, then
         | it's also uncopyrightable. But if an expression is made up of
         | uncopyrightable ideas, but separable from them, then you get a
         | thin copyright on the arrangement of such.
         | 
         | [1] And, also, most humans are terrible at distinguishing idea
         | and expression in the way that copyright law demands.
        
         | prpl wrote:
         | IANAL but you can't copyright a concept.
        
           | ROTMetro wrote:
           | But you do have copyright claims to derivative works.
        
             | prpl wrote:
             | I think that really depends. You can describe the photo
             | conceptually and build a new photo from that, which would
             | be equivalent to clean room engineering.
        
               | rhizome wrote:
                | But "you" (or I) _don't_ describe the photo, their
               | process does, and I'm not confident that Separation of
               | Concerns would be enough to establish a clean room.
        
               | jjeaff wrote:
               | But a derivative work requires that major copyrightable
               | elements remain. So it seems like there is a fuzzy line
               | on whether something is really derivative or not.
               | 
               | I don't think it is necessarily enough to have simply
               | started with a copyrighted work.
        
               | prpl wrote:
                | I think that depends on how it's done. If actual
                | visual representations (bitmaps or FFT coefficients)
                | are copied around (and, most importantly, more than
                | what might be described as fair use), that would
                | probably be true. If a highly accurate conceptual
                | description is generated and an image is then
                | generated from that description, I would see no
                | issue.
               | 
               | I don't know how it is implemented for the software in
               | question.
        
         | charcircuit wrote:
         | It's clearly a derivative work of the original.
         | 
         | >On one hand, nothing is stopping me from seeing the
         | copyrighted photo and then recruiting a similar looking model,
         | setting up a photoshoot in a field with some stuffed crows,
         | etc. I could replicate what the AI is doing. It would be work,
         | but I could do it. The AI is just automating this.
         | 
         | If you trace art by hand that is still copyright infringement.
         | If you paraphrase a passage from a book by hand that is still
         | copyright infringement.
        
           | gamegoblin wrote:
           | > If you trace art by hand that is still copyright
           | infringement. If you paraphrase a passage from a book by hand
           | that is still copyright infringement.
           | 
            | What if I see some fine art and, as a non-artist, make a
            | super-low-quality recreation of it with crayons, then give
            | that plus a verbal description to a different professional
            | artist who has not seen the original, and have them
            | "upscale" my bad drawing into new fine art?
           | 
           | Their art would be conceptually very similar to the original.
           | Same layout, same concept, same vibes, same style (if my
           | verbal description was sufficiently good) but all the details
           | would be different. Is this still infringement?
        
             | jcranmer wrote:
             | It's a derivative work of a derivative work, so yes, it's
             | still infringement.
        
             | visarga wrote:
              | If this worked, all artists would be open to lawsuits;
             | how many ways can you draw flowers in a vase? "They stole
             | my idea, your honour! They used the same number of flowers
             | in a vase, I came up with the concept of 3 flowers first."
             | 
             | I think artists and copyright intermediaries would like to
             | have "wildcard" copyright, "draw a flower once, all flowers
             | belong to you now", and it would be very bad for creativity
             | if they got their way.
        
               | XorNot wrote:
               | The vast majority of profit most artists are making is
               | also copyright infringement. Custom porn is where the
               | money is, and Rule 34 is likely a huge part of that.
        
           | robertlagrant wrote:
           | > If you trace art by hand that is still copyright
           | infringement. If you paraphrase a passage from a book by hand
           | that is still copyright infringement.
           | 
           | I don't understand how this could work. Are there any
           | examples out there you can cite?
        
           | dahart wrote:
           | > If you paraphrase a passage from a book by hand that is
           | still copyright infringement.
           | 
           | If you have case law examples, it would be useful to cite
            | them, but in general this is not true. It can be true when the
           | paraphrasing is substantially similar to the original work.
           | It would not be true, for example, if you paraphrase using
           | all different words and a lot less of them. Copyright only
           | protects the fixed tangible expression of the work, not the
           | idea behind it.
        
             | charcircuit wrote:
             | >Copyright only protects the fixed tangible expression of
             | the work
             | 
             | It also protects against people making modifications to
              | these works. When people paraphrase something they typically
             | do so by taking the original work, swapping words with
             | synonyms, and shuffling the order.
        
               | dahart wrote:
               | Yeah, that's true. I think this might hinge on the word
               | 'paraphrasing' though. That word generally means
               | summarizing using your own words, not playing mad libs on
               | the original text.
        
         | ghaff wrote:
         | IANAL but the Obama Hope poster lawsuit is probably somewhat
         | related. https://www.law.columbia.edu/news/archive/obama-hope-
         | poster-...
         | 
         | However, the case was settled and the creator of the poster
         | lied in court about his sources--so I'm not sure I'd draw
         | _too_ much from it about posters inspired by photographs.
        
         | seizethecheese wrote:
         | Corollary question: what if this took two images as input? It
         | couldn't ever then be purely derivative of one image.
        
           | YurgenJurgensen wrote:
           | A picture of Mickey Mouse dressed as Superman is still a
           | derivative work. The main difference is that the number of
           | lawyers you anger is now doubled.
        
         | [deleted]
        
       | echelon wrote:
       | Getty is about to no longer exist. Good riddance.
       | 
       | They're selling knitting needles in the era of robotic
       | manufacturing.
       | 
       | I hope every single one of these lawsuits falls flat on its face.
       | Other countries will happily overlook Getty copyright to get the
       | leg up on AI.
       | 
       | AI is not reusing copyrighted material. It's learning from it in
       | the same way humans do. You can even fine-tune away from the base
       | training set and wash any experience of it away.
       | 
       | Besides, if Getty wins, it merely ensures that the large
       | incumbents with massive pocketbooks to pay off Getty et al.
       | win. It'll keep AI out of the hands of the rest of us.
        
         | dopa42365 wrote:
         | What are you even talking about? They're an agency for
         | photographers and like every news website on the planet
         | licenses their pictures (same as with images from AP, Reuters,
         | and other agencies).
         | 
         | Getty is slightly more than just a website that posts low
         | resolution, low quality pictures with a fat watermark on it.
         | 
         | Has nothing to do with robots or AI...
        
         | [deleted]
        
         | toss1 wrote:
         | >>AI is not reusing copyrighted material.
         | 
         | Yeah...no. AI is doing nothing but reusing material. It
         | generates the most likely image/text/code in its training set
         | to be found following/around/correlating with the prompt. It
         | literally has nothing outside its training set to reproduce.
         | And when it reproduces the Getty watermark, that's a pretty
         | obvious example of reusing copyrighted material.
         | 
         | >>It's learning from it in the same way humans do.
         | 
         | Not even close. These "AI" architectures may be sufficiently
         | effective to produce useful output, but they are nothing like
         | human intelligence. Their architecture is vastly different,
         | making no attempt to reproduce/reimplement the
         | neuron/synapse/neurotransmitter and
         | sensory/brainstem/midbrain/cerebrum micro- and macro-
         | architectures underlying human learning, and their output,
         | in both its successes and its errors, resembles nothing
         | about human learning.
         | (source: just off-the-top-of-my-head recollections from
         | neuroscience minor in college)
         | 
         | Yikes.
        
           | munchler wrote:
           | > It generates the most likely image/text/code in its
           | training set to be found following/around/correlating with
           | the prompt.
           | 
           | This is simply false. It's not a search engine that outputs
           | the training item closest to the prompt.
           | 
           | In reality, it is "learning" (in some sense) how to correlate
           | text to images, and then generating brand new images in
           | response to input text. If this is legal for humans to do,
           | then it's probably legal for machines to do the same thing.
        
             | toss1 wrote:
             | Perhaps I was not clear enough to prevent ambiguity.
             | 
             | >>It's not a search engine that outputs the training item
             | closest to the prompt.
             | 
              | Correct, it is not outputting the training ITEM; it is
             | outputting finer-grained slices of many items, more of a
             | mash-up of the training items.
             | 
              | Of course it is not taking an entire specific image that
              | closely matches the search term; it is taking averages of
              | component images of "astronaut riding a horse over the moon
              | in style of Rembrandt".
             | 
             | That image won't exist in the training set, but astronauts,
             | horses, and Rembrandt-style coloring and shading do exist,
             | and it is assembling those from averages of the components
              | found in its training set, not from some abstract imagination
             | or understanding.
             | 
              | The fact that the astronaut suit may not be the exact
              | same as any of its training images is no different than
              | if I averaged 100 faces in Photoshop; it is not evidence
              | of some kind of "learning" or "understanding". The
              | ability to do useful statistical mashups is NOT the same
              | as "learning".
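The "averaged 100 faces in Photoshop" analogy can be made concrete with a toy sketch (plain NumPy, with random 8x8 grayscale arrays standing in for real face photos; purely illustrative, not how a diffusion model computes anything):

```python
import numpy as np

# Stand-in for "averaging 100 faces": a pixel-wise mean over 100 fake
# 8x8 grayscale "faces".
rng = np.random.default_rng(0)
faces = rng.integers(0, 256, size=(100, 8, 8)).astype(float)

average_face = faces.mean(axis=0)

# The mean image matches none of the inputs exactly, yet contains
# nothing that did not come from them.
assert not any(np.array_equal(average_face, f) for f in faces)
print(average_face.shape)  # (8, 8)
```

The point of the analogy: novelty relative to each individual input is not, by itself, evidence of understanding.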
             | 
              | This can be shown in a different "AI" engine's failure
              | to solve a child's puzzle. When presented with "Mike's
              | mom had four kids; three are named Lucia, Drake, and
              | Kelly. What is the fourth kid's name?", ChatGPT said
              | there is insufficient info, and doubled down when told
              | that the answer is in the question.
             | 
             | >>how to correlate text to images
             | 
             | yes, as I pointed out, "correlating with the prompt." I
             | didn't say it correlated an entire image, but I also failed
             | to specify that it was correlating components.
             | 
             | >> If this is legal for humans to do, then it's probably
             | legal for machines to do the same thing.
             | 
              | This [0] is, I'm quite sure, not legal. Asked for an
              | image of a person named "Ann Graham Lotz", it returned
              | the image in the training set, slightly degraded.
             | 
             | First, that is literally the search engine functionality
             | you were deriding.
             | 
             | Second, if you asked a human artist to produce the same
             | image, without infringing copyright, they would produce
             | something likely recognizable as the person, but obviously
             | not resembling the training photo. It doesn't matter if
             | they are a portrait painter, sketch artist, Photoshop
             | jockey, or Picasso-like impressionist.
             | 
             | So, no, this does not represent learning in any conceptual,
             | creative, or human-like sense.
             | 
             | It does represent mashing-up averages of inputs of various
             | components. Feed in enough "astronaut" photos, and it'll be
             | able to select out the humans in the spacesuit as the
             | response to that prompt. Same for "horse", "moon",
             | "riding", and "Rembrandt". and it can mash them together
             | into something useful with good prompts.
             | 
             | But give it something very specific, like a person's name,
             | and you get basically a search-engine result, because it
             | doesn't have enough input data variety to abstract out the
             | person 'object' from the background.
             | 
             | [0] https://techxplore.com/news/2023-02-ai-based-image-
             | generatio...
        
               | d110af5ccf wrote:
               | to me the "search engine" case where it reproduces a
               | specific training image seems like a failure mode that's
               | distinct from normal operation
               | 
               | > it is assembling those from averages of the components
                | found in its training set, not from some abstract
               | imagination or understanding
               | 
               | how exactly are you so certain that the human brain
               | handles abstract concepts any differently? please note
               | that I'm not claiming that I myself know, but rather that
               | you almost certainly do not know and thus are presenting
               | an invalid argument
               | 
               | what is human imagination anyway?
               | 
               | > assembling those from averages of the components found
                | in its training set
               | 
               | > slices of many items, more of a mash-up of the training
               | items
               | 
               | > But give it something very specific ... it doesn't have
               | enough input data variety to abstract out the person
               | 'object' from the background
               | 
               | so is it abstracting or not? where's the line between
               | that and a mere statistical mashup?
        
               | toss1 wrote:
               | >>how exactly are you so certain that the human brain
               | handles abstract concepts any differently?
               | 
               | Good question. At the very least, we have a far deeper
                | understanding of physical reality. Humans would not,
                | except intentionally (e.g., for effect), produce
                | images of people with three ears, or of a bikini-clad
                | girl seated on a boat with her head and torso facing
                | us but her butt somehow also facing us and her
                | thighs/knees away... yet I've seen both of these in
                | the last week (sorry, couldn't find the reference; it
                | was a hilarious image that looked great for two
                | seconds until you saw it).
               | 
               | I admit that it is possible (tho I think unlikely) that
               | this is a difference in quantity, not in kind.
               | 
               | One reason to doubt this is that Stable Diffusion was
               | trained on 2.3 billion images. This is a vastly larger
               | library than any human has seen in their lifetime
               | (considering that viewing 2.3 billion images at one per
               | second would take 72.8 years). Yet even if you count
               | every second of eyesight as 'training', children under
                | 1/10 of that age, who have seen only 10% of those images,
               | would not make the same kinds of mistakes.
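The viewing-time arithmetic above checks out, to rounding:

```python
# Back-of-the-envelope check: 2.3 billion images at one image per
# second, around the clock, versus a human lifetime.
images = 2.3e9
seconds_per_year = 60 * 60 * 24 * 365
years = images / seconds_per_year
print(f"{years:.1f}")  # 72.9, i.e. roughly the ~72.8 years quoted
```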
               | 
               | Plus, the neuron/synapse/neurotransmitter and
               | brainstem/midbrain/cerebellum micro & macro-architectures
               | are vastly different than the computer training models.
               | So, I think we can be confident that something different
               | is happening.
               | 
               | >>so is it abstracting or not? where's the line between
               | that and a mere statistical mashup?
               | 
               | Good question. There is definitely something we might
               | call, or that might resemble abstraction. It's definitely
               | able to associate the cutout images of an astronaut in a
               | spacesuit from the backgrounds. It can evidently assemble
               | those from different angles.
               | 
               | But it certainly does not have the abstraction to
               | understand even the correct relationship between the
               | parts of a human. E.g., it seems to keep astronauts'
                | parts in the right relationship, but not a bikini-clad
                | girl's parts (because of the variety of positions in
                | the dataset?). There's no understanding of
                | kinesiology,
               | anatomy, or anything else that an actual artist would
               | have.
               | 
               | Could this be trained in? I expect so, but I think it
               | would require multiple engines, not merely six orders of
               | magnitude more training of the same type. Even if 10^6X
               | more training eliminated these error types and even
               | performed better than humans, I'm not sure it would be
               | the same, just different and useful.
               | 
               | I'd want to see evidence that it was not merely cut-
               | pasting components of images in useful ways, but
               | generating it from an understanding of the sub-sub
               | components: "the thigh bone connects to the hip bone, the
               | hip can rotate this far but not that far, the center of
               | mass is supported...+++" as an artist builds up their
               | images. Good artists study anatomy. These "AI"s haven't a
               | clue that it exists.
               | 
               | >>to me the "search engine" case where it reproduces a
               | specific training image seems like a failure mode that's
               | distinct from normal operation
               | 
               | Au contraire, it seems that this merely exposes the
               | normal operation. Insufficient images of that person
               | prevented it from abstracting the person components from
               | the background, so it just returned the whole thing. IDK
               | whether it would take a dozen, hundred, or thousand more
               | images of the same person, to work properly. But, if they
               | all had some object in the background (e.g., a lamp) that
                | was the same, the "AI" would include it in its
                | abstraction.
               | 
               | (but I could be wrong).
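The "this merely exposes the normal operation" argument can be caricatured in code. This is emphatically not how diffusion models work internally; it is a hypothetical retrieve-and-average toy showing why a prompt with many varied matches yields a blend, while a prompt with a single matching source returns that source verbatim:

```python
import numpy as np

# Caricature "generator" (hypothetical): collect every training image
# whose caption mentions the prompt and average them pixel-wise.
def generate(prompt, dataset):
    matches = [img for caption, img in dataset if prompt in caption]
    return np.mean(matches, axis=0)

rng = np.random.default_rng(1)
# 50 varied "astronaut" images, plus exactly one image of a named person.
dataset = [(f"astronaut {i}", rng.random((4, 4))) for i in range(50)]
dataset.append(("ann graham lotz", rng.random((4, 4))))

blended = generate("astronaut", dataset)          # mash-up of 50 sources
memorized = generate("ann graham lotz", dataset)  # the lone source image

# With one matching training image, the "generated" output IS that image.
assert np.array_equal(memorized, dataset[-1][1])
# With many matches, the output equals none of them.
assert not any(np.array_equal(blended, img) for _, img in dataset)
```

Under this (admittedly crude) model, the Ann Graham Lotz result is not a separate failure mode; it is the same averaging operation applied to a sample of size one.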
        
             | oldgradstudent wrote:
             | > In reality, it is "learning" (in some sense)
             | 
             | Only because some people named their field "machine
             | learning" and called it "learning".
             | 
             | It has no relation to human learning.
             | 
              | If your child accidentally confuses a giraffe with some
              | other animal, you correct them; you don't add the
              | picture to their training set and show them thousands
              | more pictures of giraffes hoping that their success rate
              | improves.
             | 
             | If you ask Stable Diffusion for a starry night, you get van
             | Gogh's starry night.
        
         | emkoemko wrote:
         | yeah, let's hope it kills art so people find real jobs, until
         | those also get taken over by AI
        
           | echelon wrote:
           | Photographers still exist and will continue to exist.
           | 
           | Wedding photographers make a lot of money. Sports, events,
           | local artists... There's plenty of money in photography.
        
           | SQueeeeeL wrote:
           | I, for one, look forward to the societal collapse that will
           | occur once countries without meaningful social aid suddenly
           | have no work for anyone to be paid to do. It'll be a pretty
           | fun few years while we see all the chaos play out
        
             | TigeriusKirk wrote:
             | The denial that it's happening will cause the chaos to drag
             | out longer.
        
               | SQueeeeeL wrote:
               | That sounds pretty cool, maybe the world will do more
               | silly things, like the country with the largest military
                | force electing another game show host to its highest
               | office
        
             | jrockway wrote:
             | Why do we even need humans after the AI uprising? They
             | don't do anything and they are expensive to produce food
             | for; an AI can power itself with the blowing of a breeze.
             | We have to grow plants, have animals eat them, kill the
             | animals, and then eat those. Then the parts we don't digest
             | have to be taken away. Too expensive to keep around, even
             | as pets! Humans can tolerate cats and dogs because they
             | piggyback on infrastructure we made for ourselves (crops
             | and trash collection). AIs don't need crops or trash
             | collection, so it will be a harder sell to keep humans as
             | pets.
             | 
             | All this science fiction is right; there isn't room on the
             | planet for both humans and AIs. It sounds depressing that a
             | bunch of GPUs are going to kill us all off, but it was
             | coming anyway. The sun becomes a red giant and consumes the
              | Earth. If protons decay at all, it happens on timescales
              | beyond 10^34 years, ending the existence of matter. The
              | trajectory is clear
             | even if the means aren't; humanity can't last forever.
             | 
             | If I sound depressed, I'm not really. People just use the
             | headlines to guide their view on what The End looks like.
             | Read a few articles about chatbots, and it's AIs taking all
             | our jobs. Watch a few movies about asteroids, we go out
             | like the dinosaurs. Hear "Russia invades Ukraine" and it's
             | a nuclear holocaust. Read a few particle physics papers,
             | and it's proton decay. You can't worry too much about it.
             | Enjoy your time while you have it!
        
               | visarga wrote:
               | Please do explain to me how the blowing of a breeze helps
               | AI produce the latest TPU V5 or NVIDIA H100 chips to run
               | on. The technological stack behind AI is enormous, and
               | humans are necessary cogs.
               | 
               | On the other hand humans are self replicators and only
               | need a bit of biomass for sustenance, biomass that grows
               | by itself, too. No factory, no supply chain, we got
               | everything we need to make more of us.
               | 
               | If you consider the risk of EMP, an AI needs humans to
               | restart it, or some way to survive electronic attacks.
        
               | d110af5ccf wrote:
               | > humans are necessary cogs
               | 
               | for now
               | 
               | > If you consider the risk of EMP
               | 
               | if you consider the risk of a bioweapon ...
        
             | visarga wrote:
             | > look forward to the societal collapse that will occur
             | once countries without meaningful social aid suddenly have
             | no work for anyone to be paid to do
             | 
             | This assumes 1. automation is free, 2. humans cost too
             | much, so any company would ditch their humans for AI. But
             | in reality AI costs money, AI is better with people than
             | without, and people can generate profits surpassing the
              | cost of their wages. Why would a company prefer reducing
              | costs over increasing profits? When everyone has AI,
              | humans
             | are the differentiating factor.
        
             | neovialogistics wrote:
             | There's the risk, depending on what "the chaos" entails,
             | that the capital owners in the countries with social safety
             | nets will look at the outcomes for the countries without
             | and decide that alignment is preferable, at which point
             | enforcing the obsolescence of the proletariat and freeing
             | up gigatons of biomass for other applications (in the
             | Bataillean sense) becomes a mere sociopolitical engineering
             | problem.
        
         | mrkeen wrote:
         | > AI is not reusing copyrighted material. It's learning from it
         | in the same way humans do.
         | 
         | What will AI learn from if Getty no longer exists?
        
           | echelon wrote:
           | Lots of businesses fail when new disruptive technology
           | arrives. We don't need to prop up the old at the expense of
           | the new.
           | 
           | This is like crying over Rolodex.
           | 
           | And let's not forget how awful Getty has been throughout its
           | existence. They've frequently sued people for things they
           | didn't even own the copyright to.
        
             | TillE wrote:
             | That doesn't answer the question. In the hypothetical
             | future where such companies no longer exist, where is the
             | new AI training data supposed to come from?
        
               | yellow_postit wrote:
               | How is this not a bootstrapping problem and after a
               | certain point they can train on their own output?
               | 
                | Novelty of subject will likely get covered by
                | partially unwitting data gatherers (e.g., Google
                | Photos).
        
             | mrkeen wrote:
             | > They're selling knitting needles in the era of robotic
             | manufacturing.
             | 
             | They're teachers in the age of students.
             | 
             | > We don't need to prop up the old at the expense of the
             | new.
             | 
             | We don't need to pay teachers when we can just profit by
             | charging students tuition.
        
               | newswasboring wrote:
               | > They're teachers in the age of students.
               | 
               | This is a very good quote, but unfortunately I fail to
               | grasp it. Care to elaborate?
        
           | GaggiX wrote:
           | If in the future AI images are as good as Getty's, will new
           | AIs really need to learn from Getty?
        
             | munchler wrote:
             | Would you want an AI trained on images from no later than,
             | say, 1973? No? People 50 years from now will feel the same
             | way. Without new images to learn from, I don't see how
              | future AIs of this kind could know anything about their
             | own era.
        
               | GaggiX wrote:
               | But the models are already here, you can distill their
               | knowledge if you want to train a new one, human feedback
               | could also improve model performance without adding
               | training data.
               | 
               | >Without new images to learn from
               | 
               | Even in the event that all cameras are destroyed
               | (somehow), people will use generative models to describe
               | their experience in a new era, and then this new
               | knowledge will be used by new models and the cycle will
               | repeat itself.
        
         | quonn wrote:
         | That depends on the AI - a poorly trained one can overfit
         | badly, which is then equivalent to copying. Furthermore, the
         | existing laws were made for typical human capabilities, and
         | the kind of remembering some models do is very much beyond
         | those capabilities (far better recall).
        
       | th4tg41 wrote:
       | unrelated rant: this is so stupid in itself. people that
       | (sometimes) make money off unpaid work are suing people that
       | (sometimes) make money off unpaid work. and it's all about the
       | (sometimes) and the unpaid work. it's dramedy. in a lawsuit.
        
         | [deleted]
        
         | [deleted]
        
       | arbuge wrote:
       | IANAL, but if I were to guess, I would think the defense here
       | would center on how what Stability AI has done differs in any
       | meaningful way from human artists browsing through Getty's
       | collection themselves to train their human brains on what art
       | to generate. The latter activity, even Getty would likely have
       | to agree, is surely legal.
        
         | LegitShady wrote:
         | It would be a weak defense, undone because Stability AI is
         | not a human and never will be one. It's a computer-based tool
         | using assets whose licenses explicitly say not to use them
         | that way.
        
           | arbuge wrote:
            | > licenses explicitly say not to use them that way.
           | 
           | Does it?
           | 
           | I mean, it probably does now, but did it say that at the time
           | this training of Stability AI's model was going on? Did Getty
           | have that foresight?
        
         | superfrank wrote:
         | IANAL either, but I believe for something to be copyrightable
         | (and not an infringement on someone else's copyright), there
         | needs to be a "modicum of creativity" in the new work. It makes
         | me wonder if at least part of the case will depend on whether
         | the court thinks an AI can be creative.
        
           | arbuge wrote:
           | I don't see how you could reasonably decide AI is not
           | creative, period, unless you arbitrarily restrict the word
           | creative to mean human-generated.
        
         | jagged-chisel wrote:
         | > ... likely have to agree ...
         | 
         | Also NAL, but I'm cynical enough to believe that Getty's
         | lawyers would avoid answering this question directly. And then
         | wax lyrical about how their client _should_ indeed receive a
         | royalty for anyone attempting to use Getty's copyrighted works
         | to learn the art.
        
       | Xeoncross wrote:
       | Getty is known for "brazen infringement of public IP on a
       | staggering scale"
       | 
       | - https://petapixel.com/2016/11/22/1-billion-getty-images-laws...
       | 
       | - https://www.techdirt.com/2019/04/01/getty-images-sued-yet-ag...
       | 
       | - https://en.wikipedia.org/wiki/Getty_Images#Copyright_enforce...
        
         | [deleted]
        
         | aardvarkr wrote:
         | Those links are interesting but just show that Getty is a slimy
         | business that tries to repackage public domain images for sale,
         | not that they infringe the IP of others. That's a massively
         | different issue.
        
           | Groxx wrote:
           | To more accurately summarize the links: they've tried to sue
           | people for using public domain images _that Getty is also
           | selling_.
           | 
           | Selling public domain things is entirely legal. Claiming they
           | _own_ those images and suing others for using them is not -
           | they're public domain.
        
             | Natsu wrote:
             | You'd think that should fall under the perjury penalty of
             | the DMCA, I thought that misrepresenting oneself as the
             | copyright holder was what that was for. But maybe they
             | didn't file any such DMCA notice, I dunno.
        
               | Scaevolus wrote:
               | Nobody has ever been prosecuted for perjury under DMCA.
               | https://law.stackexchange.com/questions/51541/has-anyone-
               | bee...
        
           | ronsor wrote:
           | They do sue others for use of those public domain images,
           | which, while not copyright infringement, is fraudulent and
           | illegal.
        
         | mmwako wrote:
         | Just read the first article. Getty Images was sued by a
         | photographer for using 18,000 of her public domain images for
         | profit. The ruling DISMISSED the allegation, which is crazy.
         | It's comical that now it's exactly the same allegation, but
         | with the sides inverted: now it's Getty trying to sue an AI
         | company for using their public domain images. I think we can
         | all guess what's going to happen xD
        
           | rvz wrote:
           | > I think we can all guess what's going to happen xD
           | 
           | The outcome will be, Stable Diffusion settling and licensing
           | the images from Getty Images. If OpenAI was able to do it
           | with Shutterstock, so can Stable Diffusion.
        
             | LegitShady wrote:
             | I don't think that's an acceptable outcome for Getty,
             | whose entire business model will be confounded by a tech
             | that used its images against license to generate
             | alternatives to Getty's business.
        
             | 300bps wrote:
             | I don't think that outcome is assured. In a lot of ways,
             | Stable Diffusion's business model creates an existential
             | threat to Getty Images. I would expect alternative outcomes
             | to be:
             | 
             | 1. The requested licensing fee approaches infinity
             | 
             | 2. Getty Images simply refuses to license images to anyone
             | who will use AI to create derivative works
        
           | nucleardog wrote:
           | > Getty Images was sued by a photographer for using 18.000
           | her public domain images for profit by Getty.
           | 
           | You can use public domain images for profit. It's not
           | surprising this was thrown out.
           | 
           | > It's comical that now it's exactly the same allegation, but
           | with the sides inverted, now it's Getty trying to sue an AI
           | company for using their public domain images.
           | 
           | Where does it say they're suing over the public domain images
           | in their collection? Their collection is not entirely public
           | domain images. Their suit covers copyrighted works by
           | staff photographers, third parties that have assigned
           | copyright, and images licensed to them by contributing
           | photographers. In addition, they're claiming for the titles
           | and captions which they created and are themselves
           | copyrighted.
           | 
           | It's not the "exact same allegation", and there's really no
           | relation between the facts of the cases here.
        
       | robryan wrote:
       | This seems fair enough. There is pretty big value in avoiding
       | the potentially millions of pictures that would otherwise need
       | to be taken to train the AI; they should pay something for
       | that.
        
       | aquinas_ wrote:
       | Whoever wins, we lose.
        
       | joxel wrote:
       | But they don't have obnoxious and stupid watermarks on them, I
       | thought that's what made a Getty a Getty
        
         | 6nf wrote:
         | Scroll through the complaint - the AI results reproduce the
         | Getty watermark fairly well :)
        
         | jahewson wrote:
         | Oh but they do https://www.theverge.com/2023/1/17/23558516/ai-
         | art-copyright...
        
       | LiquidPolymer wrote:
       | I've been a pro photographer for over 30 years, and Getty has
       | stolen my work and that of countless others. I was one of a
       | coalition of photographers who pooled resources and won
       | substantial damages. (My work was all registered with the
       | Library of Congress, which raises the liability cost for
       | infringement.)
       | 
       | Getty's strategy at the time appeared to be to meet any
       | infringement accusations with a massive legal response. Any
       | individual photographer could generally not afford to respond.
       | 
       | My impression was that they were not super concerned about
       | infringing on others' work. But they will sue the pants off
       | anyone who they perceive to be violating their copyrights.
       | 
       | But in recent years there are legal firms dedicated to pursuing
       | deep pocket infringement cases on contingency. This has changed
       | the legal calculus for large companies who were not careful with
       | copyright.
        
         | wombat_trouble wrote:
         | If Getty wins this one, you win as a photographer. If they
         | lose, you lose. They might be a shitty company, but in this
         | instance, your interests align...
        
           | Ukv wrote:
           | In the event that Getty Images wins, it seems most likely
           | that AI researchers would pay Shutterstock/Getty Images for
           | their large existing catalogs of images. With the companies
           | having a stronger position (getting to act as a gatekeeper to
           | this kind of machine learning) and artists still a weaker
           | one, I wouldn't hold out hope for them passing anything on.
        
           | ronsor wrote:
           | That assumes parent cares about AI training, which they may
           | not. If you wanted to release your work freely, Getty could
           | still get in the way.
        
             | LegitShady wrote:
             | why would anyone pay for their photography if an ai can
             | generate it for pennies on the dollar?
        
         | blantonl wrote:
         | _Getty's strategy at the time appeared to be to meet any
         | infringement accusations with a massive legal response._
         | 
         | I'm surprised at this in your case because typically copyright
         | infringement cases are massively weighted in favor of the
         | copyright owner (defined damages AND reimbursement of legal
         | fees). I know this because I'm currently a defendant in a
         | copyright lawsuit.
        
           | scotty79 wrote:
           | I think it mostly depends on the size of the pockets of both
           | sides.
        
         | [deleted]
        
           | [deleted]
        
       | dang wrote:
       | Recent and related:
       | 
       |  _Getty Images is suing the creators of Stable Diffusion_ -
       | https://news.ycombinator.com/item?id=34411187 - Jan 2023 (83
       | comments)
       | 
       | Others?
        
       | sbdaman wrote:
       | The reproduced watermarks are hilarious.
        
         | [deleted]
        
         | moffkalast wrote:
         | They're all like _g̵e̶t̷t̸y̴ ̶i̵m̷a̸g̶e̴s̵_ haha
        
           | suyash wrote:
           | how did you do that, so cool :)
        
             | moffkalast wrote:
             | It's the Zalgo text generator:
             | https://lingojam.com/ZalgoText
             | 
             | The earliest I heard of it was the "parsing html with
             | regular expressions" classic which I highly recommend if
             | you haven't seen it before:
             | https://stackoverflow.com/a/1732454/4012132
             | 
             | I also like the vaporwave one:
             | https://lingojam.com/VaporwaveTextGenerator
             | 
             | It's all just clever use of unicode.
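The "clever use of unicode" above can be sketched in a few lines: Zalgo text stacks combining diacritical marks (U+0300 to U+036F) on top of ordinary characters, while vaporwave text maps printable ASCII to its fullwidth forms (U+FF01 to U+FF5E). A minimal illustration, with function names of my own choosing rather than anything from the linked generators:

```python
import random
import unicodedata

# Zalgo text appends random marks from the "Combining Diacritical
# Marks" block; renderers draw them above/below the base glyph,
# producing the glitchy stacked look.
COMBINING_MARKS = [chr(cp) for cp in range(0x0300, 0x0370)]

def zalgo(text: str, intensity: int = 3, seed: int = 0) -> str:
    """Append `intensity` random combining marks to each non-space char."""
    rng = random.Random(seed)
    out = []
    for ch in text:
        out.append(ch)
        if not ch.isspace():
            out.extend(rng.choices(COMBINING_MARKS, k=intensity))
    return "".join(out)

def vaporwave(text: str) -> str:
    """Map printable ASCII to fullwidth forms (U+FF01-U+FF5E);
    a plain space becomes an ideographic space (U+3000)."""
    return "".join(
        "\u3000" if ch == " "
        else chr(ord(ch) + 0xFEE0) if 0x21 <= ord(ch) <= 0x7E
        else ch
        for ch in text
    )

if __name__ == "__main__":
    decorated = zalgo("getty images")
    print(decorated)
    # Stripping the combining marks recovers the original text exactly,
    # which is why search and copy/paste still mostly work on Zalgo text:
    print("".join(c for c in decorated if not unicodedata.combining(c)))
    print(vaporwave("getty images"))
```

Because the base characters survive untouched, "de-Zalgoing" a string is just filtering out anything `unicodedata.combining` flags as a mark.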
        
       | m00dy wrote:
       | Don't worry guys, My decentralised AI network is almost ready. No
       | more copyright bullshit.
        
       | mmastrac wrote:
       | The arguments in this filing are pretty weak and I think it's
       | going to all just boil down to fair use in the end. I don't see
       | this trademark claim going anywhere.
        
         | munchler wrote:
         | It's mostly a copyright claim, though. The trademark claim is
         | just the ironic cherry on top.
        
         | rvz wrote:
         | > The arguments in this filing are pretty weak and I think it's
         | going to all just boil down to fair use in the end.
         | 
         | I don't think so and the complaint isn't just about
         | 'trademarks' either.
         | 
         | OpenAI was able to get explicit permission [0] from
         | Shutterstock to train on their images for DALL-E 2. Stable
         | Diffusion did not, and is being commercialized via
         | Dreamstudio as a SaaS offering, while the model has been
         | found to output images with Getty's watermark [1] without
         | their permission. That doesn't seem to be _'fair use'_, nor
         | is it _transformative_, given the watermark is clearly
         | visible in the generated examples here: [1]
         | 
         | This is going to end with a settlement and a Stable Diffusion
         | licensing deal with Getty over the images, just like OpenAI
         | did for DALL-E 2 with Shutterstock. Neither Shutterstock nor
         | Getty is against generative AI either, as shown in this
         | recent Getty deal [2].
         | 
         | [0] https://www.prnewswire.com/news-releases/shutterstock-
         | partne...
         | 
         | [1] https://www.theverge.com/2023/1/17/23558516/ai-art-
         | copyright...
         | 
         | [2] https://newsroom.gettyimages.com/en/getty-images/bria-
         | partne...
        
           | brycedriesenga wrote:
           | I can recreate a Getty watermark in Photoshop as well. Should
           | Photoshop be held liable if I do that without Getty's
           | permission?
        
       | xbar wrote:
       | This is the one I have been waiting for. Getty needs to force the
       | question.
        
         | nemo44x wrote:
         | I agree. There's a fantastic debate ahead that's very novel.
         | It's especially exciting for how sudden this all is. My hunch
         | is AI wins and I'd love it if the defense was written by AI.
        
       | mrtksn wrote:
       | So what's the plan for the creatives whose work style becomes
       | reproducible by tech?
       | 
       | Sure, they also feed on each other's work, etc., but at the core
       | of all these copyright, piracy, patent and similar discussions
       | is how these people are supposed to be compensated.
       | 
       | Working at a software company by day and preaching open source,
       | anti-copyright, anti-patent, open-access, free-for-all by night
       | works for the software people, but people in the creative
       | industries are really struggling to get paid for their work.
       | 
       | The genie isn't going back in the bottle; the tech will be able
       | to produce derivative work over other people's work, and I'm not
       | looking forward to the greater number of struggling artists.
        
         | madeofpalk wrote:
         | > Working in the software company in the day and preaching open
         | source, anti copyright anti patents open access free for all
         | 
         | "Open source" is copyright - it's not anti-copyright. It uses
         | copyright to grant a license to use under certain conditions,
         | and sometimes with obligations. You might keep it proprietary,
         | you might use GPL to require that the software stays open
         | virally, or you might use a more permissive BSD-style license.
         | The important part here is that as the creator, you choose how
         | you want your work to be copyrighted.
         | 
         | [0]: the best, quickest link I could find that mentions
         | "consent, credit, compensation":
         | https://mindmatters.ai/2023/01/three-artists-launch-lawsuit-...
        
         | ghaff wrote:
         | >So what's the plan for the creatives whose work style becomes
         | reproducible by tech?
         | 
         | You don't need tech (or at least computers).
         | 
         | To the degree that a portrait photographer, say, has a
         | distinctive lighting and posing style, that can absolutely be
         | copied. And there are _many_ examples in art of art techniques
         | that were widely copied.
        
           | mrtksn wrote:
           | The point is, how does this person get paid? Copying styles
           | happens all the time and it's part of the trade; copying
           | each other is part of being human and it's how we come up
           | with new things. But the assumption was that these people
           | would get paid for their work because copying their style
           | doesn't scale well. A person with a particular style could
           | get paid for game artwork since the style wasn't easy to
           | copy, and now suddenly they don't get paid; their work is
           | simply analysed by a machine and reproduced on demand.
           | 
           | It's like building your security on hard-to-brute-force
           | secrets and then someone makes a machine that instantly
           | brute-forces any secret. It's a similar kind of disaster,
           | with the difference that a human being can't just switch to
           | doing something else, and the value they added to society
           | is not compensated.
        
       | eecc wrote:
       | Haven't seen such a feeding frenzy since Napster and MP3. This
       | is a watershed moment.
        
       | Zetobal wrote:
       | I bet that the training set would actually improve in quality
       | without Getty images...
        
         | ec109685 wrote:
         | Long term, these models will be fine without Getty's work.
         | Feels like the last gasp of a company that will be disrupted.
        
       ___________________________________________________________________
       (page generated 2023-02-05 23:00 UTC)