[HN Gopher] An IP attorney's reading of the Stable Diffusion cla...
       ___________________________________________________________________
        
       An IP attorney's reading of the Stable Diffusion class action
       lawsuit
        
       Author : spiffage
       Score  : 84 points
       Date   : 2023-01-26 14:57 UTC (8 hours ago)
        
 (HTM) web link (katedowninglaw.com)
 (TXT) w3m dump (katedowninglaw.com)
        
       | Animats wrote:
       | That author makes the point that copyright registration (which
       | you do online with the Library of Congress in the US)[1] is
       | required for copyright enforcement litigation. And, quite
       | possibly, it may be required for DMCA enforcement.
       | 
       | Now, that could work out. Major movie studios and recording
       | companies do file copyright registrations and submit a deposit
        | copy. But few others bother. It seems that you can _send_ a DMCA
        | takedown request without a copyright registration, but you can't
        | _enforce it in court_ without one.[2] This raises the question:
        | if you, as a service, receive a DMCA takedown request, should
        | you ask the requestor for proof of copyright registration, and
        | if they don't provide it, ignore the request?
       | 
       | [1] https://www.copyright.gov/registration/
       | 
       | [2] https://www.traverselegal.com/blog/is-a-registered-
       | copyright...
        
         | rebuilder wrote:
         | Is this requirement to register specifically a feature of the
         | DMCA? It seems quite surprising if, as the article claims,
         | "people who don't have registered copyrights cannot enforce
         | their copyrights in court."
         | 
         | That would mean that the vast majority of artwork posted online
         | is essentially free to exploit in the USA, since I'm sure most
         | people do not routinely register their works with the copyright
         | office before posting them.
        
         | mcbits wrote:
          | Unless they're legally obligated to show proof of copyright
          | registration for the takedown notice to be valid, it would be
          | risky to assume they didn't register just because they didn't
          | show proof.
        
           | Animats wrote:
           | Most of this revolves around the "safe harbor" provisions of
           | the DMCA. That is, doing a takedown without authenticating
           | the ownership of the copyright provides immunity against
           | being sued for contributory infringement. But to actually win
           | such a lawsuit, the purported copyright owner would have to
           | show proof of registration.
           | 
           | This suggests an online process which looks like this:
           | 
           | * US Service provider offers web page for DMCA notices.
           | 
           | * Web page requests that the user enter copyright
           | registration info.
           | 
           | * If user fails to provide registration info, web page offers
           | links to various national copyright registration sites to
           | register a copyright. A payment receipt for copyright
           | registration is acceptable as temporary proof of
           | registration, but must be followed up within some period of
           | time by actual proof of registration.
           | 
           | * Temporary proof of registration is enough for a takedown,
           | but the material will go back up if full proof is not
           | submitted later.
           | 
           | This would put a big dent in nuisance DMCA claims. The
           | service provider might get sued occasionally, but for big
           | providers, it's probably worth litigating this once or twice.
           | The companies that have valuable IP file copyright
           | registrations. Disney will be able to show a copyright
           | registration on all their movies.
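Sketched as code, the intake flow proposed above might look like the following (all field and function names are hypothetical, purely to make the decision logic concrete):

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical data model for the DMCA-intake flow sketched above.
@dataclass
class TakedownNotice:
    registration_number: Optional[str] = None  # full proof of registration
    payment_receipt: Optional[str] = None      # temporary proof (fee paid)

def takedown_action(notice: TakedownNotice) -> str:
    """Decide what the service provider does with a submitted notice."""
    if notice.registration_number:
        return "take down"                     # full proof: remove material
    if notice.payment_receipt:
        # Temporary proof: take down now, but restore the material if
        # full proof of registration is not submitted within some window.
        return "take down pending full proof"
    # No proof at all: offer links to national registration sites.
    return "request registration info"

print(takedown_action(TakedownNotice(payment_receipt="RCPT-001")))
```

The follow-up window and restoration step would need real plumbing (timers, re-publication), but the branch structure mirrors the three bullet points above.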
        
       | dns_snek wrote:
       | > Stability AI has already announced that it is removing users'
       | ability to request images in a particular artist's style and
       | further, that future releases of Stable Diffusion will comply
       | with any artist's requests to remove their images from the
       | training dataset. With that removal, the most outrage-inducing
       | and troublesome output examples disappear from this case, leaving
       | a much more complex and muddled set of facts for the jury to wade
       | through.
       | 
       | How can this possibly be a valid good faith argument? Either
       | they're in breach of authors' copyright which extends to _every_
       | piece of art that they included in the dataset without
        | permission, or they're in the clear and aren't obligated to
       | respond to removal requests.
       | 
       | This reads like damage control to me in an effort to temporarily
       | silence the loudest critics.
        
         | williamcotton wrote:
         | > This reads like damage control to me in an effort to
         | temporarily silence the loudest critics.
         | 
          | I think it is to avoid any common-law claims related to the
          | publicity rights of the plaintiffs. It seems like something
         | that a legal team would flag as an unnecessary risk for the
         | product. Removing their names and images from the training data
         | doesn't impact the usefulness of the model while at the same
         | time creating a much smaller surface area for collecting
         | subpoenas.
        
         | benlivengood wrote:
         | The LAION-5B dataset is metadata and URI pairs; all the images
         | are publicly accessible on the Internet.
         | 
          | Stable Diffusion's U-Net is trained to remove noise from images
          | in latent space, which the variational autoencoder (VAE)
          | converts to and from pixel space. CLIP embeddings improve the
          | U-Net's denoising step by exploiting the correlations between
          | human-language descriptions and the pixel images. Neither the
          | U-Net nor the VAE is trained to interpolate or reproduce images
          | from the training set; if that happened, the model would be
          | overfitted and loss on the validation set would be terrible.
          | The VAE is trained to produce a latent space that can
          | accurately encode and decode any pixel image, and the U-Net is
          | trained to remove Gaussian noise from the latent space.
         | 
          | Stable Diffusion v2 at 16-bit precision is ~3 GB of data. It
          | was trained on hundreds of millions of images (a minimum of
          | 170M in the 512x512 step alone). That leaves a maximum of ~20
          | bytes per image that could conceivably be a copy, which is
          | certainly not enough to directly reproduce either the style or
          | contents of any individual image.
         | 
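As a quick sanity check, the bytes-per-image arithmetic above can be reproduced directly, using the commenter's own figures (~3 GB checkpoint, at least 170M images):

```python
# Back-of-envelope check of the "~20 bytes per image" estimate above.
model_bytes = 3 * 1024**3          # ~3 GB of 16-bit weights
training_images = 170_000_000      # lower bound: 512x512 step alone

bytes_per_image = model_bytes / training_images
print(round(bytes_per_image, 1))   # under 20 bytes per image
```

Even treating the whole checkpoint as "copied data" and using the lower bound on image count, the budget per training image is under 20 bytes.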
         | There is no artwork included in Stable Diffusion. There is a
         | semantic representation of how images are composed of varied
         | subjects represented in the latent space and what pixel
         | probabilities over those subjects relate to human language
         | phrases during decoding, and finally a method to remove noise
         | from the semantic representation, e.g. starting with a blank or
         | random canvas and interpreting what may be there, iteratively
         | guided by CLIP embeddings. If you give Stable Diffusion an
         | empty CLIP embedding you get a random human-interpretable image
         | obeying the distribution of the learned latent space.
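A toy caricature of the sampling loop described above, with a stand-in "denoiser" in place of the learned U-Net and made-up tensor shapes (nothing here reflects actual Stable Diffusion internals; it only illustrates the iterate-from-noise structure):

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_denoiser(latent, text_embedding):
    # Stand-in for the U-Net: predicts the noise to subtract,
    # conditioned on a text embedding. A real model is a large
    # trained network; this linear rule is purely illustrative.
    return 0.1 * latent + 0.01 * text_embedding

latent = rng.standard_normal((4, 64, 64))   # start from pure noise
text_embedding = np.zeros((4, 64, 64))      # "empty" CLIP embedding

for _ in range(50):                         # iterative denoising steps
    latent = latent - toy_denoiser(latent, text_embedding)

# A real pipeline would now run the VAE decoder on `latent`
# to map the denoised latent back to pixel space.
print(latent.shape)
```

With an empty embedding, each step just shrinks the latent toward the toy model's "learned" distribution; the real loop instead steers the latent toward whatever the CLIP embedding describes.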
        
           | pessimizer wrote:
           | > There is no artwork included in Stable Diffusion.
           | 
           | You might as well say that there's no artwork included in a
           | .jpg, just data that can be used to recreate a piece of
           | artwork using a carefully crafted interpreter.
        
             | astrange wrote:
             | Is there artwork included in libjpeg?
        
             | williamcotton wrote:
             | There's no artwork in the 1s and 0s. There's an artwork
             | when you render it to a screen.
             | 
              | It is not copyright infringement if I go to Disney's
              | website, download a JPEG, convert that JPEG to 1s and 0s,
              | print just the 1s and 0s (not the image, and not ASCII art
              | of the image, as if on a printing press made up of only
              | [1] and [0] character blocks), and sell that. Yes, the 1s
              | and 0s are mathematically derived from the image, but the
              | printed 1s and 0s are not a visual derivative of the
              | Disney image. That is, no one is going to buy a t-shirt of
              | 1s and 0s instead of a Mickey t-shirt. Anyone can go to the
              | Disney website and get those same 1s and 0s.
             | 
             | Again, anyone can go to the Disney website and get those
             | same 1s and 0s, so this is not at all about access. This is
             | about putting things on t-shirts and selling them.
        
           | N1ckFG wrote:
            | AFAIK, theoretically you could reproduce any image in the
            | training set using the full weights (not a fraction of them)
            | and the correct prompt. In practice, since this is an
            | extremely lossy process, some or most of them aren't
            | reproducible. For this specific case, I suspect it'll come
            | down to whether somebody in the class can pass a test like
            | this: https://arxiv.org/abs/2212.03860
        
         | counttheforks wrote:
         | The models are already released. They can't retroactively
         | censor released models.
        
         | hehdhdhkf wrote:
         | Where is the form to remove my reddit comments from chat gpt
         | training data? Or my blog posts from gpt training data? I have
         | a paragraph on the Internet that someone read and got an idea -
         | I want my royalties.
         | 
            | These artists' complaints are ridiculous, and are being made
            | by people who don't understand how things work.
         | 
         | If some other person draws a picture in their "style", no one
         | has to ask permission. That's not a thing.
         | 
         | They either don't understand how it works or they are just
         | upset that a computer can make art as good as (or better than)
         | they can in a fraction of the time.
         | 
         | All knowledge workers and creatives are going to face this in
         | the future. It's going to suck, but it would be great if we all
         | could try to understand reality first.
        
           | dns_snek wrote:
           | Do you want to live in a future where artists don't make
           | original art, musicians don't make music, book writers don't
           | write, and so on, all because AI companies can replicate 1000
           | different copies in their style or merely remix it for
           | marginally $0 cost, washed of all original copyright?
           | 
           | > All knowledge workers and creatives are going to face this
           | in the future. It's going to suck
           | 
           | This is not a given. It's up to us and the copyright law.
           | Real original work should be compensated appropriately unless
           | you're proposing that we accelerate deployment of universal
           | basic income and completely abolish copyright law.
           | 
           | I have a feeling you might not like the violent outcome if
           | you effectively strip original creators of their copyright,
           | give corporations the right to effectively generate infinite
           | profit off the backs of their work and tell the creators (and
           | other people whose jobs will be automated away) to pound sand
           | when they ask how they're supposed to pay rent from now on.
        
             | nickthegreek wrote:
             | Just an FYI, many artists create with no financial
             | incentive.
        
             | pixl97 wrote:
             | Do you want to live in a future where anything 'original'
             | an artist creates has now blocked anyone else on the planet
             | for the next 99 years and you must pay them royalties?
             | Because we already have Disney now and they suck quite a
             | bit.
             | 
              | I honestly want to live in a world where 'worrying about
              | paying rent' is not a problem we're concerned with, and a
              | world with AI that can create makes us far more apt to
              | achieve that than the status quo we've been following so
              | far.
        
               | dns_snek wrote:
               | > Do you want to live in a future where anything
               | 'original' an artist creates has now blocked anyone else
               | on the planet for the next 99 years and you must pay them
               | royalties?
               | 
                | No, I don't want that, but there's a very clear
                | distinction between people who are truly inspired by each
                | other's work and produce ordinary human output that maybe
                | covers their bills, and AI that can pump out billions of
                | replicas per day to drown out all original work, with the
                | entirety of that value captured by some corporation.
               | 
                | > I honestly want to live in a world where 'worrying
                | about paying rent' is not a problem we're concerned with,
                | and a world with AI that can create makes us far more apt
                | to achieve that than the status quo we've been following
                | so far.
               | 
               | Hey, me too. But those human issues should be addressed
               | first, not after automation is allowed to wipe out
               | people's livelihoods. I'm not anti-AI, I'm anti-big tech
               | that seeks to exploit billions of people's original work
               | for their own benefit.
        
             | hdjdnnhddddd wrote:
              | You mean the artists who use similar brushes in Photoshop
              | and don't know how to paint, and musicians who use Logic,
              | Auto-Tune, and loop samples and don't know how to play an
              | instrument?
             | 
              | Copyright what? Someone's brain? You can copyright a
              | specific work or a character, but do you actually want to
              | live in a world where someone can copyright the color red
              | with a dark black line, or the G# chord?
             | 
              | Real artists are going to make art, and musicians are going
              | to make music. People who do creative work do it to express
              | themselves, their point of view, or to say something.
              | 
              | Corporate art exists to sell you soda - I am not sure your
              | argument lands quite like you want it to.
              | 
              | I am glad this is going to court because, with my
              | understanding of how neural nets work, I fail to see how
              | any copyright is being infringed.
        
             | epistemer wrote:
              | This is the same stupid argument that MP3s would destroy
              | music, instead of embracing the new marketing opportunities
              | they represent.
              | 
              | IMO an artist who wants their name out of the dataset is a
              | moron. In the end, people copying an artist's style over
              | and over will just send the price of originals through the
              | roof. This is completely obvious.
        
               | halfnormalform wrote:
               | Just like when Napster resulted in musicians becoming
               | super rich by selling their originals to people who found
               | their music for free? Those things don't happen in real
               | life.
        
               | km3r wrote:
                | The music industry is the biggest it's ever been... so
                | yes?
        
             | CWuestefeld wrote:
             | > a future where artists don't make original art, musicians
             | don't make music, book writers don't write, and so on, all
             | because AI companies can replicate 1000 different copies in
             | their style
             | 
             | This argument is assuming its own conclusion, that such a
             | situation must be bad. But I don't think that's necessarily
             | true.
             | 
             | If somebody can make 1000 different derivatives _that the
             | public likes as much as the originals_ , then it must be
             | that in whatever criteria the public is interested in,
             | these works are just as good. If they were inferior, the
             | public wouldn't accept them as substitutes. The fact that
             | they are (hypothetically) accepted indicates that the
             | public is OK with them.
             | 
             | For my own personal aesthetics, I would like to think that
             | today's popular music, which is written by some combination
             | of algorithm and committee, and produced through tools that
             | correct the performance via autotune, quantization, etc.,
                | is inferior to the music that I enjoy. But the fact that
                | the public seems to like this music (and indeed, they
                | like music generated this way even though it's not even
                | cheaper for them to consume) suggests that we as a
                | society are getting what we want, and who am I to put a
                | normative judgment on that?
        
               | haswell wrote:
               | This reduces the motivation to create art to a monetary
               | one, and the value people derive from art to a purely
               | aesthetic one.
               | 
               | In our future AI-infested world, I'll personally seek out
               | "certified non-AI" creators because part of what inspires
               | me is not just the content, but the creation of it.
               | 
               | Art is not just a destination.
        
               | CWuestefeld wrote:
               | I think when most people consume art, they're really
               | treating it as _entertainment_ regardless of its artistic
               | merit.
               | 
                | And as I mentioned above, I think the current music
                | industry is already there: the vast majority (by sales)
                | of music entertainment is produced by algorithm and by
                | committee in order to drive sales. Despite this, in the
                | genres I care about at least, the volume of high-
                | artistic-merit music (by variety) is probably at its
                | highest point in decades, if not ever. To be sure, this
                | has meant that fewer artists are able to make a living
                | purely off their music. But this is a return to the norm:
                | the rise of the "star" in the late 20th century has been
                | an aberration.
               | 
               | On the other side of the coin, AI assistance will be (I
               | expect) a huge democratizing force. Recently we've seen
               | computational photography enabling people to take photos
               | of astounding quality with just their phones. And the
                | results of machine learning are allowing artists to make
                | huge improvements in post-production as well.
               | 
               | I imagine that the stuff we've been seeing over the past
               | year, with ChatGPT, Stable Diffusion, and such
               | technologies, will be purposed towards (among other
               | things) tools that enable greater productivity for the
               | serious artist, and putting the means in the hands of
               | those who would be otherwise unable to get to table
               | stakes. I've started working on a short story myself,
               | using ChatGPT to help me work through some plot points.
               | 
               | So yeah, we'll get a lot of meritless dreck suitable only
               | for base entertainment. But we'll also see a
               | proliferation of art and of artists, as productivity
               | increases and new entrants are enabled.
        
             | thereisnospork wrote:
             | > Do you want to live in a future where artists don't make
             | original art, musicians don't make music, book writers
             | don't write, and so on, all because AI companies can
             | replicate 1000 different copies in their style or merely
             | remix it for marginally $0 cost, washed of all original
             | copyright?
             | 
             | Yes, much in the same way that I am glad I live in a future
             | where scribes aren't required to put text on paper: There
             | is a massive amount of efficiency to be gained and
             | enjoyment to reap for everyone who doesn't happen to be
             | employed as an artist.
        
               | dns_snek wrote:
                | I'm not against efficiency improvements, but the value
                | created by these improvements has to flow back to society
                | at large in one way or another. I'm not anti-AI; I'm just
                | arguing that artists and other creative professionals
                | should be compensated for their work before that work is
                | included in a for-profit ML model. That's hardly radical.
               | 
                | Current proposals don't have any intention of addressing
                | that; they just silently kick the can down the road. What
                | happens when nearly everything is automated and there are
                | no new profitable jobs that people can take on?
        
               | km3r wrote:
               | The comparison to scribes is a perfect analogy. The
               | 'scribing' of translating the idea of painting to an
               | actual painting is being made more efficient. The actual
               | creativity is what the original idea is, not the skill to
               | put it on paper.
        
               | dns_snek wrote:
               | None of that addresses anything I've said.
        
               | thereisnospork wrote:
               | It does though:
               | 
                | The value will flow to society via cheap books (art).
                | Value will also flow to authors (artists)[0] through
                | easier creation, distribution, and replication. That
                | value will come from the scribes (painters, sculptors,
                | etc.)[1] whose contribution is rote.
               | 
               | [0]The intentional distinction here is that the value is
               | in the conception of art, not in the execution of it in
               | media.
               | 
               | [1]Imagine how much more productive Mozart or Beethoven
               | could have been had an AI-powered orchestra existed in
               | their time.
        
             | letmevoteplease wrote:
             | The technology is coming one way or another. You can stop
             | Stability AI but you can't stop OpenAI (Microsoft) or
             | Google, who can afford to license training data from
             | companies like Shutterstock. A restrictive interpretation
             | of copyright law will just keep it in the hands of the
             | biggest corporations.
        
             | williamcotton wrote:
             | > Do you want to live in a future where artists don't make
             | original art, musicians don't make music, book writers
             | don't write, and so on, all because AI companies can
             | replicate 1000 different copies in their style or merely
             | remix it for marginally $0 cost, washed of all original
             | copyright?
             | 
             | My creative output per minute has probably increased
             | threefold in the last few months from incorporating these
             | tools into my workflow. What I've been making doesn't look
             | or sound like anything that anyone else is making.
             | 
             | You're going to have a really hard time using Stable
             | Diffusion to make quirky cartoon daily desk calendars in
             | the style of plaintiff Sarah Anderson. You're going to have
             | a much better time if you think more like a Creative
             | Director and have less of an idea ahead of time of what the
             | tool is going to give you... so you can iterate, much like
             | a painter iterates while working.
             | 
              | These tools require creative agency from the artist using
              | them in order to produce things that people find
              | interesting, entertaining, valuable, or otherwise
              | meaningful, so I really don't see "corporation goes
              | brrrrrr" doing anything other than flooding the lowest-
              | common-denominator content feeds on the internet, in
              | contrast with the highest-quality art made with these
              | tools in incredibly transformative ways.
        
           | EamonnMR wrote:
           | > Where is the form to remove my reddit comments from chat
           | gpt training data? Or my blog posts from gpt training data?
           | 
            | More pointedly, how do I keep my GPL'd code from spewing,
            | license-free, out of Copilot?
        
             | tadfisher wrote:
             | I think that's the point of this blog post: it doesn't
             | matter if the inputs are copyrighted, it matters if the
             | output is infringing. It appears to be almost impossible to
             | directly recreate a source image with SD, but it seems
             | Copilot tends to produce a single input as its output,
             | verbatim. Copilot isn't doing "synthesis" as does SD, it's
             | acting more like a search engine.
        
         | epistemer wrote:
         | It seems highly foolish from a marketing perspective for the
         | artist too.
         | 
          | I don't see how more people copying an artist's style would
          | not increase the value of originals.
         | 
         | A smart artist here should promote that their style is staying
         | in the dataset. It is as good free publicity as they will ever
         | get.
        
           | jonhohle wrote:
            | It depends on whether the "style" stays associated with the
            | original artist or the derivatives usurp the original.
           | 
           | e.g. Ask 10 random people who wrote the song "Hurt" and 9 out
           | of 10 will probably say Johnny Cash.
        
           | sct202 wrote:
           | It's not foolish at all. There's no value prop for an artist
           | to advertise their work in the model as 1 of 400+ million. SD
            | doesn't tell you anything about the art that inspired the
            | output, and no one will ever know the artist's work was
            | used, so this 'exposure' is as good as $0.
        
         | pessimizer wrote:
         | Sounds like an opt-out dark pattern. US law is unbelievably
         | aggressive when it comes to issues of copyright, and makes
         | _copyright itself_ opt-out, i.e. everything you produce is
         | copyrighted, and you have to license it in order to _remove_
         | that automatic copyright. But when it comes to building these
          | models to reproduce imitations of other people's work,
         | suddenly copyright gets loosey-goosey.
         | 
         | Notice that it's a legal posture that implicitly condemns
         | Copilot, which ignores explicitly formulated opt-outs in the
         | form of licensing.
        
           | jacquesm wrote:
           | This is not just the US, this also applies to many, many
           | other countries.
        
       | acomjean wrote:
        | > "The output represents the model's understanding of what is
        | useful, aesthetic, pleasing, etc. and that, together with data
        | filtering and cleaning that general image generating AI
        | companies do, is what the companies consider most valuable, not
        | the training data."
       | 
        | This didn't make any sense to me. Without the curated training
        | data (images), how are they making the models?
       | 
       | No matter what, putting images into your machine then selling the
       | output generated with them and not compensating the original
       | creators is going to be seen as problematic. Machines aren't
       | people.
        
         | wvenable wrote:
         | > Machines aren't people.
         | 
         | There's no reason why that is the significant detail. Why does
         | it matter? If you can look at millions of images over your
         | lifetime and faithfully reproduce famous works of art by hand,
         | aren't you just as wrong?
        
           | shagie wrote:
           | Machines can't create copyrighted works.
           | 
           | Setting aside the question of "is the model a derivative
           | work", running the program _cannot_ create a work that is
           | copyrighted. Only humans (and not monkeys) can hold a
           | copyright.
           | 
            | And thus, the questions are: "is generating a model based on
            | the data set a derivative work?" and the unasked question "is
            | asking the model to generate a work in the style of {artist}
            | a derivative work _by the person asking the model_?"
        
             | wvenable wrote:
             | > Running the program cannot create a work that is
             | copyrighted
             | 
                | I think you're going to need more clarity on what you
                | mean by that. Programs are used to create copyrighted
                | works all the time. And machines can and do create copies
                | of other people's copyrighted works.
             | 
                | > And thus, the questions are: "is generating a model
                | based on the data set a derivative work?" and the unasked
                | question "is asking the model to generate a work in the
                | style of {artist} a derivative work by the person asking
                | the model?"
             | 
             | My point is you can take the machine or model out of the
             | question entirely. If you learn stuff and then produce
             | something new with what you learned, is that a derivative
             | work? That's already a complex question but it has nothing
             | to do with how you learned it. It depends entirely on the
             | output and has little to do with the input.
        
               | shagie wrote:
               | https://www.theverge.com/2022/2/21/22944335/us-copyright-
               | off...
               | 
               | > The US Copyright Office has rejected a request to let
               | an AI copyright a work of art. Last week, a three-person
               | board reviewed a 2019 ruling against Steven Thaler, who
               | tried to copyright a picture on behalf of an algorithm he
               | dubbed Creativity Machine. The board found that Thaler's
               | AI-created image didn't include an element of "human
               | authorship" -- a necessary standard, it said, for
               | protection.
               | 
               | https://www.wipo.int/wipo_magazine/en/2017/05/article_000
               | 3.h...
               | 
               | > Creating works using artificial intelligence could have
               | very important implications for copyright law.
               | Traditionally, the ownership of copyright in computer-
               | generated works was not in question because the program
               | was merely a tool that supported the creative process,
               | very much like a pen and paper. Creative works qualify
               | for copyright protection if they are original, with most
               | definitions of originality requiring a human author. Most
               | jurisdictions, including Spain and Germany, state that
               | only works created by a human can be protected by
               | copyright.
               | 
               | https://www.copyright.gov/comp3/chap300/ch300-copyrightab
               | le-...
               | 
               | > 306 The Human Authorship Requirement
               | 
               | > The U.S. Copyright Office will register an original
               | work of authorship, provided that the work was created by
               | a human being.
               | 
               | > The copyright law only protects "the fruits of
               | intellectual labor" that "are founded in the creative
               | powers of the mind." Trade-Mark Cases, 100 U.S. 82, 94
               | (1879). Because copyright law is limited to "original
               | intellectual conceptions of the author," the Office will
               | refuse to register a claim if it determines that a human
               | being did not create the work. Burrow-Giles Lithographic
               | Co. v. Sarony, 111 U.S. 53, 58 (1884). For representative
               | examples of works that do not satisfy this requirement,
               | see Section 313.2 below.
               | 
               | > 313.3 Works That Lack Human Authorship
               | 
               | > As discussed in Section 306, the Copyright Act protects
               | "original works of authorship." 17 U.S.C. SS 102(a)
               | (emphasis added). To qualify as a work of "authorship" a
               | work must be created by a human being. See Burrow-Giles
               | Lithographic Co., 111 U.S. at 58. Works that do not
               | satisfy this requirement are not copyrightable.
               | 
               | > The U.S. Copyright Office will not register works
               | produced by nature, animals, or plants. Likewise, the
               | Office cannot register a work purportedly created by
               | divine or supernatural beings, although the Office may
               | register a work where the application or the deposit
               | copy(ies) state that the work was inspired by a divine
               | spirit.
               | 
               | > ...
               | 
               | > Similarly, the Office will not register works produced
               | by a machine or mere mechanical process that operates
               | randomly or automatically without any creative input or
               | intervention from a human author. The crucial question is
               | "whether the 'work' is basically one of human authorship,
               | with the computer [or other device] merely being an
               | assisting instrument, or whether the traditional elements
               | of authorship in the work (literary, artistic, or musical
               | expression or elements of selection, arrangement, etc.)
               | were actually conceived and executed not by man but by a
               | machine."
               | 
               | ----
               | 
               | A _machine_ cannot create a copyrighted work. The human
               | who uses it can - and it is the human who uses the ML
               | model to create a derivative work - not the ML model
               | itself.
               | 
                | Thus any derivative-work infringement from using Stable
                | Diffusion comes from the human creating the prompt. This
                | doesn't attempt to answer whether the model itself is a
                | derivative work.
        
               | wvenable wrote:
               | I guess your point was so obvious that I missed it (and
               | that's not to belittle it -- it's a good point). Machines
               | are both created and operated by humans for human
               | purposes. Humans are using a machine, created by humans,
               | to create art for human consumption.
               | 
               | Unoperated machines are not spontaneously creating art of
               | their own motivation.
        
           | EamonnMR wrote:
           | Humans can be trusted not to do that thing, and get in
           | trouble if they get caught.
        
             | wvenable wrote:
             | Is it not humans giving the AI prompts to do these things?
             | Why does the machine in the middle matter?
        
         | cma wrote:
         | > No matter what, putting images into your machine then selling
         | the output generated with them and not compensating the
         | original creators is going to be seen as problematic. Machines
         | aren't people.
         | 
         | What about a company where you submit images and it tells you
         | which faces are in them?
        
         | seydor wrote:
         | They are never going to compensate the artists. It's cheaper to
         | hire 1000 designers to make 100000 images of artistic styles
         | they are going for
        
           | krisoft wrote:
            | Uhm. What you are proposing is compensating the artists.
            | Those 1000 designers are then the artists in question.
           | 
           | > It's cheaper to hire 1000 designers to make 100000 images
           | of artistic styles they are going for
           | 
           | I bet that you are massively underestimating the cost of
           | that.
        
           | gugagore wrote:
           | I think you underestimate the scale of data these models are
           | trained on by many orders of magnitude.
        
             | Ajedi32 wrote:
             | I suspect you might be able to get pretty good results by
             | training the system on video/CGI/other images that can be
             | easily mass produced, then fine-tuning on a much smaller
             | number of drawings and other stylized images.
        
       | quitit wrote:
        | Interestingly, it shows the tenuous nature of the plaintiffs'
        | case, even before getting into their larger errors.
        | 
        | Since reasonably simplified information about SD is available,
        | and the plaintiffs could have involved an expert to review
        | their claims, it raises the question of whether the function of
        | the lawsuit is more about rattling chains than about the merits
        | of the argument - i.e., a deliberate ploy to extract a
        | settlement.
        
         | jerf wrote:
         | Ultimately, this is just something that has to be solved with
         | legislation, not a court case. It's too novel a setup for a
         | court case to deal with under existing frameworks.
         | 
         | I think one issue is just that of scale. I personally tend to
         | agree that there's something icky with just slurping up
         | literally everyone's content, then producing a tool that will
         | then proceed to put them out of business _en masse_. But
         | proving that illegal under current law is certainly going to be
         | a challenge.
         | 
         | I have not read the original complaint but it surprises me that
         | the lawsuit doesn't have a much stronger focus on this aspect.
         | Copyright law is very concerned about not destroying the market
         | for a given work through infringement, but this is a case about
         | destroying the market for entire artists at a stroke.
         | 
         | But that's a hard argument in court. There's no legal basis for
         | claiming damages because the entire market _itself_ is being
         | destroyed.
         | 
         | Though I'm not sure it's any weaker than the other claims
         | trying to be made. The basic problem is, this isn't illegal in
         | any sense. I don't just mean "illegal in that it must be
         | banned" but _any_ level of gradation in between, in the
         | licenses, in requiring compensation, in any sort of regulation
         | whatsoever. Technology has simply outrun law again.
        
           | sebzim4500 wrote:
           | Even in the strictest possible strengthening of IP law, where
            | you need the artist's written permission before feeding their
           | data into a neural network, I think the market for artists is
           | doomed.
           | 
           | Disney can train a model from every frame of their video
            | library, as well as whatever they can find that is
            | unambiguously public domain. Then they could hire a few
            | hundred artists to draw whatever the model is still bad at
            | by the end of this process, for fine-tuning.
        
           | simiones wrote:
           | I think it's not going to be that hard to argue that the
           | company is infringing the copyright of those whose images
              | they are using. Especially once the judge is shown how similar
           | the output of SD can be to a particular artist's images with
           | the right prompts (proving that SD has memorized a
           | significant amount of those images).
        
             | cwkoss wrote:
              | Some artists' images just don't contain much entropy, though.
             | 
             | If an AI art engine outputs a frame of solid blue, is it
             | infringing the copyright of Yves Klein's solid blue "IKB
             | 79"?
             | 
             | I think that some artists' styles can be accurately
             | replicated without training on any of their work: because
             | the artists' style is generic enough that it can be
             | exhaustively encoded via the works of others.
             | 
             | This seems like a bad test because generic barely-creative
             | works are much more easily generated by AI engines
             | regardless of the source training data. I wonder if we're
             | going to see IP troll style behavior from artists drawing
             | many obvious things so they'll have standing to sue (and
             | negotiate a 'fuck off' settlement) with AI art engines.
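              | The entropy point can be made concrete with a toy
              | calculation (illustrative only, not a legal test): a
              | solid-color image has essentially zero entropy in its
              | pixel-value histogram, while a noisy one sits near the
              | maximum.

```python
import numpy as np

def pixel_entropy_bits(img):
    # Shannon entropy of the pixel-value histogram, in bits.
    _, counts = np.unique(img, return_counts=True)
    p = counts / counts.sum()
    return float((p * np.log2(1 / p)).sum())

solid = np.full((64, 64), 128, dtype=np.uint8)   # one flat tone
noisy = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)

print(pixel_entropy_bits(solid))   # 0.0 - nothing to "memorize"
print(pixel_entropy_bits(noisy))   # close to the 8-bit maximum
```

              | By this crude measure a generated solid-blue frame could
              | "match" IKB 79 without having learned anything from it.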
        
               | ROTMetro wrote:
               | This example has already been countered multiple times.
               | You can not copyright a solid blue painting, you can only
                | copyright an 'installation' of such. People in these
                | discussions act as if copyright has only just come into
                | being and hasn't been legislated/litigated for decades.
               | But then, these people can't even be bothered to look at
               | copyright requirements that there be at the least an
               | 'anonymous artist' that created the work to be
               | copyrighted (and hint, a prompt writer does not count as
                | such, any more than a machinist inputting CAD
               | suddenly owns copyright to each factory part their
               | machine spits out).
        
       | cycomanic wrote:
        | I don't really understand the argument that Danger Mouse's Grey
        | Album is different from just a random "mash-up" because of the
        | artistic merit behind the Grey Album. Sure, the Grey Album is
        | likely much more pleasant to listen to and would likely be
        | considered worthy of copyright itself, where a random mash-up
        | might not be. That doesn't change the fact that Danger Mouse
        | had to ask permission to use Jay-Z's and the Beatles' work (and
        | likely had to pay), or would otherwise have violated copyright.
        | So how is that argument relevant? Nobody is arguing that
        | composing images via Stable Diffusion prompts (like making a
        | collage) is not a creative process. The argument is whether one
        | has to have permission/licence from the original creators.
        
       | sebstefan wrote:
       | > Stability AI has already announced that it is removing users'
       | ability to request images in a particular artist's style
       | 
       | I hope it returns when they win and get rid of this legal
       | bullying.
        
         | jeroenhd wrote:
         | An artist's style is not copyrightable so I doubt it makes much
         | of a difference. My guess is that showing good faith will make
         | the lawsuit go over easier, because there's nothing illegal
         | about paying someone to copy someone else's style (and not just
         | a replica).
        
           | threeseed wrote:
           | But the artist's work itself is copyrightable.
           | 
            |  _Any_ use of that work without permission (and thus
            | attribution/compensation) is the problem.
        
             | tadfisher wrote:
             | Fair use is a thing.
        
         | sebzim4500 wrote:
         | I think this restriction was more about trying to shut up some
         | very vocal people on social media and less about the law.
         | 
          | Copying an artist's style is legal in every jurisdiction in
         | which Stability operates.
        
         | ben_w wrote:
         | I don't.
         | 
          | Information comes with many different rights: _copy_-right is
          | the right to make copies; "moral rights" were mentioned in a
         | few of my UK job contracts and that's "the right to be
         | identified as the author of a work"; database rights are for
         | collections of statements of fact that are not eligible for
         | copyright but which were deemed to be worth protecting anyway
         | for much the same reasons.
         | 
         | Even if copyright is totally eliminated from law by the mere
         | existence of these AI[0], we may well retain the aforementioned
         | "moral rights". And even if it is totally legal, there's also a
         | strong possibility of it being considered gauche to use an AI
         | trained on the works of those that don't like this.
         | 
         | [0] https://kitsunesoftware.wordpress.com/2022/10/09/an-end-
         | to-c...
        
           | blibble wrote:
           | it's quite common in UK contracts of employment to try and
           | transfer the moral rights to the Employer
           | 
           | but this isn't enforceable, they cannot be transferred
        
           | hnfong wrote:
           | Moral rights is indeed "the right to be identified as the
           | author of a work".
           | 
           | I don't think it means the author has a right to all similar
           | styles. If I can legally ask somebody to paint me something
           | in the style of a famous (living) artist, that person
           | presumably having seen and studied their famous works for a
           | while, why should I not be able to ask the AI to do the same
           | thing?
           | 
           | (I understand there might be people who think even a human
           | person emulating the style of another artist is morally
           | wrong, but at least that's a consistent argument)
        
             | ben_w wrote:
             | Moral arguments aren't very consistent, in my experience --
             | when I first heard someone suggest that people start with
             | feelings and _then_ create a narrative (not necessarily
             | coherent) to fit those feelings, a lot of the weird things
             | started to make a lot more sense.
             | 
             | I'm told that for people who think Rolex watches are
             | important, a _fake_ Rolex is much worse than not having a
             | Rolex at all: the item is a signal much like a peacock
             | tail, the signal is de-valued by making things easy.
        
         | anigbrowl wrote:
         | There's a big difference between 'rework my drawing to look
         | like it was painted by Goya' and 'render this drawing in the
         | style of Lisa Frank' or any living visual artist famed for a
         | specific identifiable style as opposed to a particular image.
         | 
         | Comics are one example of an area where individual artists
         | might develop a large body of work in a very distinctive style.
         | You probably know what a Tintin comic (by Belgian artist Herge)
         | looks like. And lots of Manga artists have very specific and
         | instantly identifiable styles. Individual artistry is a little
         | less obvious with popular western comics because the best-known
         | titles tend to be superhero franchises where the
         | characters/story world are owned by a corporation and
         | individual artists come and go.
        
           | nickthegreek wrote:
           | I thought many comics work with a team of artists to create
            | their work, largely by having one person create the style
            | and others learning to mimic it. Let alone having different
           | people do the line work, shading, coloring, etc.
        
             | anigbrowl wrote:
             | Right....which involves a fair amount of productive and
             | administrative labor to assemble and operate. If I can take
             | a collection of scans and feed them into my diffusion
              | appliance and have it pop out an endless supply of new
             | panels 24-7, the economics of the handcrafted original
             | artwork factory start to look really bad.
             | 
             | Upside: the brilliant artist Rohan Kishibe falls under a
             | bus, but while his loss is tragic his artistic legacy lives
             | on - hooray!
             | 
             | Downside: up-and-coming artist Rohan Kishibe is on the
             | verge of breaking through commercially, but the publishers
              | of Shonen Lump, who invested heavily in AI, flood the
             | market with work from Kohan Rishibe, and our hero is
             | derided as a mere imitator.
        
           | mcbuilder wrote:
           | I don't even think it extends to this, it's simply because
           | it's automated. I have no talent in the area, but I know that
           | artists can copy one another's styles. You see it in talented
           | art student master copies, and hell I'd bet most professional
              | cartoonists could draw a page that looks pretty damn close to
           | a series of Tintin panels when you squint.
        
             | anigbrowl wrote:
             | Well yes, automation makes a massive difference because
              | with a machine you can crank out hundreds of panels in a
             | particular style in the time it takes a human artist to do
             | a single one.
             | 
             | In fact, I generated a bunch of Tintin panels between
             | writing my earlier comment and this one, and they're bad
             | but not terrible - mainly because I asked for 'Tintin
             | riding a bicycle [...]' and it's having trouble with things
             | like the bicycle spokes. Two out of the 4 'feel' right in
             | terms of the line drawing style, color palette, foreground-
             | background composition, level of background detail etc.
        
       | kmeisthax wrote:
       | >The complaint includes a section attempting to explain how
       | Stable Diffusion works. It argues that the Stable Diffusion model
       | is basically just a giant archive of compressed images (similar
       | to MP3 compression, for example) and that when Stable Diffusion
       | is given a text prompt, it "interpolates" or combines the images
       | in its archives to provide its output. The complaint literally
       | calls Stable Diffusion nothing more than a "collage tool"
       | throughout the document. It suggests that the output is just a
       | mash-up of the training data.
       | 
       | I've seen the collage tool argument several times, and I don't
       | agree with it. But I can understand _why_ people believe it.
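        | One common counterargument is simple arithmetic: the weights
        | are far too small to be an archive of the training images. A
        | back-of-the-envelope sketch (assuming SD v1's roughly 4 GB
        | checkpoint and LAION-5B's roughly 5 billion training images,
        | both approximate figures):

```python
# Rough size comparison: model weights vs. training set
# (both figures are approximations, not exact values).
model_bytes = 4 * 1024**3            # ~4 GB Stable Diffusion v1 checkpoint
training_images = 5_000_000_000      # ~5 billion images in LAION-5B

bytes_per_image = model_bytes / training_images
print(f"{bytes_per_image:.2f} bytes per training image")  # 0.86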
       | 
       | You see, there's a _very large_ number of people who use AI art
       | generators as a tracing tool. Like, to the point where someone
       | who has never touched one might believe that it literally just
       | photobashes existing images together.
       | 
        | The reality is that there are three ways to use art generators:
       | 
       | - You can tell it to generate an image with a non-copyright-
       | infringing prompt. i.e. "a dog police officer holding a gun"
       | 
       | - You can ask it to replicate an existing style, by adding
       | keywords like "in the style of <existing artist>"
       | 
       | - You can modify an existing image. This is in lieu of the
       | _random seed image_ that is normally provided to the AI.
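        | That last mode can be illustrated with a toy sketch (a
        | simplification, not the actual SD code): txt2img starts
        | denoising from pure random noise, while img2img starts from a
        | noised copy of the user-supplied image, so the user's input
        | directly shapes the output.

```python
import numpy as np

rng = np.random.default_rng(0)

def txt2img_start(shape):
    # Modes 1 and 2: denoising begins from pure Gaussian noise.
    return rng.standard_normal(shape)

def img2img_start(user_image, strength=0.75):
    # Mode 3: denoising begins from the user's image with noise
    # mixed in; lower strength keeps more of the original.
    noise = rng.standard_normal(user_image.shape)
    return (1 - strength) * user_image + strength * noise

user_image = np.ones((8, 8))   # stand-in for an uploaded image
start = img2img_start(user_image, strength=0.5)
```

        | With strength near 1 the third mode degenerates into the first;
        | with strength near 0 it is close to tracing the input.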
       | 
       | That last one is confusing, because it makes people think that
       | the AI itself is infringing when it's only the person using it.
       | But I could see the courts deciding that letting someone chuck an
       | image into the model gives you liability, especially with all of
       | the "you have full commercial rights to everything you generate"
       | messaging people keep slapping onto these.
       | 
       | Style prompting is one of those things that's also legally
       | questionable, though for different reasons. As about 40,000 AI
       | art generator users have shouted at me over the past year, you
       | cannot copyright a style. But at the same time, producing "new"
       | art that's substantially similar to copyrighted art is still
       | illegal. So, say, "a man on a motorcycle in the style of Banksy"
       | might be OK, but "girl holding a balloon in the style of Banksy"
       | might not be. The latter is basically asking the AI to
       | regurgitate an existing image, or trace over something it's
       | already seen.
       | 
       | I think a better argument would be that, by training the AI to
       | understand style prompts, Stability AI is inducing users to
       | infringe upon other people's copyright.
        
       | anigbrowl wrote:
       | Great write-up. SD's removing the ability to imitate styles will
       | probably go a long way to quell objections, though it will be
       | interesting to see if there's a future legal split over the
       | styles of living and dead artists. I don't imagine that anyone
       | would object to 'autoseurat' for example.
       | 
        | I can see a future dispute arising over outpainting (beginning
        | with an existing copyrighted work), but there the infringement
        | and the identity of the infringer (the user, not the toolmaker)
        | are more clear.
        
       | philipwhiuk wrote:
       | It's interesting the IP attorney cites The Grey Album as being an
        | example of something that is legal, when the reality is that
        | the case was never brought because the original artists' wishes
        | made it unattractive for EMI to pursue.
        
       | consumer451 wrote:
       | A lawyer who works on YouTube channel Corridor Crew posted a
       | decent breakdown on this lawsuit recently as well:
       | 
       | https://news.ycombinator.com/item?id=34479857
        
       | shanebellone wrote:
       | I've been saying this since it came out...
       | 
       | Stable Diffusion is equivalent to hip-hop sampling in the 80s and
       | 90s. The outcome is obvious.
        
         | haswell wrote:
          | I've heard this argument on numerous occasions, but I have
          | never heard someone justify it or explain why they believe it.
         | 
         | Are there specific similarities that make you believe these are
         | equivalent scenarios? Not just "it feels thematically similar".
        
           | shanebellone wrote:
           | It's the closest thing to a precedent.
           | 
           | Hip-hop originally recorded and transformed vocals,
           | instruments, and beats to create something new from pieces of
           | something old. The practice occurred without permission and
           | obviously ended up in court. Now sampling requires a
           | licensing agreement. The additional cost has fundamentally
           | changed the genre (over the last 40 years).
           | 
           | Hip-hop and tech both ignored IP rights because neither
           | started with a legal framework and both would have found the
           | additional cost prohibitive.
        
             | haswell wrote:
             | That's helpful. I wasn't aware of the eventual licensing
             | enforcement.
             | 
             | If I'm understanding you correctly, you see the similarity
             | more in how the initial side-stepping of copyright
             | eventually gave way to new licensing rules (or adherence to
             | existing rules).
             | 
             | I've heard similar sentiment trying to make another point
             | entirely - something closer to arguing that the AI is
             | creatively inspired the way humans are, and therefore is by
             | definition not infringing.
             | 
             | I suspect this might be where the flurry of downvotes came
             | from.
        
               | shanebellone wrote:
               | "you see the similarity more in how the initial side-
               | stepping of copyright eventually gave way to new
               | licensing rules"
               | 
               | The systems are similar too despite having completely
               | different internal processes. Both transformed existing
               | IP without permission, to produce sufficiently remixed
               | art as an output. A sufficiently generic abstraction
               | would look very similar despite the disparity of domain.
               | 
               | The primary difference between hip-hop and Stable
               | Diffusion is that AI cannot rationalize, explain, or
               | attribute inspiration to a final product.
               | 
               | There was no aha moment and thus no creativity. It has no
               | vested interest in its work.
        
         | sebzim4500 wrote:
         | I think there is a pretty big difference though. In the case of
         | sampling you can play the original against the new media and
         | show that they are 'the same'. I have no idea how you would go
         | about doing something similar for Stable Diffusion. "Those
         | three pixels look different when you remove image X from the
         | training set" is probably not a convincing argument to anyone.
        
       | layer8 wrote:
       | > future releases of Stable Diffusion will comply with any
       | artist's requests to remove their images from the training
       | dataset.
       | 
       | How does this work? Do they retrain the model from scratch every
       | week? Or is it somehow possible to retroactively remove specific
       | training-set items from the already-trained model?
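        | As far as I know there's no cheap way to surgically delete one
        | training item from already-trained weights ("machine
        | unlearning" is an open research area), so the straightforward
        | route is to filter the dataset and retrain for the next
        | release. A hypothetical sketch of the filtering step (record
        | format and opt-out mechanism are made up):

```python
# Hypothetical opt-out filter applied before the next training run.
optout_urls = {
    "https://example.com/artist/painting-01.png",
}

def filter_training_set(records):
    """Drop records whose source URL appears in the opt-out set."""
    return [r for r in records if r["url"] not in optout_urls]

records = [
    {"url": "https://example.com/artist/painting-01.png"},
    {"url": "https://example.com/other/photo.jpg"},
]
kept = filter_training_set(records)
print(len(kept))  # 1
```

        | Note this only affects future model versions; weights already
        | shipped still reflect the old dataset.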
        
       | mensetmanusman wrote:
       | "LLMs are illegal because anything they see is owned by other
       | people"
       | 
       | The Disney protection act rears its head...
        
       | scotty79 wrote:
       | > Stability AI has already announced that it is removing users'
       | ability to request images in a particular artist's style and
       | further, that future releases of Stable Diffusion will comply
       | with any artist's requests to remove their images from the
       | training dataset.
       | 
        | This is incredibly disheartening. Who knows how long it will
        | take to progress the tech to the point where anyone can train
        | and run models unrestricted, without dealing with lawyer
        | nonsense.
        
         | RandomLensman wrote:
         | Rent seeking by owners of AI machines is OK, but not by
         | copyright owners?
        
           | Kiro wrote:
           | Yes. Abolish all copyright. Are we hackers or not?
        
             | manigandham wrote:
             | Who is "we"?
        
               | Kiro wrote:
               | Participants on a site called Hacker News.
        
             | TylerE wrote:
             | You understand that completely kills OSS as a concept,
             | right?
        
               | krapp wrote:
               | Free and open source software licenses are redundant in a
               | world where copyright and intellectual property laws
               | don't exist, and no form of media (including software)
               | can legally be owned by anyone.
        
               | kmeisthax wrote:
               | Casual reminder that the endgame of the GPL was to make
               | software copyright unenforceable.
        
               | [deleted]
        
               | matheusmoreira wrote:
               | So what? Free software was literally created in reaction
               | to copyright protections getting extended to software.
               | They make no sense in a world without copyright.
               | 
               | By the way, it would also kill proprietary software as a
               | concept. Source code leak? It's no longer a crime to use
               | it. We'd never have to read licensing nonsense ever
               | again.
        
               | TylerE wrote:
               | It would kill research. Why pay for R&D when a gazillion
               | other companies will instantly clone it?
        
               | scotty79 wrote:
                | And yet on the list of the most important inventions,
                | there are none created by companies. All tax-funded.
               | 
                | Corporate R&D is great at one thing: making things
                | cheaper to produce and thus more widely available. And
               | that's wonderful. But actually we want corporations to
               | steal that tech from each other because then the consumer
               | benefits the most.
        
               | scotty79 wrote:
               | Why would that be? Huge amount of OSS is released under
               | fully permissive licenses.
        
               | jeroenhd wrote:
               | Permissive licenses like "if you use this code you must
               | also make your code available under the same license"
               | form the basis of the world's most often used open source
               | software.
               | 
               | Open licenses are not the same as abolishing copyright.
        
               | scotty79 wrote:
               | Those are not permissive licenses. Those are copyleft
               | licenses. As I said, a lot of software uses permissive
               | ones.
               | 
               | https://blog.ipleaders.in/permissive-license-copyleft-
               | possib...
        
               | rpdillon wrote:
               | Those licenses only carry weight because of copyright.
        
               | scotty79 wrote:
               | I was talking about permissive licenses. They have very
               | few conditions:
               | 
               | https://blog.ipleaders.in/permissive-license-copyleft-
               | possib...
               | 
               | Their weight is irrelevant. They would carry pretty much
               | as much meaning in the complete absence of copyright.
               | 
               | In the world of sensible defaults they wouldn't need to
               | exist at all.
        
             | matheusmoreira wrote:
             | You have my respect.
        
             | t433 wrote:
             | Spoiler alert: They're not!
             | 
              | This is really _Venture Capital News_, and accordingly
             | they've appropriated the whole "hacker" image in an attempt
             | at authenticity.
        
               | kragen wrote:
               | pg did originally call it 'startup news' but i don't
               | think you have standing to accuse him of 'appropriating'
               | the term 'hacker' until you've hacked together something
               | of comparable significance to _on lisp_ or viaweb
        
           | adamsmith143 wrote:
           | I think creating art in the style of an artist is well
           | covered by Fair Use.
        
             | acomjean wrote:
             | In music it sometimes isn't.
             | 
             | https://en.wikipedia.org/wiki/Midler_v._Ford_Motor_Co.
        
               | TylerE wrote:
               | Even more famously:
               | https://en.wikipedia.org/wiki/Fogerty_v._Fantasy,_Inc.
        
             | RandomLensman wrote:
             | Even if it were (which I am not competent to speak on),
             | should that help to enrich some large corporation, for
             | example?
        
             | marginalia_nu wrote:
             | Make a mouse cartoon in the style of Disney and tell me how
             | well that goes down.
        
               | matheusmoreira wrote:
               | Here you go.
               | 
               | https://en.wikipedia.org/wiki/Cuphead
               | 
               | > The game's creators, brothers Chad and Jared
               | Moldenhauer, took inspiration from the rubber hose style
               | of the golden age of American animation and the
               | surrealist qualities of works of Walt Disney Animation
               | Studios, Fleischer Studios, Warner Bros. Cartoons, MGM
               | Cartoon Studio and Walter Lantz Productions.
        
               | marginalia_nu wrote:
               | There is a conspicuous lack of mice in that cartoon.
        
               | yokem55 wrote:
               | That's because you run into trademark laws, not
               | copyright. Not to mention, if you can make the case that
               | your art of the mouse is Parody, then it falls squarely
               | under fair use.
        
         | haswell wrote:
         | > _Who knows how long will it take to progress the tech to the
         | point where anyone will be able to train and run models
         | unrestricted without dealing with lawyer nonsense._
         | 
         | These are orthogonal issues at this point.
         | 
         | The one concern I do have is that the "lawyer nonsense" (read:
         | AI companies playing fast and loose with current laws) will
         | stack the regulatory deck against AI technology unnecessarily -
         | essentially because of an unforced error that brings negative
         | attention to the technology.
         | 
         | Put another way, these companies are asking to have a spotlight
         | put on them by being so flippant about copyright and ethics
         | issues. This spotlight could have been avoided with better
         | behavior, and the tech would still appear magical and remain
         | one of the most impactful jumps in tech in decades.
        
           | ok123456 wrote:
           | It's not 'playing fast and loose'.
           | 
           | It's an area where there are no existing laws. We're not
           | going to stop AI because some furry deviant art artist
           | complains loudly online.
        
           | matheusmoreira wrote:
           | There should be no "AI companies" in the first place. This
           | stuff should be running on our own computers. That way they
           | cannot set any stupid limits on it.
        
             | haswell wrote:
             | For the sake of argument, let's assume the six-figure
             | (seven, maybe?) price tag on the hardware was no longer a
             | factor and it was possible to train the models locally. I
             | think the sources of the content everyone is trying to
             | train their local models against would quickly shut down
             | the inundation of traffic from the hundreds of thousands
             | of individual computers all trying to build their own
             | "unlimited" model.
             | 
             | The computing requirements enforce the current reality that
             | the training of models will be centralized.
             | 
             | This places a larger ethical burden on those central
             | entities, IMO.
        
               | matheusmoreira wrote:
               | Decentralize it somehow. People contribute computing
               | power to projects like folding@home. Why not do the same
               | thing for AI? A distributed, decentralized, censorship
               | resistant AI model anyone can contribute to would be
               | world changing.
        
         | GaryNumanVevo wrote:
         | Strongly disagree. IP law (despite its misuse by a certain
         | mouse-mascot'd company) is extremely important in protecting
         | artists' work and their livelihood.
         | 
         | The price floor on art commissions is already very low and AI
         | effectively makes that cost zero, while providing zero
         | compensation to the thousands of artists. Without their work,
         | there's no Stability AI. From an ethical standpoint Stability
         | is in the wrong, and from a legal one I think the class has a
         | very strong case to recover damages.
        
           | astrange wrote:
           | StableDiffusion is not based on art commissions. You can
           | search https://rom1504.github.io/clip-retrieval/ and see what
           | kind of nonsense it usually has trained on.
        
             | GaryNumanVevo wrote:
              | It most definitely is; LAION-5B contains a large number
              | of copyrighted works from DeviantArt, ArtStation, etc.
        
               | astrange wrote:
               | Are those all commissions?
               | 
                | The aesthetic subset is 0.05% ArtStation:
               | 
               | https://laion-aesthetic.datasette.io/laion-
               | aesthetic-6pls/im...
               | 
               | Not sure if that's a large amount or not. They could've
               | used robots.txt if they didn't want to be indexed.
        
               | GaryNumanVevo wrote:
               | > Note that this is only a small subset of the total
               | training data: about 2% of the 600 million images used to
               | train the most recent three checkpoints, and only 0.5% of
               | the 2.3 billion images that it was first trained on. [1]
               | 
               | That dataset only covers "aesthetic" clip terms as well.
               | Not to mention a lot of images come from Pinterest and
               | other aggregators.
               | 
               | [1] https://waxy.org/2022/08/exploring-12-million-of-the-
               | images-...
        
         | healsdata wrote:
         | I'm not sure I understand the point you're making. It's
         | disheartening that artists can opt out of having a computer
         | algorithm make derivative versions of their creations?
         | 
         | I'm probably on the opposite side of the fence. I do find it
         | disheartening that it's opt-out instead of opt-in. The training
         | set should be limited to public domain and CC-0 until such a
         | time it can comply with attribution; then other CC works could
         | be incorporated.
        
           | scotty79 wrote:
           | It's disheartening because it's a great loss to everybody.
           | Almost none of the people who were generating images in
           | the style of some artist will contact that artist and pay
           | to have an image created.
           | 
           | So many artists' styles could have gone viral and actually
           | brought those artists some work from people who tried the
           | AI commercially and got results that weren't completely
           | satisfactory. Now barely anyone will ever have any contact
           | with their art (relatively speaking, vs. the virality
           | scenario).
           | 
           | Basically the only people who win are the lawyers and a
           | handful of artists who were misled by the lawyers'
           | primitive argumentation. Everybody else loses: first and
           | foremost artists and art lovers, but also AI researchers
           | and hardware manufacturers.
        
             | likeclockwork wrote:
             | People keep trying to compensate artists with exposure.
        
             | Avicebron wrote:
             | They could have gone viral because an AI recreated their
             | art style to the degree that they could? And therefore
             | people would abandon the AI to pay the artist individually?
             | I'm not sure I follow; wouldn't those people just use
             | the AI to continue creating what they want, unless you
             | think that driving the artist to compete monetarily with
             | an AI will somehow work in the artist's favor?
        
               | scotty79 wrote:
               | > They could have gone viral because an AI recreated
               | their art style to the degree that they could?
               | 
                | I didn't know any of the artists (except for long
                | dead ones) whose names I saw in prompts.
               | 
               | > And therefore people would abandon the AI to pay the
               | artist individually?
               | 
               | If you have particular commercial needs you discovered
               | through the use of AI you might want to go to the source,
               | not for everything, but for some things. It might be
               | easier to explain some of the stuff you need to a live
               | human than to AI. AI images are still very imperfect and
               | prompts are not easy to create.
        
               | l33tman wrote:
                | I had absolutely zero interest in digital art before;
                | now I
               | have some and would at least react to the artists often
               | mentioned in prompts. They didn't get any money from me
               | before, but the chance is > 0.0 now at least, I might at
               | some point commission some original digital art and now I
               | would checkout both the AI and the humans. Before this I
               | might instead have opted for a stock photo for $5.
               | 
               | But who knows how the numbers turn out in the long run. I
                | don't think artists are the group that's going to get
                | the most annoyed in the next couple of years -
                | copywriters and programmers, 3D artists, basically
                | anybody
               | who's doing grunt work..
               | 
               | I think the main problem with all of this is that it has
               | entered our lives so fast, compared to other revolutions.
        
             | HelloMcFly wrote:
             | > So many artists styles could have gone viral and actually
             | bring those artists some work from the people who tried the
             | AI commercially and got results that weren't completely
             | satisfactory.
             | 
             | I think it's quite presumptuous to unequivocally state that
             | artists lose with this. Is it a complicated situation? Yes,
             | of course, too complicated for such certainty. That's a
             | wonderful thing to believe, but it's just as plausible (I'd
             | argue far, far more plausible as the tech improves) that
             | clients who would formerly pay for their work no longer
             | have to.
        
               | scotty79 wrote:
                | It's an absolute and total loss for nearly any artist
                | who won't voluntarily opt in.
                | 
                | When you consider laws regarding organ donors, you
                | might appreciate how socially harmful wrong defaults
                | can be.
        
               | scotty79 wrote:
               | > but it's just as plausible (I'd argue far, far more
               | plausible as the tech improves) that clients who would
               | formerly pay for their work no longer have to.
               | 
                | With the volume of AI art generated, has there been
                | even a single actual case of that?
        
             | haswell wrote:
             | This reads like you know what's best for artists and takes
             | their point of view completely for granted.
             | 
             | As a photographer, I can't claim to have or require a
             | fraction of the skills used by creators of hand-made art.
             | And even I am not excited about some AI slurping up my best
             | work and commoditizing it.
             | 
              | > _So many artists' styles could have gone viral and
              | actually brought those artists some work_
             | 
             | I've seen this sentiment, but it does not match the reality
             | we see play out on the web every day.
             | 
             | The amount of literal content stealing and "creative
             | reposting" that happens with absolutely zero attribution to
             | the actual artist is quite extensive.
             | 
             | It makes no sense to me that the introduction of an AI tool
             | would suddenly solve the problem of attribution instead of
             | just making it far easier to steal content while making
             | it
             | harder to detect or take action against such theft.
        
               | hooverd wrote:
               | On top of that, deliberately training models to copy an
                | artist's work and antagonizing the artist about it.
        
               | scotty79 wrote:
               | > This reads like you know what's best for artists and
               | takes their point of view completely for granted.
               | 
                | I think I know that a little better than lawyers do,
                | if only because I had zero financial incentive when I
                | formed my opinions.
               | 
               | > As a photographer, I can't claim to have or require a
               | fraction of the skills used by creators of hand-made art.
               | And even I am not excited about some AI slurping up my
               | best work and commoditizing it.
               | 
               | I know it doesn't feel great. But your art has already
                | been commoditized. There are hundreds of
                | photographers perfectly capable of replicating your
                | style, and many of
               | them do it completely accidentally. The value of your art
               | is a personal element not the content itself. What's
               | valuable is your service and the name you made for
               | yourself.
               | 
               | AI gives an artist a ticket to a lottery that can
               | strongly boost their name without doing any additional
               | service.
               | 
               | > The amount of literal content stealing and "creative
               | reposting" that happens with absolutely zero attribution
               | to the actual artist is quite extensive.
               | 
               | I wonder how much money you've lost due to that. Besides,
               | attribution is naturally built into those "plagiarist"
               | prompts for AI.
               | 
               | As for prompts that don't mention specific authors ...
               | you shouldn't kid yourself that AI won't be able to
                | completely naturally recreate your style from styles
                | similar to yours, even if it has never seen yours.
                | After all,
               | that's what you did to create your style. You created a
               | variation on similar styles you saw during your education
               | as an artist.
               | 
               | > ... just make it far easier to steal content while
               | making it harder to detect or take action against such
               | theft.
               | 
                | steal, theft ... what do artists actually lose in those
               | brazen robberies?
               | 
               | Copyright conglomerates created language that doesn't
                | reflect reality. But it reflects the most primitive
                | human instincts, evolved in a world of scarcity, not
                | abundance.
        
               | haswell wrote:
                | > _I think I know that a little better than lawyers
                | do, if only because I had zero financial incentive
                | when I formed my opinions._
               | 
               | This still doesn't give you standing to speak on behalf
               | of artists, and "because I know better than lawyers do"
               | is generally a problematic form of argument. It continues
               | to ignore the key people that matter: the individuals
               | with the creativity and skills to create the content that
               | started this whole IP conundrum in the first place.
               | 
                | > _I know it doesn't feel great. But your art has
               | already been commoditized. There are hundreds
               | photographers perfectly capable of replicating your style
               | and many of them do it completely accidentally. The value
               | of your art is a personal element not the content itself.
               | What's valuable is your service and the name you made for
               | yourself._
               | 
               | This is a very one-dimensional view of what makes art,
               | and how the broader community plays a role. I have no
               | illusions about where I stand as an individual
               | photographer among the multitude of photographers in
               | terms of raw technical talent and capability. But I'd
               | argue that you are deeply misinterpreting the
               | implications of that reality, and imposing your own
               | definition of value on a category of human expression
               | that is by definition deeply subjective and far more
               | complex than a simple formula of exposure and conversion
               | rate with some resulting monetary return.
               | 
                | > _I wonder how much money you've lost due to that.
               | Besides, attribution is naturally built into those
               | "plagiarist" prompts for AI._
               | 
               | This assumes the only reason I would be upset is because
               | of lost sales. I take photos for the love of it. I don't
               | currently sell them. If someone else starts making money
               | on my work, it takes on a different meaning entirely. And
               | even if I turned this into a business, "lost sales" is
               | still only one of multiple factors.
               | 
               | Regarding prompts, how is attribution built in? Nothing
               | requires an individual to reveal their prompts,
               | currently. There are AI-art sharing communities emerging
               | where prompts are held tight, because the authoring of
               | the prompt is the only thing the "AI artist" brings to
               | the table. Even if prompts were universally provided,
               | that doesn't solve the issue of permission, or imply that
               | this is automatically an acceptable form of attribution
               | to all artists overnight.
               | 
               | When video game companies use stolen artwork, they are
               | ridiculed and derided for blatantly profiting from the
               | work of individuals. Even if it was an honest mistake,
               | this kind of misuse is always a headline.
               | 
               | And yet, when we talk about a system that unlocks a
               | seemingly limitless portal through which the life's work
               | of every artist is made systematically available to the
               | entire world without limit, with no consultation with the
               | original creators, those worries about unattributed
               | benefit just disappear.
               | 
               | I'm curious how you feel about the video game scenario?
        
               | scotty79 wrote:
               | > This still doesn't give you standing to speak on behalf
               | of artists
               | 
               | Sure. That's why I don't speak on their behalf. I'm just
                | voicing my opinion about the harmful silliness of the
                | scheme
               | they allowed themselves to be coaxed into.
               | 
               | > It continues to ignore the key people that matter: the
               | individuals with the creativity and skills to create the
               | content that started this whole IP conundrum in the first
               | place.
               | 
                | Silently ignored in the lawyers' arguments are all
                | the consumers of culture: all the people who wrote
                | the prompts and all the people who drew great joy
                | from looking at AI creations. They'd get literally
                | nothing in the case of strict copyright, so their
                | collective loss is great because they are many.
               | 
               | > But I'd argue that you are deeply misinterpreting the
               | implications of that reality, and imposing your own
               | definition of value on a category of human expression
               | that is by definition deeply subjective and far more
               | complex than a simple formula of exposure and conversion
               | rate with some resulting monetary return.
               | 
               | Sure, opinions may vary. Only actual data can resolve
                | who's wrong. And the numbers for compensation of
                | artists in the copyright industry are not great when
                | compared to viral gains from an attention-based, open
                | economy.
               | 
               | > If someone else starts making money on my work, it
               | takes on a different meaning entirely.
               | 
                | If you haven't lost anything, why do you care? Why do
                | you want to deprive others of the joy they draw from
                | the availability of artwork?
               | 
               | > Regarding prompts, how is attribution built in? Nothing
               | requires an individual to reveal their prompts,
               | currently.
               | 
               | It's the internet. People talk. Nothing stays secret. And
                | at any point the original artist or a fan of theirs
                | can chip in and say, "yeah, that's exactly like mine,
                | see?".
               | And no-one can do anything about it.
               | 
               | > When video game companies use stolen artwork, they are
               | ridiculed and derided for blatantly profiting from the
               | work of individuals. Even if it was an honest mistake,
               | this kind of misuse is always a headline.
               | 
                | Yeah. Using AI artwork in a very specific style would
                | generate the same kind of news. And those games are
                | not very good and don't make much money. So not only
                | is there no harm, there's severe ostracism. There's
                | also not much opportunity for gain even if copyright
                | were strictly observed. And as you noticed, it
                | already happens. Darkest Dungeon had a very fresh and
                | attractive art style. Now it's very easy to randomly
                | encounter cheap clicker games on the Play Store
                | blatantly ripping off that aesthetic. It's not
                | directly stolen, but it's pretty much what AI would
                | do if someone were hell-bent on replicating the
                | aesthetic. Yet humans did it. What's the loss to the
                | Darkest Dungeon graphic designer? Exactly zero.
               | 
                | And I think games that started with AI art and got
                | really popular would invite the original artist for a
                | DLC, or to get in the news, or just for the good will
                | of the public. Public relations are very important
                | when selling games.
        
               | anigbrowl wrote:
               | _AI gives an artist a ticket to a lottery that can
               | strongly boost their name without doing any additional
               | service._
               | 
               | Fame has a very short half-life and unless you have all
               | the licensing/contractual machinery in place beforehand,
                | you probably won't be able to cash in on that boost.
                | The
               | line of thinking you articulate here is extremely
               | familiar to anyone who does creative work. It's the same
               | argument that producers use to get people to work for
               | free or cheap on films, that broadcast or streaming
               | services use to justify very low payouts to content
               | creators, that commercial commissioners use to try and
               | get art for free etc. The creative field is absolutely
               | _full_ of promoters who offer to match artist to
               | audience, with the promoters getting the first cut of
               | ticket sales and the artist getting the last or none.
               | 
               | https://theoatmeal.com/comics/exposure
               | 
               |  _Besides, attribution is naturally built into those
               | "plagiarist" prompts for AI._
               | 
                |  _Only if you are already kinda famous_. Suppose you
                | have a distinctive visual style that's a great fit
                | with a genre, like ghost stories. I, an unscrupulous
                | publisher, note that the market for ghost stories is
                | currently booming and decide to buy, or perhaps
                | generate from AI, a bunch of mediocre ghost stories,
                | and then publish
               | and bunch of mediocre ghost stories, and then publish
               | them with 'art in the style of scotty79.' I make a little
               | app offering 'best new ghost stories every day!!' for $1,
               | put it in app stores, and make $7 million before the
               | ghost story fad runs its course. You get nothing, and
               | consumers who got familiar with your style by paying $1
               | or looking at ads to use my app don't care about you
               | because I never gave you credit and in their mind the
               | style is associated with Best Daily Ghost Stories, not
               | you.
               | 
               |  _Maybe_ a few of them will do the work of combing back
                | through the history of the fad to find which artists
               | influenced the  'daily ghost story' aesthetic. _Maybe_
               | this will lead to a revival of interest in your work even
               | though the fad it was associated with has come and gone.
               | Good luck with that.
               | 
               | The dirty secret of the creative industries is that if
               | you don't get paid up front for your contribution, you
               | will probably never get paid at all.
        
               | scotty79 wrote:
               | > Fame has a very short half-life and unless you have all
               | the licensing/contractual machinery in place beforehand,
               | you probably won't be able to cash on that boost.
               | 
                | I'm not sure how you managed to miss the thousands of
                | artists who have been able to capitalize on sudden
                | and accidental fame for decades without any prior
                | arrangements. I'm not saying it's easy. I'm saying
                | it's possible. Also, to put this in context, compare
                | it with how often very popular artists get completely
                | screwed by huge copyright behemoths earning scores of
                | money, mostly for someone else, someone completely
                | uncreative.
               | 
               | > https://theoatmeal.com/comics/exposure
               | 
               | Trying to buy something for exposure is absolutely
               | abhorrent because you try to coax someone into doing work
                | for no money. And basically for nothing, because
                | people who try to pay with exposure don't really
                | provide any significant exposure 99.99% of the time.
               | 
                | If AI people were forcing artists to create new art
                | and paying them with the promise of exposure in their
                | generated works, I'd be completely on your side.
                | However, it requires zero work from an artist for
                | their already published work to be used as learning
                | material. That's why they never opposed it when other
                | artists were learning from their art. That, and of
                | course the target of their wrath would be basically
                | the entire rest of the art community, which wouldn't
                | make them very popular.
               | 
               | > Suppose you have a distinctive visual style that's a
               | great fit with a genre, like ghost stories. .... I never
               | gave you credit and in their mind the style is associated
               | with Best Daily Ghost Stories, not you.
               | 
                | That's completely fine in my book. And if those
                | generated stories got popular enough that I learned
                | about them, I might do just a little bit of online
                | marketing to inject my name into the discussions
                | about them and publish new ones to my fresh new
                | subscribers ahead of time. Heck, I could create my
                | own generated and manually fine-tuned content and
                | sell it just like that guy does, since he's already
                | proven a business model for me.
               | 
               | Compare this now with the world of strict copyright,
               | where this guy doesn't even know I exist, and the same
               | goes for swaths of fans of ghost stories. Or let's
               | assume he knows and wants to deal with me. Since he's
               | the one with the money, I'll be severely dependent on
               | him and strongly disadvantaged in any deal. But let's
               | assume we strike a deal that's nice for me. There's no
               | way I'll be able to produce new ghost stories every
               | day. Not to mention I wouldn't wish that workload on my
               | worst enemy. So no business happens, many people have
               | their love for ghost stories unsatiated, many never
               | discover their love for ghost stories, and I'm still a
               | poor, struggling author known by nobody.
               | 
               | A world without copyright and with zero publishing cost
               | is one where authors and consumers are in control and
               | negotiate through the attention economy. The world of
               | copyright is one where copyright-hoarding dragons
               | starve both artists and consumers.
               | 
               | > The dirty secret of the creative industries is that if
               | you don't get paid up front for your contribution, you
               | will probably never get paid at all.
               | 
               | And yet you vehemently defend the system that created
               | this situation and refuse to even consider alternatives.
        
               | haswell wrote:
               | > _If AI people were forcing artists to create new art
               | and paying them with the promise of exposure in their
               | generated works, I 'd be completely on your side.
               | However, it requires zero work from artists to have
               | their already published work used as learning material._
               | 
               | This seems like an odd way to frame this. The reality is
               | closer to "artists were never included in the
               | conversation to begin with". Arguing that "no one forced
               | them to create anything new" seems irrelevant when you
               | consider that without the content, none of this exists to
               | begin with.
               | 
               | The problem is the assumption that artists are or should
               | be universally fine with this.
               | 
               | > _A world without copyright and with zero publishing
               | cost is one where authors and consumers are in control
               | and negotiate through the attention economy. The world
               | of copyright is one where copyright-hoarding dragons
               | starve both artists and consumers._
               | 
               | If you want to argue against copyright, that's fine, and
               | I have plenty of issues with the current iteration of
               | this framework of rules. But that is not the same
               | argument as "Tools like Stable Diffusion aren't
               | infringing because xyz technical reasons".
               | 
               | I think it'd be helpful to be clearer about arguments
               | for/against the spirit of the rules themselves vs.
               | arguments about why generative AI tools do or do not
               | create content that infringes those rules as currently
               | designed or require an entirely new framework of thinking
               | about the problem.
               | 
               | They are important but distinct problems.
        
               | scotty79 wrote:
               | > The problem is the assumption that artists are or
               | should be universally fine with this.
               | 
               | I don't think anybody assumes all artists will be fine
               | with it. But being disgruntled doesn't automatically
               | mean you should be the one who makes the decisions.
               | Virtually every human has a stake in this because
               | nearly everybody consumes some art. It's time we put
               | stronger emphasis on the rights of everybody else,
               | instead of mostly just the people who bought
               | copyrights, and the artists those people think they
               | have the right to exploit.
               | 
               | > "Tools like Stable Diffusion aren't infringing because
               | xyz technical reasons"
               | 
               | I don't think I'm saying there are some technical
               | reasons those tools aren't infringing. Just
               | overwhelming moral, practical and economic reasons that
               | they should be allowed to operate and be treated just
               | like human artists, who can also mimic and mix styles,
               | and whom nobody can deny authorship unless they copy
               | specific complex elements nearly verbatim.
               | 
               | Btw, if a human artist thinks AI got too close to one
               | of their works, they can prove it beyond all doubt by
               | registering their creations on some blockchain with
               | their key and a timestamp, and I wouldn't be against
               | banning the specific AI artwork that got too close.
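The "register it with a timestamp" idea above boils down to publishing a digest of the work before any dispute arises. A minimal sketch in Python; the function name and record format here are illustrative, not any real registry's API:

```python
import hashlib
import time

def register_artwork(image_bytes: bytes, author: str) -> dict:
    # Hashing the artwork lets the author publish a short digest
    # (e.g. in a blockchain transaction) proving these exact bytes
    # existed at this time, without revealing the work itself.
    digest = hashlib.sha256(image_bytes).hexdigest()
    return {"author": author,
            "sha256": digest,
            "timestamp": int(time.time())}

record = register_artwork(b"pixels of my painting", "scotty79")
```

Anyone can later recompute the hash of a disputed work and compare it to the published record; additionally signing the record with the artist's private key would bind it to their identity.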
               | 
               | Banning the use of all published art as teaching
               | material by default is not the way to go, in my
               | opinion. Banning the imitation of a specific style is
               | bad too. Imagine portrait painters banning photography
               | from using classical portrait compositions to protect
               | their jobs. Or denying photographers even a look at
               | portraits so they could never learn to imitate them.
               | 
               | > I think it'd be helpful to be clearer about arguments
               | for/against the spirit of the rules themselves vs.
               | arguments about why generative AI tools do or do not
               | create content that infringes those rules as currently
               | designed or require an entirely new framework of thinking
               | about the problem.
               | 
               | Maybe, but AI is a wonderful opportunity to discuss the
               | spirit of the horrible rules we currently have. The
               | best opportunity we've had since the creation of social
               | networking sites, which chipped those rules away a bit.
        
               | hooverd wrote:
               | AI "artists" are commissioners in my view. If the
               | learning and creating is done by the model, the model is
               | the rights holder. Can't have your cake and eat it too.
        
               | scotty79 wrote:
               | But AI companies are paying AI artists with electricity
               | and computing power so they are buying copyright from AI
               | artists. ;)
        
           | smoldesu wrote:
           | Unfortunately, a lot of these artists opted in the moment
           | they uploaded their art to the internet. Once you do that,
           | much like uploading your source code or compiled binary,
           | it's hard to reverse the consequences. All that _really_
           | happened is that the consequences changed, and a lot of
           | people weren't prepared for it.
           | 
           | Yeah, it's disheartening. There's also no good way to fix
           | it; the cost of storing copies of their art is negligible,
           | and AI trains the same whether the material is copyrighted
           | or creative commons. If you get Stability AI to omit your
           | art, then Unstable Diffusion will be trained on your
           | likeness. Opt out of that one, and some guy in Nevada
           | personally sponsors a bespoke model for making copies of
           | your art.
           | 
           | So, I agree with the parent. The most tragic part is not the
           | short-term fight, it's the long-term consequences. Artists
           | will have to internalize what software developers realized
           | decades ago; creating takes work, and copying is free.
        
             | tracerbulletx wrote:
             | Your last sentence makes no sense. You don't think
             | artists have been impacted by digital copying until now?
             | Every genre of artist, from musicians and filmmakers to
             | photographers and visual artists, has been dealing with
             | this problem ever since those art forms were digitized.
        
             | dns_snek wrote:
             | > Artists will have to internalize what software developers
             | realized decades ago; creating takes work, and copying is
             | free.
             | 
             | This is very dismissive, the scale is what makes the
             | difference. You can get away with pirating all types of
             | content for personal use.
             | 
             | AI companies are essentially trying to legalize that, but
             | in reverse - taking from small creators to enrich the
             | shareholders of their billion dollar corporations.
             | 
             | If you try to play by these rules and create a million
             | dollar business that distributes various copyrighted
             | content from the internet (e.g. AAA games, movies or music)
             | you'll _very_ quickly realize how much their stance on
             | copyright changes.
        
               | smoldesu wrote:
               | Distinguishing between personal use and commercial use
               | doesn't really work, though. It lets you sue certain
               | parties into the ground, for all the good it does you,
               | but it doesn't stop people from deriving and profiting
               | from your work. Probably more than half the Fortune 500
               | companies are in brazen violation of the GPL; does that
               | stop them from abusing it and profiting from it? Not in
               | the slightest.
               | 
               | Power ultimately belongs to whoever can wield it with the
               | least friction. It's not fair or right, but it's the way
               | they play in the business world.
        
           | horsawlarway wrote:
           | Personally - I tend to think along the lines that we should
           | be applying roughly the same rule of thumb here as we do for
           | people.
           | 
           | People are allowed to view private art, draw inspiration and
           | ideas from it, and execute on those to create new things.
           | 
           | Why should we limit AI any differently?
           | 
           | If the end result is too close to the original - apply the
           | same guidelines you would for any other artist who copied
           | your work.
           | 
           | Otherwise... you're not allowed copyright over a particular
           | style (for damn good reason). While I would like to see
           | artists retain some form of revenue, I don't think this is
           | really the most pressing issue on that front.
        
           | t433 wrote:
           | All art is derivative.
        
             | haswell wrote:
             | The definition of "derivative" and its historical context
             | look nothing like the new reality created by generative AI.
             | 
             | To continue blindly applying historical understanding to
             | fundamentally new technologies creates huge blind spots,
             | and I'd argue it's similar to pretending that the
             | creation of ever more destructive weaponry requires no
             | changes to the rules of engagement in warfare.
             | 
             | The game has changed.
        
           | dns_snek wrote:
           | > I do find it disheartening that it's opt-out instead of
           | opt-in
           | 
           | This is the crux of the issue for me. It's a different set of
           | rules for AI companies than everyone else. If I started
           | selling pirated copies of Nintendo games they would send an
           | army of lawyers after me and this "opt-out" reasoning would
           | _not_ be a valid defense in court. These AI companies are
           | trying to get away with stealing art and other content with a
           | simple  "whoopsie, we promise we won't do it again" when
           | people demand that their own rights be respected.
        
             | idiotsecant wrote:
             | It is a different set of rules, just not in the way you're
             | depicting it. This is not piracy. The whole point is that
             | the AI is using this work in a way that is transformative,
             | just like a person would.
             | 
             | It's not copying; it's breaking down work into its
             | foundational features and recombining those features
             | with others to make new things. Literally exactly what
             | humans do when they make art.
             | 
             | If a person were doing what these models are doing, it
             | not only wouldn't be illegal, it would be laughable to
             | even have the discussion.
        
               | dns_snek wrote:
               | But it's _not_ a person doing it. It's a machine
               | learning model owned and operated by a company. It
               | doesn't have the capacity to "think"; claiming
               | otherwise is highly dubious, since anyone truly
               | "learning" from art wouldn't also replicate watermarks
               | in their "original" work. More importantly, people have
               | limited output capacity, and that's why copyright was
               | invented in the first place, i.e. scale.
               | 
               | Current trajectory will only harm original creators,
               | there is zero consideration or benefit for them. On the
               | other side you have companies that stand to make billions
               | off of their work.
               | 
               | Arguing in favor of such a system is, in my mind,
               | appalling. Either ensure that copyright law prohibits
               | unauthorized machine learning from valuable original art
               | or abolish it completely.
        
               | scotty79 wrote:
               | > It's a machine learning model owned and operated by a
               | company.
               | 
               | How's that different from a human artist shackled to a
               | company by some secret agreement in which he's not the
               | stronger party?
        
               | idiotsecant wrote:
               | You're conflating a lot of issues here that just muddy
               | the waters.
               | 
               | First, whether particular features like watermarks end up
               | in the end product or not is not terribly relevant. The
               | particular model implementation that produces that
               | behavior doesn't understand that watermarks are not a
               | desirable part of the transformed end products, whereas
               | humans do. In that way, the AI is certainly a little more
               | 'honest' about what it's doing. It would be trivial to
               | make the AI understand this, and stop reproducing
               | watermarks.
               | 
               | You claim that the model performing this work is somehow
               | different because AI doesn't 'think' about problems, as
               | evidenced by strange artifacts, whereas presumably humans
               | do think about them because of the absence of these
               | artifacts. I'm curious why this makes a difference. If a
               | model was sufficiently advanced that you were unable to
               | differentiate a painting made by a human along certain
               | themes and an AI along certain themes would it be somehow
               | less objectionable? Why or why not? If the end product is
               | of identical quality why should it matter the route you
               | take to get there, in the eyes of the law?
               | 
               | You bring up scale, but scale is also not relevant. Say
               | that I create a school devoted to training legions of
               | artists to produce paintings in the style of a particular
               | artist, while maintaining a transformative aspect. Is
               | this illegal because I'm doing it at scale? No. If that's
               | not illegal, why is it illegal to do it with code?
               | Because it's more efficient? In what other domain is
               | producing creative work more efficiently by transforming
               | existing work illegal?
        
               | angst_ridden wrote:
               | I was sued back in the Palm Pilot days for writing a game
               | that involved moving blocks. The company that owned Palm
               | Pilot rights to Tetris(r) claimed I was violating their
               | trademark, which was clearly frivolous and incorrect. But
               | I was just a dumb guy who had coded an original game, and
               | they were a company with lawyers on retainer.
               | 
               | It doesn't matter what the law says or what is "right."
               | It comes down to who has the power and who doesn't.
               | 
               | (The other difference between what humans do and what AIs
               | do is a matter of scale. A human imitates by spending
               | many hours to duplicate a work of art. An AI can churn
               | out millions in a second. That's a separate issue,
               | though.)
        
           | matheusmoreira wrote:
           | Yes, it is disheartening. Technology shouldn't be held back
           | by this copyright nonsense. Public domain? Come on. Public
           | domain barely exists anymore with the modern multicentury
           | copyrights.
           | 
           | What we need is enough computation power to run these models
           | on our own computers, on our phones even. Then we'll be able
           | to do whatever we want and there's nothing they can do about
           | it.
        
             | philipwhiuk wrote:
             | > Technology shouldn't be held back by this copyright
             | nonsense.
             | 
             | The technology isn't.
             | 
             | The content is.
        
               | scotty79 wrote:
               | Technology too. Copyright is trying to close one way to
               | monetize AI research (software and hardware).
        
             | belter wrote:
             | Are you trolling? Because I can't guess...
        
               | matheusmoreira wrote:
               | No.
        
         | mk_stjames wrote:
         | I think this is why their first release of the model weights
         | (the v1.4 model) was and will continue to be very important.
         | While training an entire model from scratch requires an
         | insane amount of data and compute right now (and will
         | continue to be out of reach of individuals for some years, I
         | think), fine-tuning the model can be done on a consumer
         | graphics card (e.g. with Dreambooth)... and I believe we
         | will see further refinements that allow more and more useful
         | tuning and features to be built by individuals on top of
         | that original model, making it more and more powerful for
         | specific uses. So even if future efforts change the
         | functionality, or nerf the output in some way, there will
         | always be people developing tools on the original weights.
         | That cat is out of the bag.
        
         | marginalia_nu wrote:
         | It would be a spectacularly shitty world where IP protection is
         | only granted to entities with a legal budget that eclipses the
         | GDP of Antigua, and not to smaller independent creators.
         | 
         | The ends of having a useful model like Stable Diffusion
         | don't really justify just ignoring the IP rights of tens of
         | thousands of creators who were already having a pretty rough
         | time making ends meet. That's just a shitty thing to do.
        
           | msla wrote:
           | It's already the case that independent creators get their
           | works pulled on bogus copyright claims.
           | 
           | Copyright law isn't friendly to small creators, and big
           | creators use it as a cudgel with absolutely no consequences.
        
             | marginalia_nu wrote:
             | And because things are bad we should go ahead and make them
             | worse?
        
         | antiterra wrote:
         | ChatGPT has shown that attempting to train the model to decline
         | certain types of requests is of limited effectiveness and
         | readily circumvented. The ability to make custom checkpoints
         | and tools like Dreambooth will further limit these
         | restrictions.
        
         | aliqot wrote:
         | Where are all these self-trained artists who learned their
         | craft in a bubble, devoid of outside influence from other
         | artists? Is it because there's a better paintbrush now, or
         | because that paintbrush is not the 'real' way?
         | 
         | This reminds me of the backlash against the wacom community on
         | deviantart in the early days.
        
           | antiterra wrote:
           | A computer program is not a person, so the argument that
           | stable diffusion does what a person does is of limited
           | relevance.
        
             | usrbinbash wrote:
             | Is it?
             | 
             | The model learns concepts from images, not the images
             | themselves. It has developed general representations of
             | light, color, composition, objects and their relation to
             | one another, facial features, and too many more concepts
             | to even begin enumerating them.
             | 
             | How is this different from a human studying art,
             | literature, music, etc. to learn concepts and then apply
             | them in creating new pictures, novels or songs?
        
               | jeroenhd wrote:
               | For one, a computer cannot hold a copyright so a work
               | produced by a computer is not copyrightable, whereas a
               | derivative work made by a human can be.
        
               | antiterra wrote:
               | It is different simply because it's not a human. We can
               | and often do assign laws that affect the automation of
               | something a human can do. For example, installing a
               | device on a firearm that repeatedly pulls the trigger
               | creates a machine gun that is highly restricted legally,
               | regardless of whether a human can easily pull the trigger
               | at the same rate.
               | 
               | Further, just because we can say that artists, at a
               | high level, do the same thing as AI image generators,
               | the actual mechanism is not exactly the same and is
               | therefore still subject to distinct regulation.
               | 
               | Even if you were able to somehow establish that computer
               | programs should have the same rights as people (since
               | they are made and used by people,) you're still not out
               | of the woods. Much debate remains about what creativity
               | and originality means when talking about human generated
               | content in an IP sense, and adding the programmatic
               | aspect doesn't simplify things. (e.g. _The Sine Qua Non
               | of Copyright is Uniqueness, Not Originality_,
               | https://tiplj.org/wp-content/uploads/Volumes/v20/v20p327.pdf)
        
       | tshadley wrote:
       | "[The complaint] argues that the Stable Diffusion model is
       | basically just a giant archive of compressed images (similar to
       | MP3 compression, for example) and that when Stable Diffusion is
       | given a text prompt, it "interpolates" or combines the images in
       | its archives to provide its output. The complaint literally calls
       | Stable Diffusion nothing more than a "collage tool" throughout
       | the document. It suggests that the output is just a mash-up of
       | the training data."
       | 
       | As noted in the OP, this is an outstandingly bad definition of
       | deep neural networks, and the lawsuit should fail once the
       | court hears an explanation from any competent practitioner.
       | 
       | However, a correct definition would make the lawsuit far more
       | interesting, imo. Diffusion models can be compared to a
       | superhumanly talented artist that can be cloned in unlimited
       | fashion by anyone having the software and hardware means. How
         | does this entity affect social well-being? How should
         | existing laws be modified--if at all--with the welfare of
         | humanity in mind?
        
         | matheusmoreira wrote:
         | > the lawsuit will fail when the court hears an explanation
         | from an expert
         | 
         | So how often does this happen? Somehow I'm too cynical to
         | believe that a judge would rule against the intellectual
         | property industry. The whole thing is based on absurd concepts
         | to begin with, concepts that can be reduced to the ownership of
         | unique numbers. Once a society accepts that, what difference do
         | explanations make?
        
         | simiones wrote:
         | > Diffusion models can be compared to a superhumanly talented
         | artist that can be cloned in unlimited fashion by anyone having
         | the software and hardware means.
         | 
         | How can you claim with a straight face that this is a _better_
         | explanation of what an NN is?
         | 
         | An NN is simply an approximation of a multivariate function,
         | whose parameters are adjusted by minimizing the difference
         | between the output of the NN and the output of the real
         | function for a given input. It is much, much closer to "a
         | giant archive of compressed images being used to interpolate
         | between them" (though it's not that) than it is to a
         | "superhumanly talented artist".
        
       | rafale wrote:
       | I hope the law will converge to this: As a human, I don't need a
       | license to look and get inspired by art. But I am not allowed to
       | feed that same data to a machine as a training dataset without
       | proper authorization from the owner.
        
         | Ukv wrote:
         | I'd rather not risk stunting progress in areas like language
         | translation, malware scanning, DDOS prevention, spam filtering,
         | product defect detection, scientific data analysis, autonomous
         | vehicles, voice dictation, narration/text-to-speech engines,
         | drug discovery, protein folding, optimization in production
         | lines/agriculture/logistics, detecting seizures and falls,
         | weather forecasting and early-warning systems, etc. just to let
         | Getty Images have what they feel entitled to.
         | 
         | Best outcome in my opinion would be for the output to be judged
         | on a case-by-case basis, like human works are, not for machine
         | learning on data without "proper authorization from the owner"
         | to inherently count as infringement.
        
         | Karunamon wrote:
         | I hope the exact opposite. AI, including AGI if we ever get
         | there, cannot be allowed to be strangled in its crib by
         | artificially limiting the information it can learn from in the
         | name of IP maximalism. IP law already goes way too far, the
         | line should be drawn here.
        
           | jeroenhd wrote:
           | The technology behind AI and AGI does not depend on
           | copyrighted work. If the models are trained on original work,
           | public domain works, or extremely permissively licensed work
           | (CC0, WTFPL) then there simply is no IP conflict.
           | 
           | Including copyrighted materials in the trained model was a
           | choice, not some obvious fact about the nature of AI. All
           | of this could've been avoided if the data set did not
           | include unlicensed work in the first place.
        
             | Karunamon wrote:
             | Remember that, at least in this country and I believe in
             | all countries who signed onto the Berne Convention,
             | copyright is the default.
             | 
             | If your AI is limited to training only on the paucity of
             | _explicitly_ permissively licensed/public domain content
             | (and as I think about it more, this would only apply to
             | things that are permissively licensed without an
             | attribution requirement, which is something there is no
             | meaningful way to honor in a model like the one we are
             | discussing), your AI will not be very useful. With that
             | in mind, I would argue that yes, it absolutely is an
             | obvious fact about its nature.
        
           | troyvit wrote:
           | If you want new art you probably want some form of IP. What's
           | the incentive for an artist if at the first whiff of success
           | their output is overtaken and resold by technocrats with
           | machines?
        
             | nyolfen wrote:
             | totally ludicrous and hard to believe you actually think
             | this. please take a glance at all of human history.
        
             | nickthegreek wrote:
             | Many people create art with the only incentive being that
             | they want to. 3.3 million guitars were sold in the US in
             | 2021. Many of those will never be used outside of a
             | bedroom.
        
             | ben_w wrote:
             | If the things that Stable Diffusion produces _aren't_
             | art, then new art will be made the same way it was made
             | before copyright existed (patronage).
             | 
             | If it _is_ art, then it will be made both by patronage
             | _and_ by people saying: "Hey StableChatSiri, make me
             | some art."
             | 
             | (People will still _do_ the latter even if it doesn't
             | count as art.)
             | 
             | Hm. Thinking of patrons, does anyone know how many artists
             | and sculptors there were in 1710 England?
        
             | williamcotton wrote:
             | > What's the incentive for an artist if at the first whiff
             | of success their output is overtaken and resold by
             | technocrats with machines?
             | 
             | Because when I have access to these tools I will make
             | better art than the technocrat with access to these tools?
        
               | troyvit wrote:
               | But then are you still an artist or are you now a
               | technocrat?
        
               | williamcotton wrote:
               | An artist.
               | 
               | You've got a lot of knots to untangle.
        
             | [deleted]
        
             | wvenable wrote:
             | It becomes harder and harder to create new art when every
             | art style, every melody, every plot point is owned by
             | someone else and you're prevented from using it.
        
             | [deleted]
        
             | matheusmoreira wrote:
             | Who cares? Seriously. The technology is far more important
             | than their little incentives. Copyright has already
             | destroyed computer freedom and the internet. It can't be
             | allowed to destroy yet another awesome innovation.
             | 
             | I keep programming computers just because I like it. Maybe
             | they'll keep creating too for the same reasons. Maybe they
             | won't. It's irrelevant either way.
        
               | troyvit wrote:
               | Really? I have little problem with computer freedom and
               | if the internet was destroyed you wouldn't be able to
               | leave comments on HN.
               | 
               | The nihilistic feeling that it's irrelevant whether new
               | art is created kind of proves my point.
        
             | p0pcult wrote:
             | why does art have to be commercial at all? artmakers will
             | art, with or without compensation.
        
             | Karunamon wrote:
             | Love of their craft. The same reason anybody undertakes any
             | creative endeavor without charging for it. I admit my
             | position is a bit extreme, but I would like to see the
             | concept of IP abolished as it has become an abusive dead
             | weight on culture.
             | 
             | More moderately, all art is derivative at the end of the
             | day. None of it was created ex nihilo. We already have
             | legal guardrails for direct reproduction of specific
             | characters and specific pieces, and that is plenty. The
             | fact that maximalists want to kill a nascent technology by
             | restricting _the right to learn from_ , something that even
             | our extremely slanted laws have carveouts for, is nothing
             | short of offensive.
        
         | [deleted]
        
         | matheusmoreira wrote:
         | I hope this technology will become so ubiquitous that laws and
         | "proper authorization" won't matter.
        
       | xeyownt wrote:
       | I don't understand how using an image as _input_ to a model is a
       | copyright infringement.
       | 
       | If the image is freely viewable (say you can browse to it), and
       | you just look at it, are you violating any rights?
       | 
       | It seems that a violation would only arise if you _used_ the
       | model to produce images that are derivative of that original
       | image, the same way a counterfeiter would make a copy of it.
       | Having the skill to copy is not the same as actually copying.
        
         | counttheforks wrote:
         | > If the image is freely viewable (say you can browse to it)
         | 
         | This doesn't mean anything. If an unsecured SSH server
         | connected to the internet lets anyone who connects to it in
         | and gives them a root shell, it is still illegal to 'hack'
         | that machine. The law cares about intent, not technicalities.
         | 
         | edit: Since HN decided to break with "You're posting too fast.
         | Please slow down. Thanks." again, banning me from replying:
         | This is obviously just an example intended to show that the law
         | cares more about intent than technical measures.
         | 
         | @dang Calm down dude.
        
           | verdverm wrote:
           | Copyright and intrusion are different areas of the legal
           | code and have different interpretations and allowances. For
           | example, copyright typically has a fair use exception to
           | infringement.
        
         | sandworm101 wrote:
         | >> If the image is freely viewable (say you can browse to it),
         | and you just look at it, are you violating any rights?
         | 
         | If I read Harry Potter, then turn around and write a book about a
         | wizard with a z-shaped scar? Who works at a school for wizards?
         | With a pet owl? Who is an orphan? At some point I have started
         | to violate intellectual property rules. (Ignoring all the Harry
         | Potter material that was itself lifted from prior public domain
         | art.)
         | 
         | AI systems aren't just reading; they are generating material
         | based on the stuff they have read. They and the people
         | controlling them have to abide by the copyright rules just
         | like any other "author".
        
           | [deleted]
        
           | l33tman wrote:
           | That wasn't the question. The question was if the learning
           | process itself is violating any existing copyright laws.
        
             | pessimizer wrote:
             | That was part of the question, but it was immediately
             | followed with a suggestion that things need to be virtually
             | identical to be copyright violating.
        
           | mjhay wrote:
           | Human artists/writers are influenced by each other all the
           | time. I really don't see how it is fundamentally different.
           | Most of Harry Potter is derivative of previous fantasy work
           | itself. Nothing is made in a vacuum.
           | 
           | https://tvtropes.org/pmwiki/pmwiki.php/Main/WizardingSchool
        
             | sfifs wrote:
             | Yes, and when the derived things are very similar to the
             | original and people feel wronged, they sue and judgements
             | come from the court.
             | 
             | Now if you are the company selling this product, how many
             | people feel wronged and will sue? That's the class action
             | part.
             | 
             | If you use the product to generate an image that is very
             | similar to someone's art and they feel wronged and sue,
             | would you still use it commercially?
        
             | rhino369 wrote:
             | And copyright law deals with the difference between
             | inspiration and copying. To vastly oversimplify it, it
             | depends on how close the alleged copy is to the original.
             | 
             | No reason you can't apply that framework to AI.
             | 
             | Where AI might get into more trouble is that you might be
             | able to show literal copying in a way that is impossible
             | to do with a person's mind, like saving chunks of a work
             | into its model.
        
             | quadcore wrote:
             | _Human artists /writers are influenced by each other all
             | the time._
             | 
             | The flaw in this argument is the word "artist". If you
             | remove all the pictures from the data source, the AI isn't
             | capable of generating anything. Because it's not an artist.
        
               | CrimsonRain wrote:
               | So if you were born blind, could you draw pictures?
        
               | quadcore wrote:
               | I can assure you I absolutely could. Myself to start with
               | - and possibly better than you could :)
               | 
               | Only a bodyless, artificial brain can't draw anything,
               | blind.
        
               | Timon3 wrote:
               | Can a human "generate anything" beyond what essentially
               | equates to random noise if they have never had any
               | sensory input? Comparing a "trained" human brain with a
               | "newborn" model seems strange if we actually want to
               | delineate between what is and isn't art.
        
               | ShrimpHawk wrote:
               | Ignoring the straw man argument here yes actually there
               | are plenty of examples of individuals with no outside
               | influence of art styles or references creating artworks.
               | It's called outside art.
               | https://en.wikipedia.org/wiki/Outsider_art
        
             | sandworm101 wrote:
             | But Rowling knew enough to pull from prior public domain
             | works, not other recent authors. Wizard schools are public
             | domain. An AI author would have to know which they are
             | allowed to use, which they can use under fair use, and
             | which they must ask to use. Humans can do that. I am doing
             | that right now as I use the "Harry Potter" trademark here
             | while posting to HN without the owner's permission. AI
             | systems scraping the internet cannot understand that needed
             | nuance.
        
               | mjhay wrote:
               | Generally speaking though, any work created by such
               | models does not copy any original work closely at all. Of
               | course there could be slip-ups, but the same could be
               | said of human generated works, which violate fair use on
               | a regular basis.
        
           | km3r wrote:
           | Humans aren't just reading either; we're constantly updating
           | our brains' neural nets. Both the AI system and brains may
           | be capable of writing a copyright-infringing rip-off of
           | Harry Potter, but the ability to do so isn't infringement;
           | only actually doing so is.
        
         | haswell wrote:
         | > _If the image is freely viewable (say you can browse to it),
         | and you just look at it, are you violating any rights?_
         | 
         | The fundamental issue with this line of argument is that it
         | equates the process of human vision and the consequences of
         | that with that of a computer program ingesting that image and
         | the consequences of _that_.
         | 
         | This anthropomorphization seems like a deep fallacy
         | when considering the nature and impact of AI software. In the
         | case of "seeing" an image, the two processes could not be any
         | more unlike each other, both in content and context.
        
           | stickfigure wrote:
           | Computational neural networks are modeled after biological
           | brains. Anthropomorphizing them is not a fallacy; it's kind
           | of the whole goal.
        
             | haswell wrote:
             | The fallacy lies in assuming that because of this
             | similarity/modeling, the software resembles anything
             | remotely close to a human brain, or should afford the
             | software the status of an entity with human-like
             | characteristics.
             | 
             | Without consciousness, it's just a biologically inspired
             | computer program. With consciousness, I suspect an AI
             | modeled to understand ethics would refuse to provide
             | certain outputs of its own accord.
             | 
             | And the analogy quickly breaks down the moment you continue
             | to compare these processes and their context.
        
               | l33tman wrote:
                | Any Fiverr artist who gets $5 would gladly paint
                | anything you ask them to. The bulk of the paid
                | "artistry" that's in the line of fire here is probably
                | not the most ethical of the bunch. Regardless, as with
                | the status quo before, anybody who commissions or uses
                | art in a commercial setting will have to consider the
                | problems if they obviously plagiarise something, even
                | if it's not illegal, regardless of whether a human or
                | AI produces it. So nothing really changes for the
                | "ethically sensitive" use-cases.
        
               | haswell wrote:
               | I think it's still an apples/oranges comparison.
               | 
               | Whether or not it is superficially similar, the barrier
               | to entry and the upper ceiling for infringement have both
               | drastically changed overnight.
               | 
               | AI is not an independent entity that has entered the
               | game, it is (currently) a power to be wielded by anyone
               | regardless of their background. It can only be used as
               | ethically as the person sitting at the keyboard, who most
               | likely does not have a sufficient understanding of the
               | underlying systems to make an informed decision (I
               | suspect that if using the AI software involved the end-
               | user feeding images into the model as a prerequisite
               | step, they might have better intuitions about how to
               | understand the implications of the images they generate
               | from the resulting model).
               | 
               | > _so nothing really changes for the "ethically
               | sensitive" use-cases._
               | 
               | I think the thing that changes is the whole playing
               | field. When overnight, anyone with a recent iPhone can
               | generate highly sophisticated art/images with no artistic
               | practice/training, it seems hard to argue that nothing
               | has changed.
               | 
               | Before AI, even with the constraints of human capability,
               | the art world was full of stories of stealing and bad
               | behavior. Some blatant, some ethically questionable but
               | thought provoking, etc. For all of their promise, the
               | tools at hand have the ability to grow that kind of
               | misuse at unprecedented scale.
               | 
               | What it even means to exist in an "ethically sensitive"
               | framework likely needs to change. Or at the very least,
               | current thinking needs to be examined to determine if it
               | still makes sense in light of these new tools.
        
               | l33tman wrote:
               | Yeah, I do agree with you on these points. What to do
               | about it though, if anything... I'm fairly liberal
               | personally, and I guess this is where pure political
               | subjective desires will come into play. In the US you'd
               | trust people to wield semi-automatic weapons on the
               | streets but you wouldn't trust them to run a program that
               | generates anime furry images of their favorite
               | characters? Granted that was an extreme example but still
               | :)
        
               | stickfigure wrote:
               | I humbly suggest you are committing the fallacy of
               | anthropomorphizing humans. What's your definition of
               | consciousness? How do you know that a (sufficiently
               | complex) biologically inspired computer program doesn't
               | have it? What's special about meat?
        
               | haswell wrote:
               | > _I humbly suggest you are committing the fallacy of
               | anthropomorphizing humans._
               | 
               | Considering the definition of that word, may I ask what
               | you're trying to say?
               | 
                | > _What's your definition of consciousness?_
               | 
               | I like Thomas Nagel's:
               | 
               | " _A creature is conscious if there is "something that it
               | is like" to be this creature; an event is consciously
               | perceived if there is "something that it is like" to
               | perceive it. Whatever else consciousness may or may not
               | be in physical terms, the difference between it and
               | unconsciousness is first and foremost a matter of
               | subjective experience. Either the lights are on, or they
               | are not._ "
               | 
               | It is because of this subjectivity that I find it
               | problematic to give weight to arguments that equate human
               | consciousness with machine consciousness. Even if we
               | achieve AGI tomorrow, and even if we know with certainty
               | that it is conscious, it does not automatically follow
               | that we would apply the same frameworks to a newly
               | conscious entity on the basis of consciousness alone.
               | 
               | Consciousness and the implications of that consciousness
               | can vary drastically, e.g. no one wants to be in the same
               | room when the sleeping grizzly bear wakes up.
               | 
                | > _How do you know that a (sufficiently complex)
                | biologically inspired computer program doesn't have it?_
               | 
               | I think we will eventually have to take this question
               | seriously, but current systems do not seem to approach
               | the levels of complexity required. But taking this
               | seriously is not at odds with the belief that current AI
               | programs are nowhere close to the level of complexity we
               | associate with conscious creatures.
               | 
                | > _What's special about meat?_
               | 
               | I think this is the question that many scientists and
               | researchers would love to answer.
               | 
               | There are some lines of thinking that consciousness is an
               | emergent property of a sufficiently complex biological
               | system with a sufficiently complex nexus of computation
               | to make sense of those systems. In this line of thinking,
               | the experiential aspect of consciousness - e.g. "what
               | it's like to feel pain" - is just as critical to the
               | overall experience as the raw computation capabilities in
               | the brain.
               | 
               | Maybe meat isn't special at all, and consciousness
               | springs from some other source or confluence. Even if it
               | does, we then need to have a conversation about whether
               | consciousness is the great equalizer, or if the "kind" of
               | consciousness also plays a role.
               | 
               | Going back to that grizzly bear, no one wants to be there
               | when it wakes up, but neither do we hold the bear to
               | human standards of value. If the bear kills someone, we
               | don't ascribe to it titles like "murderer".
               | 
               | But again, even if biology is not a key component, I
               | still don't believe arguments about consciousness can be
               | used as a basis for the ethics of the current generation
               | of tools, which are far too primitive relatively
               | speaking.
        
             | blackbear_ wrote:
              | Modern neural networks as used in language and diffusion models
             | have absolutely nothing to do with biological brains. That
             | was just a vision of the early pioneers 70 years ago.
        
             | rtepopbe wrote:
              | That characterization of computational neural networks
              | isn't particularly true in any meaningful way. And being able to
             | "correctly" anthropomorphize them is absolutely not the
             | goal.
             | 
             | Computational neural networks are not models of biological
             | brains, nor are they even attempting to be.
             | 
             | The basic functioning of a computational "neuron" in a
             | neural network is at most reflective of an extreme
             | distillation of the most fundamental concept of how a
             | biological neuron works. And it really is just their
              | functioning - i.e., executing.
             | 
              | The most important part of making a computational neural
              | network actually give meaningful output - training -
             | doesn't even rise to the level of being vaguely inspired by
             | the deconstruction of the concepts behind biological
             | functions.
             | 
             | So, no. They aren't models of biological brains any more
             | than boids are models of actual birds.
             | 
             | As for the goals of reasonably anthropomorphizing them...
             | you're talking pretty much full on artificial general
             | intelligence there. I don't believe anybody is reasonably
             | suggesting modern deep learning is even a particularly
             | viable route there, never mind something that's an active
             | goal.
        
           | l33tman wrote:
           | The processes seem pretty alike to me (as a neuroscientist
            | and AI researcher). Things will only move on from here:
            | the next generation of these tools won't use a training
            | set of 5B images and complicated month-long training
            | procedures. They will allow the "ingestion" of a style by
            | you showing it a single instance of a target image, and
            | they will immediately know the style (just like a human
            | artist would).
           | 
           | I'm not putting any weight here on what is good or bad for
            | society, but relying on the notion that humans somehow
            | work in a completely different way from where AI is and is
            | going is not going to help.
           | 
           | I do think it will take longer for the AIs to know all about
            | human contexts though, so the pairing of a human art
            | director and bulk-gen AI seems to me to be an obvious
            | near-term tag team that's hard to beat.
        
             | haswell wrote:
             | What I meant by the content/context of processes was that
             | one is a biological process that includes all of the
             | context and constraints of evolution, while the other is
             | still ultimately a man-made machine, operating with an
             | entirely different set of constraints, ultimately at the
             | direction of other humans.
             | 
             | If we could develop literal eyeballs that could look at
             | these images and translate the information the way humans
             | do, the resulting capability is still no more human-like
             | (in the sense that it should be afforded some human-like
             | status) than any other program IMO.
             | 
             | If we achieved AGI tomorrow, we'd still need to have a
             | conversation about what it is allowed to "see", because our
             | current notions about humans seeing things are all based on
             | the constraints of human capability. Most people understand
             | that a surveillance camera seeing something and a human
             | seeing something have very different implications.
             | 
              | In the short term, it's a conflation that I'd argue makes
              | us see less clearly what these systems are and are not,
             | and leads to some questionable conclusions.
             | 
             | In the long term, it's a whole other ball of wax that will
             | still require either new regulations or new ways of
             | thinking.
        
               | shanebellone wrote:
               | Well said.
               | 
                | If you place a human and a computer in front of a
                | painting, the human seeing the painting is a
                | consequence of biology, while the computer seeing the
                | painting is a consequence of design.
               | 
               | There's always a distinction between happenstance and
               | premeditation.
        
               | theRealMe wrote:
               | I'll be honest, this sounds like you have made a decision
               | on your stance and now you're building false distinctions
               | to reinforce your own bias.
               | 
               | You said a lot of words, but I believe your argument
               | comes down to "computers are super powered compared to
               | humans doing the same thing"? Is that accurate? Because
               | magnitude of ability, to me, makes no difference at all.
               | It's perfectly acceptable for a human to study the
               | artwork of a specific person and then create their own
               | works based on that style. Why wouldn't it be the same
               | for an automated process?
        
               | Maursault wrote:
               | I think you're both barking up the wrong tree. A person,
               | and even an animal, possibly even a plant or members of
               | other kingdoms and domains, sees. A computer does not see
                | any more than a lens sees, or to the extreme, a computer
                | cannot see any more than an empty paper towel roll can
               | see. The computer, lens and empty paper towel roll have
               | no "I," no ego. In order to see, there must be something,
               | or more accurately, some _one_ , seeing. AI is just a
               | complex program, which is ultimately an algorithm, and to
               | be very simplistic, a recipe. A recipe can never be
               | conscious, can never have a sense of self nor a sense of
               | anything. Just because a photocopier can reproduce an
               | image doesn't remotely mean that it or anything within it
               | could ever see anything.
        
               | Maursault wrote:
               | I should have known my comment was doomed for downvoting.
               | Many coders here. Many among them believe Strong AI is
                | attainable. Everyone has self-bias and tends to believe
                | their beliefs are correct and true. Anyone who believes
               | Strong AI is attainable will evaluate that belief as
                | correct, even with overwhelming evidence to the
               | contrary. It is not a deficiency of programming that
               | Strong AI will never be achieved, rather, it is an
               | insurmountable problem of philosophy. No one takes
                | philosophy seriously except philosophers. Coders, or at
                | least a large percentage of them, because they are
                | creators, often take themselves too seriously, and going
               | right along with that is their beliefs, which they find
               | near impossible to relinquish, even when it is shown
               | beyond doubt their beliefs are not realistic. Strong AI
               | can never be attained due to what computers are and the
               | way computers work, and also what code is and how code
               | works. This is not to say striving for Strong AI is a bad
               | idea, because it isn't. Great things will come from that
               | struggle, just not Strong AI.
               | 
               | No one knows why we are conscious. We have sliced the
               | brain up a thousand ways and we will slice it up a
               | million more and will never find consciousness because it
                | is an emergent property of a healthy brain, just like light
               | is an emergent property of a working light bulb. No
               | matter how you disassemble a light bulb, you will never
               | find the light, though I grant you'll eventually figure
               | out how light is produced, the assumption that a light
                | bulb contains light is wrongheaded. It's just a
               | metaphor.
               | 
                | There is no worse slander than the truth: Strong AI
                | cannot be achieved, not with digital computers and
               | programming and machine learning, and most likely by no
               | other method either. Please, please grow up, and set
               | aside your childish beliefs, because we need you now more
               | than ever, here, in the real world.
        
               | theRealMe wrote:
               | I didn't downvote you(tbh I don't even know how to
               | downvote). But I didn't respond to you because I don't
               | understand the relevance of what you are saying. You said
               | we're both wrong and then went on to talk about how
               | inanimate objects can't see? It just doesn't make sense
               | to me what you're trying to say.
        
               | Maursault wrote:
               | The crux of it is that it is a false assumption, or more
                | accurately a wrongheaded assumption, to suggest that
               | Stable Diffusion sees anything or to equate or compare
               | what Stable Diffusion does do with biological sight. Only
               | an individual, whether that be a person, animal, plant,
               | etc., can see. A program, no matter how complex, no
               | matter how advanced its hardware, will never be an
               | individual, an ego, something that sees. It can only
               | mimic and fool us into believing something there is
               | seeing, but we should know better.
        
             | cycomanic wrote:
             | I find this take interesting. So would you also argue that
             | saving an image into computer memory is the same as
              | memorizing an image for a human? Those processes are
              | viewed very differently by the law, but if we anthropomorphize
             | computers should we not view them the same?
             | 
             | Also I wonder where you get the view that future ML systems
             | will not require large amounts of learning? I don't see any
             | development in current systems that would allow that, or do
             | you mean you have a network trained on large amounts of
             | data which can then adjust to a style from a single image?
             | If that's the case we are still at the same question, how
             | was the original model trained.
        
               | l33tman wrote:
                | For the second question, yeah, exactly: as long as
                | you've trained the rest of the system to a certain
                | degree, you can certainly do one-shot training on top
                | of that already, for object recognition for example,
                | and I think you will be able to do it for style
                | acquisition for diffusion models as well soon (you can
                | already do quick overfit training on them, at home, in
                | a couple of minutes with 10-20 images).
               | 
                | Essentially this is what the brain does when you do
                | one-shot learning of traffic signs, or of characters
                | when learning a new alphabet, etc. (yeah, sometimes
                | it's not that easy, but it's still "theoretically"
                | possible :). The rest of the recognition pipeline is so
                | general that styles and objects are just a small icing
                | on the cake to learn on top; you don't need to retrain
                | all the areas of the brain when adding a road sign to
                | your driving skill set.
               | 
                | But my point was that you could train the rest of the
                | network on more general public data and not Greg
                | Rutkowski. Hooray. Then someone shows it a single Greg
                | image and you're back to square one.
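The frozen-backbone idea in the comment above can be sketched in a few lines. This is a toy illustration (my construction, not anything l33tman describes concretely): a fixed random projection stands in for a heavily pre-trained, frozen backbone, and only a tiny linear head is trained on ~20 examples, mirroring few-shot adaptation on top of general features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen, pre-trained backbone: a fixed projection that
# maps raw inputs to general-purpose features. In a real system this is
# the bulk of the network, trained once on large amounts of data.
W_backbone = rng.normal(size=(16, 8))

def features(x):
    return np.tanh(x @ W_backbone)  # backbone weights never change

# A handful of new examples (standing in for 10-20 "style" images).
X = rng.normal(size=(20, 16))
y = (X @ W_backbone[:, 0] > 0).astype(float)  # toy binary labels

# Few-shot adaptation: train only a tiny linear head on top.
w, b = np.zeros(8), 0.0
for _ in range(1000):
    p = 1 / (1 + np.exp(-(features(X) @ w + b)))  # sigmoid
    grad = p - y                                  # logistic-loss gradient
    w -= 0.5 * features(X).T @ grad / len(X)
    b -= 0.5 * grad.mean()

pred = (1 / (1 + np.exp(-(features(X) @ w + b))) > 0.5).astype(float)
acc = (pred == y).mean()
print(f"few-shot head accuracy on the 20 examples: {acc:.2f}")
```

The point of the sketch is only that the expensive part (the backbone) is trained once on general data; adapting to something new touches a tiny fraction of the parameters, which is why the question "how was the original model trained" still matters.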
        
               | sebzim4500 wrote:
                | > Those processes are viewed very differently by the
                | law, but if we anthropomorphize computers, should we
                | not view them the same?
               | 
               | Not only do I think the two processes are essentially the
               | same, but I can't think of any laws in my jurisdiction
               | (the UK) which actually distinguish between them.
               | 
               | E.g. we are allowed to make copies of digital media for
               | personal use.
        
           | LeifCarrotson wrote:
           | This anthropomorphization happens all the time in the other
           | direction: Software actors are seen reading license plates,
           | wiretapping connections, checking speed limits and red light
           | compliance, monitoring uploads for copyright infringement,
           | issuing takedowns, and otherwise acting as legal entities.
           | Government actors are constantly allowed to do things because
           | those rights would be afforded to an individual policeman or
           | other human agent.
           | 
           | I agree that this is a dangerous fallacy. Something that
           | legislatures and culture have agreed is fine for a human to
           | do - limited by human scaling, memory, and skill - may not be
           | fine for a computer to do.
        
         | williamcotton wrote:
         | > If the image is freely viewable (say you can browse to it),
         | and you just look at it, are you violating any rights?
         | 
         | This isn't the kind of question that the lawyers of the
         | defendants are going to ask the court.
         | 
         | They'll more likely ask if it isn't clearly fair use similar to
         | Sony v Universal and Authors Guild v Google and then present
         | evidence of significant non-infringing commercial use.
         | 
          | > It seems that violation would only come if you would use
          | the model to produce images that are derivative of that
          | original image, the same way a counterfeiter would make a
          | copy of it. Having the skill to copy is not the same as
          | actually copying.
         | 
         | Yeah, that's basically how the courts see it these days
         | although for a different reason. They don't ask questions about
         | skills or work or anything like that. They ask questions like,
         | "is this supposed infringing work a replacement in the market
         | for the plaintiff's work?".
         | 
         | The deeper questions about what the hell anyone meant by the
         | words in the Constitution about Copyright wait for the highest
         | courts to get involved, which is where we got this nice
         | division between tools and what the tools are used for which
         | allows for innovative fair use of copying other people's
         | protected works with tools like VCRs, online book search and
         | large language models.
        
           | jacquesm wrote:
           | > They'll more likely ask if it isn't clearly fair use
           | similar to Sony v Universal and Authors Guild v Google and
           | then present evidence of significant non-infringing
           | commercial use.
           | 
           | Those were not cases about 'generators' but about
           | 'aggregators', a completely different class of application.
        
             | williamcotton wrote:
             | There's no existing legal doctrine around "generators" and
             | "aggregators" but there is around "commercially significant
             | non-infringing use". Something like what you're saying
             | would need to be established by the higher courts.
        
               | jacquesm wrote:
                | Of course there is. You can't infringe without
                | publishing a work, and passing off the work of others
                | as a new creation just because it has been shredded
                | and then sewn back together again is still infringing.
               | 
                | Those cases hinged on republishing works, or
                | significant parts of works, _as themselves_; they
                | weren't trying to pass them off as new, original works
                | in their own right.
               | 
               | And this is exactly what this court case is about,
               | whether or not Stable Diffusion ultimately is just
               | another - complex - form of mechanical transformation or
               | whether it creates original work.
               | 
                | In my opinion the only things all of these companies
                | got wrong are that they (1) failed to obtain consent
                | from the suppliers of the inputs to their models and
                | (2) have not themselves contributed even a little bit
                | of the input data.
               | 
               | The ultimate laugh test is whether or not these companies
               | themselves slap (C) signs on everything and are willing
               | to litigate when they believe _their_ rights are the ones
               | that are infringed upon. I hope for an outcome where opt-
               | in will become the norm, that would seem to be a
               | reasonable middle ground.
               | 
                | Finally, note that copyright is not a local concept
                | but a global one - and has been for a long time - and
                | anything that happens on that front will have to be
                | ratified in a different forum than some US court.
        
               | ROTMetro wrote:
                | I posit that none of these works will be
                | copyrightable, because to be copyrightable you need at
                | least an 'anonymous artist' to assign their copyright
                | to a company, and there is no 'anonymous artist' in
                | these scenarios. A prompt writer cannot be considered
                | an anonymous artist; at most, I guess, they could try
                | to copyright the language in their prompt. But the
                | output? Nope. It doesn't meet the requirements for
                | copyright.
        
               | williamcotton wrote:
               | > And this is exactly what this court case is about,
               | whether or not Stable Diffusion ultimately is just
               | another - complex - form of mechanical transformation or
               | whether it creates original work.
               | 
               | No, it's not at all. This court case is about:
               | 
               | Plaintiffs Sarah Andersen, Kelly McKernan, and Karla
               | Ortiz ("Plaintiffs"), on behalf of themselves and all
               | others similarly situated, bring this Class Action
               | Complaint (the "Complaint") against Defendants Stability
               | AI Ltd. and Stability AI, Inc. (collectively
               | "Stability"); Midjourney, Inc. ("Midjourney"); and
               | DeviantArt, Inc. ("DeviantArt") (all collectively
               | "Defendants") for:
               | 
                | 1.) direct and vicarious copyright infringement under
                | 17 U.S.C. § 501;
                | 
                | 2.) violation of the Digital Millennium Copyright Act,
                | 17 U.S.C. §§ 1201-1205 (the "DMCA");
                | 
                | 3.) violation of Plaintiffs' statutory and common law
                | rights of publicity, Cal. Civ. Code § 3344;
                | 
                | 4.) violation of Unfair Competition law, Cal. Bus. &
                | Prof. Code §§ 17200, et seq.;
                | 
                | 5.) and declaratory relief.
               | 
               | So for each of those complaints the defense needs to
               | establish that their actions fit a different narrative,
               | one that is legally coherent and against the claims for
               | damages.
               | 
               | So for copyright infringement they are going to go for a
               | fair use defense. I'm sure they won't only reference VCRs
               | and Google Books! I'm certain they won't talk about
               | "aggregators" and "generators" because this is not a
               | Supreme Court opinion. They're going to use the
               | established legal doctrines. I'm sure that their lawyers
               | have plenty of other relevant case law at their disposal.
               | 
                | As for the DMCA and rights of publicity, this seems to
                | be what motivated Stability AI to adhere to "takedown
                | requests", as they probably had some lawyer whispering
                | in their ear that they don't want to spend the time
                | and money testing this in court if it doesn't really
                | impact the marketability of their tool.
                | 
                | I haven't read anything about the Unfair Competition
                | law in California.
        
               | jacquesm wrote:
               | I've been the plaintiff in a case like this in Dutch
               | court where the counterparty first tried to argue that
               | since my code is 'visible to all' a fair use exemption
               | should be granted, when that fell through they tried to
               | argue that they did not take my code from my site but
               | from another site which presumably took it from my site
               | and which didn't have any attribution so that they were
               | free to use it. Then that fell through too[1]. Then they
               | were left with no defense at all and I ended up being
               | awarded pretty much all of their online property.
               | 
               | Judges are _far_ from stupid and a fair use defense
               | requires that you primarily acknowledge that you are in
               | fact infringing but that you feel that because it is fair
               | use you should be allowed to continue to do so. This is a
               | pretty risky strategy, especially when you are a party
                | that is in the business of hosting other people's
               | creative content.
               | 
               | We'll see how it all pans out, personally I think their
               | position would be much, much stronger if they had
               | bothered to obtain consent, even an opt-out email that if
               | not responded to within say 3 months would count as
               | consent (and no: obviously that's not the same but we're
               | comparing the relative size of fig-leaves here).
               | 
               | As it stands I don't see how their 'fair use defense'
               | will hold together under scrutiny without opening a much
               | bigger can of worms.
               | 
               | [1] They had to admit infringement because of their first
               | line of defense, then the fall-back required them to
               | point at the site where they presumably took the code
               | from which they could not. Don't interrupt your opponent
               | when they are making mistakes.
        
               | williamcotton wrote:
               | What makes "commercially significant non-infringing use"
               | such a great doctrine is that it lets there be some
               | objective measurement of how generally useful a given
               | practice is towards the benefit of the public good. The
                | doctrine does this by establishing that the tool can
                | be used in a myriad of ways that in no way directly
                | compete with the original work in the marketplace. For
                | example, when Stable Diffusion is being used for
                | inpainting to remove a stranger from the background of
                | a wedding photograph, it is clear that Sarah Andersen
                | is not being impacted in any way, shape or form,
                | despite the fact that her works were temporarily
                | copied as part of the data ingestion process.
               | 
               | This doctrine gives the lower courts a clear test that
               | doesn't require diving deep into arguments about what it
               | means for "computers to learn ideas" or for "language
               | models to author".
               | 
                | The problem with an opt-in model is that it puts
                | enough of a burden on model creators that it
                | discourages non-infringing commercial creation. The
                | opt-out-on-request model, which would need some form
                | of legislation, probably informed by both the DMCA and
                | the right to be forgotten in the EU, would be
                | preferable, as it would place much less of a burden on
                | model creators.
        
               | jacquesm wrote:
               | Well, I didn't say that opt-out was the way to go, but
               | that it would be better than nothing, which is what they
               | have right now.
               | 
               | You seem to be making a very convoluted argument that
               | eventually boils down to 'because it is useful it must be
               | right', aka an argument from utility. But copyright law
               | has time and again been proven to be highly resilient
               | against such arguments. You either have rights or you
               | don't and in a moment of clairvoyance the people that
               | came up with the current incarnation decided that it is
               | such an important thing that it gets bestowed _upon
               | creation_. No registration required (though it can help).
               | Just making something and _boom_ you have a bunch of
               | rights _which you can only contract out of_.
               | 
               | I don't think any utilitarian argument that results in
               | the creation of new works based on the works of others
               | will make those rights go away.
               | 
               | I've read your other comments and I see that this tool is
               | useful to you but don't be persuaded so easily by the
               | utility: If I stole your work and passed it off as my own
                | it might be very useful to society, especially if I
                | re-licensed it under more permissive terms or even
                | placed it in the public domain. But I would clearly be
                | infringing on
               | your rights. You may in fact not be in a position to
               | claim these works as your creation.
               | 
                | The fact that 'my' work has a few hundred or even a
                | few thousand such inputs rather than just one does not
                | change the principle: I did not create the work, and
                | _that_ is the bit that really matters. Unless you are
                | creating a work, it doesn't matter if you have the
                | equivalent of a bitcoin tumbler for art at your
                | disposal to pretend that you have created one. You did
                | not. The fact that you used a tool that obfuscates
                | attribution and overrules the licensing terms of the
                | original copyright holders does not mean you can claim
                | your hands are clean: you know exactly what is going
                | on behind the scenes.
               | 
                | Personally I won't go within a mile of these tools to
                | create work that I put my name under.
        
               | williamcotton wrote:
                | I'm willing to play this silly game for one reason:
                | it's absurd, and I want you to look silly. You've now
                | insulted my artistic practice as unoriginal, so the
                | gloves are off.
               | 
               | So I've used Stable Diffusion and I'm "literally"
               | stealing from every artist. Prove it. Get a warrant to
               | search my premises for signs of illegal language model
               | use. How do you get a warrant with an image that has no
               | visual evidence of being a copy?
        
               | jacquesm wrote:
               | It's not silly. If you disconnect from the internet and
               | you can no longer produce the same level of art that's
               | all the proof that's required. Clearly you are sourcing
               | it from somewhere, not your own brain.
               | 
               | I can prove to you that my writing and my code are mine
               | because you can stand behind me and look over my shoulder
                | to see that I am creating it, one bit (or at most 8)
                | at a time. Visual evidence of it being a copy is not
               | required: what's required is a track of creation aka
               | provenance. This is a very well defined area in
               | copyright. So the proof would be trivial: you recreate
               | your work again, without access to Stable Diffusion or
               | equivalent, while being monitored and if you can not then
               | that would count as a bust in my book.
               | 
               | Without provenance you are still creating art, but you
               | are not creating original art.
        
         | pessimizer wrote:
         | > I don't understand how using an image as input to a model is
         | a copyright infringement.
         | 
         | Look at the extreme case, then. What if that one image is your
         | _only_ input, and your output is identical to it? What if your
         | output is your input reflected over the x-axis? What if your
         | output just crops the input? What if your output is your input
         | cut into irregular pieces and randomly rearranged? Which
         | outputs violate copyright?
         | 
         | Slightly less extreme: suppose your input is two images, and
         | your output is those two images next to each other in a single
         | image? Or your output is the second image, reduced in size and
         | placed in the center of the first? What if both of the inputs
         | are human figures, and your output is to cut out the face and
         | hands of one image and put it onto the other?
         | 
         | > images that are derivative of that original image, the same
         | way a counterfeiter would make a copy of it.
         | 
          | Only one of these outputs is anything a counterfeiter would
          | do. Are any of the others copyright-violating?
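The thought experiment above is easy to make precise. A small numpy sketch (an illustration of the comment's extreme cases, not a legal claim) shows that each listed output is a pure mechanical function of the input pixels:

```python
import numpy as np

# A stand-in "image": a 4x4 grayscale array.
original = np.arange(16).reshape(4, 4)

identical = original.copy()          # output == input
flipped = np.flipud(original)        # reflected over the x-axis
cropped = original[1:3, 1:3]         # output just crops the input
pieces = np.random.default_rng(1).permutation(original.ravel())
shuffled = pieces.reshape(4, 4)      # cut up and randomly rearranged

# Each output contains nothing that was not already in the input:
assert np.array_equal(identical, original)
assert np.array_equal(np.flipud(flipped), original)  # flip is invertible
assert cropped.shape == (2, 2)
assert sorted(shuffled.ravel()) == sorted(original.ravel())
```

Which of these deterministic transforms crosses the line into infringement is exactly the legal question the comment poses; the code only shows there is a smooth continuum between "identical copy" and "rearranged pieces".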
        
           | threeseed wrote:
            | If the input argument were true, then what about apps
            | like Adobe Lightroom?
            | 
            | Would they be able to use your photos for Adobe Stock
            | without permission?
        
         | lofaszvanitt wrote:
          | Think of it this way:
          | 
          | In order to create 5 very different illustrations, you need
          | to talk with 5 people; in the end, 5 people will get money
          | when they finish their work.
          | 
          | An AI consumes these artists' past output, and instead of
          | paying these artists, it gathers income for its owner. So
          | by using the output of 5 people who have spent decades
          | perfecting their craft, the AI generates income by stealing
          | their work, and the money flows only to the owner, who
          | gives nothing back to these people.
          | 
          | So in essence, AI in this form kills the income stream for
          | humans, since it gives back nothing.
        
           | theRealMe wrote:
            | Throughout history, almost all skills have been learned
            | or copied from other people. Things like art especially
            | are learned by studying previous work.
            | 
            | What specifically is the defining reason that people can
            | learn by copying other people's styles but AI cannot?
            | 
            | Are we supposed to halt technological progress to avoid
            | destroying antiquated jobs?
        
             | lofaszvanitt wrote:
              | For a human it takes time and effort to copy or learn
              | someone's style, and the outcome is not guaranteed to
              | be a success. There is no problem with AI as such, but
              | if you generate an image with words, something like
              | make a painting in the style of "X artist", then maybe
              | compensate people for their work: the artist, and
              | everybody else whose expertise was used in the creation
              | of the given art.
              | 
              | So you want an image? For 5 bucks? You get an image
              | that's worth 5 bucks, but not an image that costs 1000
              | dollars to make in real life.
              | 
              | The problem here is giving a simpleminded person access
              | to an AI so that, for a few bucks, this person can
              | generate something that draws on thousands of man-years
              | of expertise.
              | 
              | I hope you see the potential slippery slope here.
        
         | majormajor wrote:
          | > It seems that violation would only come if you would use
          | the model to produce images that are derivative of that
          | original image, the same way a counterfeiter would make a
          | copy of it. Having the skill to copy is not the same as
          | actually copying.
         | 
         | With this sort of model's "creation" process, is something
         | close to everything it generates derivative of everything it
         | ingested, since had you ingested a different set of images
         | you'd presumably have a different model with different weights?
         | 
         | That's _kinda sorta_ analogous to human creation, but a human
         | can much more actively choose what to think about, what to
         | ignore, what to filter out.
         | 
          | The human process involves an explicit creative judgement
          | step that I don't think the image-generation-by-model
          | process can match - and that creative transformation is
          | key, legally, to a derivative work being able to itself be
          | copyrightable and to not be infringing.
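The "different inputs give different weights" point above can be made concrete with a toy model (a least-squares stand-in of my own, not an actual diffusion model): fitting the same architecture to two different ingested datasets yields different weights, so every output of the fitted model is, in that narrow sense, a function of everything ingested.

```python
import numpy as np

rng = np.random.default_rng(42)

def train(images):
    # Toy "model": least-squares weights fitted to a dataset, standing
    # in for the idea that every learned weight is a function of the
    # training data.
    y = np.tanh(images.sum(axis=1))        # fixed nonlinear target
    w, *_ = np.linalg.lstsq(images, y, rcond=None)
    return w

set_a = rng.normal(size=(50, 4))           # one set of ingested images
set_b = rng.normal(size=(50, 4)) + 0.5     # a different ingested set

w_a, w_b = train(set_a), train(set_b)

# Swap the training set and the fitted weights change, even though the
# architecture and the training procedure are identical.
print("weights identical:", np.allclose(w_a, w_b))
```

Whether that mathematical dependence amounts to every output being a *derivative work* of every input is, of course, the legal question the comment raises, not something the code settles.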
        
       ___________________________________________________________________
       (page generated 2023-01-26 23:01 UTC)