[HN Gopher] An unwilling illustrator found herself turned into a...
       ___________________________________________________________________
        
       An unwilling illustrator found herself turned into an AI model
        
       Author : ghuntley
       Score  : 239 points
       Date   : 2022-11-01 15:57 UTC (7 hours ago)
        
 (HTM) web link (waxy.org)
 (TXT) w3m dump (waxy.org)
        
       | qull wrote:
        | So she is claiming the flat corporate style, but with more
        | saturation, as her own? That's ballsy. It's a style for sure,
        | but one based on looking oversimplified and generic imo. No
        | surprise it's easy to emulate, and a common averaging result.
        | It's not likely that an actually distinct style, unless
        | massively popular, would have this issue. That said, the
        | greater question is no less valid: what elements of style do we
        | own, and how will ownership manifest moving forward when many
        | new works use AI images as a starting point, which are in turn
        | based on other artists' work, some of it AI generated? Will it
        | become a feedback loop sooner or later, and what will that look
        | like?
        
         | burkaman wrote:
         | DALL-E and Stable Diffusion are very good at replicating the
         | distinctive styles of famous artists like Van Gogh or Seurat or
         | anyone else you can think of, as well as other living artists:
         | https://www.technologyreview.com/2022/09/16/1059598/this-art...
         | 
         | This is not just about styles that you personally don't like.
         | It will work for any style with a lot of available existing
         | work, popularity is irrelevant.
        
         | [deleted]
        
         | jalk wrote:
          | That was not my takeaway:
         | 
         | >... the rendering, brushstrokes, and colors are the most
         | surface-level area of art. I think what people will ultimately
         | connect to in art is a lovable, relatable character. And I'm
         | seeing AI struggling with that."
         | 
          | So she doesn't want her name associated with what are, in her
          | eyes, bland illustrations that copy the style she has used
          | for some big, well-known clients.
        
         | simonw wrote:
         | Go take a look at her portfolio: https://holliemengert.com/
         | 
         | It's clear to me that she has a very distinctive style of her
         | own. It's not "flat corporate style" at all.
        
         | ROTMetro wrote:
         | Homeboy in the article specifically trains AI on her work to
         | get specific results that return when her name is used because
         | he wants to be able to recreate her style.
         | 
         | Fans of her work alarmed by the closeness point out to her
         | what's happened.
         | 
         | Qull as appointed gatekeeper of style for the internet: 'she
         | has no discernable style'.
        
       | yieldcrv wrote:
       | Reading all of that, the biggest issue was just the art style
       | naming.
       | 
       | One of the key features of Stable Diffusion is adding "in the
       | style of <artist name>" to the prompt. This just has a
       | contemporary/living artist and actually lets an individual train
       | Stable Diffusion for anyone to add that style to their own Stable
       | Diffusion instance, instead of waiting for Stability.ai to
       | release another dataset.
       | 
        | He has since renamed the style, but he should just say
        | "inspired by <artist name>".
       | 
       | It is so similar to what someone inspired by a particular artist
       | would do that I can't make a separate standard.
       | 
       | Basically the pushback comes from the level of discipline that
       | was once required in the past (2 months ago) compared to now.
       | That level of discipline is no longer required.
        
       | cmovq wrote:
        | I know this website is not a hivemind, but it's interesting
        | that every time an article like this gets posted, the majority
        | opinion seems to be that training diffusion models on
        | copyrighted work is totally fine. In contrast, when talking
        | about training code generation models, there are multiple
        | comments mentioning this is not ok if licenses weren't
        | respected.
       | 
       | For anyone who holds both of these opinions, why do you think
       | it's ok to train diffusion models on copyrighted work, but not
       | co-pilot on GPL code?
        
         | bombcar wrote:
         | If I were to steel man both sides it'd be something like this:
         | 
          | 1. Training an AI model on code (so far) makes it regurgitate
          | code line-for-line (with comments!). This is like "learning
          | to code" by just cutting and pasting working code from other
          | codebases: you have to follow the license. The AI doesn't
          | "understand the algorithm" at all (or it hasn't been told
          | "don't export the input you fool"). Obviously a bog-simple AI
          | could make all licenses moot by dumping out its input, and
          | the courts wouldn't permit that.
         | 
          | 2. Training an AI model on illustrations so far produces
          | "style parodies" which may look similar to an untrained eye
          | (the artist here is annoyed because she wouldn't draw it like
          | that, even though to us it looks similar enough). Drawing a
          | picture that _looks_ like Mickey Mouse is a _trademark_
          | violation, but _tracing_ a picture of the Mouse is _both_ a
          | trademark and a copyright violation.
         | 
         | The first violates some pretty clear _legal_ concepts; the
         | second is closer to violating _moral_ concepts but those are
         | more flexible - if an artist spends years learning to paint in
         | the style of Michelangelo is that immoral?
        
           | yenwodyah wrote:
           | AI image generators also often churn out near-exact replicas
           | of their inputs. For example:
           | 
           | Original: https://static-cdn.jtvnw.net/ttv-
           | boxart/460636_IGDB-272x380....
           | 
           | Copies: https://lexica.art/?q=bloodborne
        
             | Abroszka wrote:
              | Exact replicas are an issue. If you are using AI image
              | generation to replicate a near-exact image, then that's
              | illegal. But nobody cares if you copy a nice code pattern
              | from GPL code and apply it to your own code base. In the
              | same fashion nobody should care if you make an image in
              | the same art style.
        
           | lolinder wrote:
           | The problem with this argument is that it's founded in how
           | the AI is used, not how it is made. It's not a compelling
           | reason to ban the tool, it's a compelling reason to regulate
           | its use.
           | 
            | Copilot _can_ produce code verbatim, but it doesn't unless
            | you specifically set up a situation to test it. It requires
            | things like "include the exact text of a comment that
            | exists in training data" or "prefix your C functions the
            | same way as the training data does".
           | 
           | In everyday use, my experience has been that Copilot draws
           | extensively from files that I've opened in my codebase. If I
           | give Copilot a function body to fill in in a class I've
           | already written, it will use my internal APIs (which aren't
           | even hosted on GitHub) correctly as long as there are 1-2
           | examples in the file and I'm using a consistent naming
           | convention. This isn't copypasta, it really does have a clear
           | understanding of the semantics of my code.
           | 
           | This is why I'm not in favor of penalizing Microsoft and
           | GitHub for creating Copilot. I think there needs to be some
           | regulation on how it is used to make sure that people aren't
           | treating it as a repository of copypasta, but the AI itself
           | is pretty clearly capable of producing non-infringing work,
           | and indeed that seems to be the norm.
        
             | ceres wrote:
             | Please let's not start dictating how people should use a
             | piece of software. It would be like "regulating" Microsoft
             | Word just because people might use it to duplicate
             | copyrighted works.
        
               | lolinder wrote:
               | I'm not saying we should regulate the software, I'm
               | saying we need some rigorous method of ensuring that
               | using the AI tools doesn't put you in jeopardy of
               | accidental copyright infringement.
               | 
               | We most likely don't need new laws, because infringement
               | is infringement and how you made the infringing work is
               | irrelevant. Accidental infringement is already illegal in
               | the US.
        
             | irrational wrote:
             | How would you regulate this?
        
               | lairv wrote:
               | There is a part of Deep Learning research (Differential
               | Privacy) which focuses on making sure an algorithm cannot
               | leak information about the training set, and this is a
               | rigorous concept, you can quantify how much privacy-
               | preserving a model is, and there are methods to make a
               | model "private" (at the cost of performance I think for
               | now)
        
               | f_devd wrote:
                | Differential Privacy only proves that a model cannot
                | leak more than a certain amount of information about
                | individual samples of the training set. This only
                | guarantees the input is not leaked back exactly; any
                | composition of the training set is still valid,
                | although in image generation that usually means a very
                | distorted image.
               | 
               | An example of DP in image generation (using GANs):
               | https://par.nsf.gov/servlets/purl/10283631
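                | 
                | For intuition, here is a rough sketch of the core
                | DP-SGD recipe (clip each sample's gradient, then add
                | calibrated noise); the names and constants are
                | illustrative, not from any particular library:
                | 
                |     import numpy as np
                | 
                |     def dp_sgd_step(params, per_sample_grads,
                |                     clip_norm=1.0, noise_mult=1.1,
                |                     lr=0.01):
                |         # Bound each sample's influence on the update
                |         clipped = [g * min(1.0, clip_norm /
                |                            (np.linalg.norm(g) + 1e-12))
                |                    for g in per_sample_grads]
                |         # Average, then add Gaussian noise scaled to
                |         # the clipping bound
                |         avg = np.mean(clipped, axis=0)
                |         sigma = noise_mult * clip_norm / len(clipped)
                |         noise = np.random.normal(0.0, sigma, avg.shape)
                |         return params - lr * (avg + noise)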
        
             | andrewxdiamond wrote:
             | > Copilot can produce code verbatim, but it doesn't unless
             | you specifically set up a situation to test it.
             | 
             | It does not matter what a service can or cannot do. We do
             | not regulate based on ability, but on action.
             | 
              | The service has an obligation to the license holders of
              | the training data to not violate the license. The
              | mechanism by which the license is violated is irrelevant.
              | The only thing that matters is the code ended up
              | somewhere it shouldn't, and the service is the actor in
              | the chain of responsibility that dropped the ball.
             | 
              | The prompting of the service is irrelevant. If I ask you
              | to reproduce a block of GPL code in my codebase and you
              | do it, you violated the license. It does not matter that
              | I primed you or led you to that outcome. What matters is
              | the legally protected code is somewhere it shouldn't be.
        
               | lolinder wrote:
               | > If I ask you to reproduce a block of GPL code in my
               | codebase and you do it, you violated the license. It does
               | not matter that I primed you or lead you to that outcome.
               | What matters is the legally protected code is somewhere
               | it shouldn't be.
               | 
                | This isn't accurate. If I reproduce GPL code in your
                | codebase, that's perfectly acceptable as long as _you_
                | obey the terms of the GPL when you go to _distribute_
                | your code. In this hypothetical, my act of copying
                | isn't restricted under the GPL license, it's _your_
                | subsequent act of distribution that triggers the viral
                | terms of the GPL.
               | 
               | The big question that is still untested in court is
               | whether Copilot _itself_ constitutes a derivative work of
               | its training data. If Copilot is derivative then
               | Microsoft is infringing already. If Copilot is
               | transformative then it is the responsibility of
               | downstream consumers to ensure that they comply with the
               | license of any code that may get reproduced verbatim.
                | This question has not been ruled on, and it's not clear
                | which direction a court will go.
        
           | ChrisMarshallNY wrote:
           | I'm pretty sure that there's a considerable amount of art
           | hanging in museums, that was done by students of great
           | artists. I think there are several Mona Lisas, done by da
           | Vinci's students, and they are almost identical to the
           | original.
        
           | PuddleCheese wrote:
           | Re. Point 2:
           | 
           | Artists are granted copyright for their work by default per
           | the Berne Convention. These copyrighted works are then used
           | without consent of the original author for these models.
           | 
           | Additionally, the argument that you can't copyright a style
           | is playing fast and loose with most things that are
           | proprietary, semantically.
        
             | UncleEntity wrote:
             | > Additionally, the argument that you can't copyright a
             | style is playing fast and loose with most things that are
             | proprietary, semantically.
             | 
              | This has been true since copyright existed: Braque
              | couldn't copyright cubism -- Picasso saw what he was
              | doing and basically copied the style, with nothing to be
              | done aside from not letting him into the studio.
        
             | AlgorithmicTime wrote:
             | But if I train my own neural network inside my skull using
             | some artist's style, that's ok?
             | 
             | Either a style is copyrightable or it's not. If it's not,
             | then I can't see any argument that you can't use it
             | yourself or by proxy.
        
               | PuddleCheese wrote:
               | The brain-computer metaphor is not a very good one, it's
               | a pretty baseless appeal. Additionally, it's an argument
               | that anthropomorphizes something which has no moral,
               | legal, or ethical discretion.
               | 
                | You do not actively train your brain in remotely
                | similar ways, and you, as an individual, are
                | accountable to social pressures. That is an issue these
                | companies are trying to avoid with ethically
                | questionable scraping/training methods and research
                | loopholes.
               | 
               | Additionally, many artists aren't purely learning from
               | others to perfectly emulate them, and it's quickly
               | spotted if they are, generally. Lessons learned do not
               | implicitly mean you perfectly emulate that lesson. At
               | each stage of learning, you bias things through your own
               | filter.
               | 
               | Overall, the idea that these two things are comparable
               | feels grotesque and reductionist, and feel quite similar
               | to the "Well I wasn't going to buy it anyway" arguments
               | we've been throwing around for decades to try to justify
               | piracy of other materials.
               | 
                | At the end of the day, an argument that "style can't be
                | copyrighted" is ignoring a lot of aspects of its
                | definition, including the means, and can be
                | extrapolated into an argument that nothing proprietary
                | should be allowed to exist...
        
               | akiselev wrote:
               | _> Overall, the idea that these two things are comparable
               | feels grotesque and reductionist_
               | 
               | I agree with you there but the alternative - that they're
               | not comparable - I find equally grotesque and full of
               | convenient suppositions rooted in romanticism of "the
               | artist". We're in uncharted territory with AI finally
               | lapping at the heels of creative professionals and any
               | analogy is going to fall apart.
               | 
                | This feels like something that we should leave to the
                | courts on a case by case basis until there's enough
                | precedent for a legal test. The question at the end of
                | the day should be about harm and whether an AI
                | algorithm was used as a run-around of a specific
                | person's copyright.
        
               | PuddleCheese wrote:
               | Good points.
               | 
                | I was actually just sitting in an AI Town Hall hosted
                | by the Concept Art Association which had 2 US Copyright
                | Lawyers who work at the USCO present, and their
                | thinking is along similar lines, currently.
               | 
               | Basically, like you specified, legal precedent needs to
               | be built up on a case by case basis, and harm can pretty
               | readily be demonstrated, at least anecdotally, especially
               | as copies are made during training of copyrighted work.
               | 
               | Unfortunately, historically, artists do not generally
               | enjoy the same legal representation or resources that
               | unionized industries with deeper pockets enjoy. It's
               | probably one of the reasons Stability.Ai are being so
               | considerate with their musical variant.
               | 
               | It would have been great if artists were asked before any
               | of this. I could see this going in such a different
               | direction if people were merely asked...
        
               | aqsalose wrote:
               | > But if I train my own neural network inside my skull
               | using some artist's style, that's ok?
               | 
                | How well can the network inside your skull manipulate
                | your limbs to reproduce good-quality work in some
                | artist's style?
               | 
                | Our current framework for thinking about "fair use",
                | "copyright", "trademark" and the like was thought into
                | existence during an era when the options for the
                | "network inside the skull" were to laboriously learn
                | the skill of drawing, or to learn how to use a machine
                | like a printing press or photocopier that produces
                | exact copies.
               | 
               | Availability of a machine that automates previously hand-
               | made things much more cheaply or is much more powerful
               | often requires rethinking those concepts.
               | 
               | If I copy a book putting ink on paper letter by letter
               | manually, that's ok, think of those monks in monasteries
               | who do that all the time. And Mr Gutenberg's machine just
               | makes that ink-on-paper process more efficient...
        
               | odessacubbage wrote:
               | unless you are in fact a living and breathing cyborg [in
               | which case, congratulations] , the wet work inside your
               | head is not analogous to the neural networks that are
               | producing these images in any but the most loosely poetic
               | sense.
        
               | CuriouslyC wrote:
               | You are romanticizing brains. Please stick to logical
               | arguments that can be empirically tested.
        
               | idiotsecant wrote:
               | No? The mechanisms are different but the underlying idea
               | is the same - identify important features and replicate
               | those features in new context. If an AI identifies those
               | features quickly or if I identify them over a lifetime
               | what's the difference? If I so that you might say my work
               | is derivative but you won't due me. Why is it different
               | if an AI does it?
        
               | DenisM wrote:
               | This comment answers your questions:
               | 
               | https://news.ycombinator.com/item?id=33425414
        
               | idiotsecant wrote:
               | Not particularly. Parent post is not concerned with or
               | making any claims to special knowledge of the internal
               | details of the modelling in the mind or in the machine,
               | only the output.
        
               | peoplefromibiza wrote:
               | > The mechanisms are different but the underlying idea is
               | the same
               | 
               | no.
               | 
                | that's like asking a person to say a number between 1
                | and 6, then asking the same question of a die, and
                | concluding that people and dice work the same.
               | 
               | > identify important features and replicate those
               | features in new context
               | 
               | untrue
               | 
               | if you think that that's what people do, obviously you
               | can conclude that AI and humans are similar.
               | 
                | But people don't identify features. People first of all
                | learn how to replicate - _mechanically_ - the strokes,
                | using the same tools as the original artists, until
                | they are able to do it; most of the time people fail
                | and reiterate the process until they find something
                | they are actually very good at, and _only after that_
                | do the good ones develop their own style, based either
                | on some artistic style or some artistic meaning.
               | 
                | But the first difference we learn here is that humans
                | can fail to replicate something and still become
                | renowned artists.
               | 
               | An AI cannot do that.
               | 
               | Not on its own.
               | 
               | For example, many probably already know, but Michelangelo
               | was a sculptor.
               | 
               | He was proficient as a painter too, but painting wasn't
               | his strongest skill.
               | 
               | So artists, first of all, are creators, not mere
               | replicators, in many different forms, they are not good
               | at everything in the same way, but their knowledge
               | percolates in other fields related to theirs: if you need
               | to make preparatory drawings for a sculpture, you need to
               | be good at drawing and probably painting (lights,
               | shadows, mood, expressions, are all fundamental for a
               | good sculpture)
               | 
               | Secondly, the features artists derive from other art
               | pieces are not the technical ones, those needed to make
               | an exact replica of the original, but those that make it
               | special.
               | 
               | For example, in the case of Michelangelo, the Pieta has
               | some features that an AI would surely miss.
               | 
                | First of all, the way he shaped the marble was unheard
                | of; it doesn't mean much if you don't contextualize the
                | work and immerse it in the historical period in which
                | it was created.
                | 
                | An AI could think that Michelangelo and Canova were
                | contemporaries, while they were separated by 3
                | centuries, which makes a lot of difference in practice
                | and in spirit.
               | 
                | But more importantly, Michelangelo's Pieta is out of
                | proportion: he could not make the two figures to the
                | correct scale, proving that even a genius like him
                | could not easily create a faithful reproduction of two
                | adults, one in the lap of the other, with the tools of
                | the 16th century.
               | 
                | The Virgin Mary is very, very young, which was at odds
                | with her role as a grieving mother, and, most important
                | of all, the Christ figure is not suffering, because
                | Michelangelo did not want to depict death.
               | 
                | An AI would assume that those are all features of
                | Michelangelo's way of sculpting, but in reality it's
                | the result of a mix of the complexity of the work, the
                | time when it was created, the quality and technology of
                | the tools used and the artist's intentions, which makes
                | the work unique and, ultimately, irreproducible.
               | 
                | If you use an AI to reproduce Michelangelo, everybody
                | would notice, because it's literally something a
                | complete noob or someone with very bad taste would do.
                | 
                | So to hide the difference, you would have to copy the
                | works of lesser-known artists, making it even more
                | unethical.
        
               | idiotsecant wrote:
                | respectfully, you're raising a whole lot of arguments
                | here that have nothing to do with any point I was
                | raising, and that don't seem to be moving this
                | discussion forward in any significant way. The point of
                | this subthread was a user saying the following:
               | 
               | >But if I train my own neural network inside my skull
               | using some artist's style, that's ok?
               | 
                | This post and others use a lot of flowery language to
                | point out that we train artificial neural networks and
                | real neural networks in different ways. OK, great. I
                | don't think anyone is saying that's not true. What I
                | _am_ saying is that it's irrelevant.
               | 
                | If I am an exceptional imitator of the style of Jackson
                | Pollock and I make a bunch of paintings that are very
                | much in that style but clearly not his work, I'm not going
               | to be sued. My work will be labeled, rightfully so, as
               | derivative but I have the right to sell it because it's
               | not the same thing. Is that somehow more acceptable
               | because I can only do it slowly and at a low volume? What
               | if I start an institute whose sole purpose is training
               | others to make Jackson Pollock-like paintings? What if I
               | skip the people and make a machine that makes a similar
               | quality of paintings with a similarly derivative style?
               | Is that somehow immoral / illegal? Why?
               | 
                | There's a whole lot of hand-wavey logic going on in
                | this thread about context and "the work" and special
                | human magic that only humans can possibly do and that
                | somehow makes it immoral for an AI to do it. I have yet
                | to see a simple, succinct argument for why that is the
                | case.
        
               | peoplefromibiza wrote:
               | > This post and others uses a lot of flowery language to
               | point out that we train artificial neural networks and
               | real neural networks in different ways. OK, great. I
               | don't think anyone is saying that's not true. What I am
               | saying is that it's irrelevant
               | 
                | Maybe my language was too lofty.
               | 
                | The point is: you don't train "your artificial
                | intelligence", because you're not an artificial
                | intelligence. You train your whole self, which is a
                | system, a very complex system.
               | 
               | So you can think in terms of "I don't like death, I don't
               | want to display death"
               | 
               | You can learn how to paint using your feet, if you have
               | no hands.
               | 
               | You can be blind and still paint and enjoy it!
               | 
               | An AI cannot think of "not displaying death" in someone's
               | face, not even if you command it to do it, because it
               | doesn't mean anything, out of context.
               | 
               | > Jackson Pollock
               | 
               | Jackson Pollock is the classic example to explain the
               | concept: of course you can make the same paintings
               | Jackson Pollock made.
               | 
               | But you'll never be Jackson Pollock, because that trick
               | works only the first time, if you are a pioneer.
               | 
                | If you create something that looks like Pollock,
                | everybody will tell you "oh... it reminds me of Jackson
                | Pollock..." and no one will say "HOW ORIGINAL!"
               | 
               | Like no one can ever be Armstrong again, land on the Moon
               | and say "A small step for man (etc etc)"
               | 
               | Pollock happened, you can of course copy Pollock, but
               | nobody copies Pollock not because it's hard, but because
               | it's cheap AF
               | 
               | So it's the premise that is wrong: you are not training,
               | you are learning.
               | 
               | They are very different concepts.
               | 
                | AIs (if we want to call them "intelligent") are
                | currently just very complex copy machines trained on
                | copyrighted material.
               | 
               | Remove the copyrighted material and their output would be
               | much less than unimpressive (probably a mix of very
               | boring and very ugly).
               | 
               | Remove the ability to watch copyrighted material from
               | people and some of them will come up with an original
               | piece of art.
               | 
               | It happened many times throughout history.
        
               | idiotsecant wrote:
               | You're typing a lot in these posts but literally every
               | point you're making here is orthogonal to the actual
               | discussion, which is why utilizing the end product of
               | exposing an AI to copyrighted material and exposing a
               | human to copyrighted material are morally distinct.
        
           | comfysocks wrote:
            | But haven't we seen examples of generative art that are
            | substantially similar to original artwork, and examples
            | where AI regurgitates blocks of art (with watermarks!)?
        
           | quickcheque wrote:
           | If you are willing to throw out the moral reason, then the
           | legal reason is just an empty rule.
        
             | bombcar wrote:
             | There are many legal reasons without moral force behind
             | them beyond "we need to agree on one way or the other" -
             | such as which side of the road to drive on.
        
               | quickcheque wrote:
               | In your example, both sides are equally acceptable and we
               | just pick one. How does this apply to the present case?
        
               | bombcar wrote:
               | We made a decision years ago around copyright (we've
               | modified it since but the general concept is "promote the
               | arts by letting artists have reproduction rights for a
               | time"). We could change that in various ways, if we
               | wanted to, even removing copyright entirely for "machine-
               | readable computer code" and leave protections to trade
               | secrets. Even if you argue "no copyright at all is
               | immoral" or "infinite copyright is immoral" it's hard to
               | argue that "exactly author's life + 50 years is the only
               | moral option".
               | 
                | Switching the rules on people _during the game_ is what
                | annoys/angers people, and is basically what these AIs
                | have done (because they've introduced a new player at
                | low effort).
        
           | dwringer wrote:
           | > if an artist spends years learning to paint in the style of
           | Michelangelo is that immoral?
           | 
            | I'd say that artist has gained a lot by studying
            | Michelangelo, including an appreciation for what
            | Michelangelo himself accomplished and insights into how to
            | paint as well or better, and maybe even how to teach that
            | to other people. I don't think we get those benefits from
            | AI models doing that (at least not yet!)
        
             | _manifold wrote:
             | I think we're kidding ourselves to think that some nebulous
             | concept of "the artist's journey" somehow informs the end
             | result in a way that is self-evident in human-produced
             | digital art. Just as with electric signals in the "brain in
             | a vat" thought experiment, with digital art it's pixels. If
             | an algorithm can produce a set of pixels that is just as
             | subjectively good as a human artist, then nobody will be
             | able to tell - and most likely the average person just
             | won't care.
             | 
             | On the other hand, I would say that traditional mediums
             | (especially large format paintings) are relatively safe
             | from AI generation/automation - for now.
        
               | [deleted]
        
               | akiselev wrote:
                | _> On the other hand, I would say that traditional
                | mediums (especially large format paintings) are
                | relatively safe from AI generation/automation - for
                | now._
               | 
               | Why do you think that? I think large format paintings
               | might be in just as much danger.
               | 
               | There's a large industry of talented artists in China,
               | Vietnam, etc who copy famous artworks by hand for very
               | low prices. They're easily accessible online: you upload
               | an image and provide some stylistic details and the
               | artist does the hard work of turning the image into brush
               | strokes. It's not "automated" but I've already ordered
               | one 4'x2' AI generated painting in acrylic relief for
               | less than the cost of a 1'x1' from a local community
               | gallery. I put in quite a bit of work inpainting the
               | image to get what I want but it would have been
               | completely impossible to get what I want even six months
               | ago.
               | 
                | I've only ever purchased half a dozen artworks in my
                | life and they were all under a few hundred bucks, but
                | with this new tech it just doesn't make sense to buy an
                | artist's original work unless it's for charity. The AI
                | can do the creative work the way I want, and there are
                | plenty of artists who are excellent at the mechanical
                | translation (which still requires a lot of creativity,
                | mind).
        
               | dwringer wrote:
               | I don't think the artist's journey necessarily informs
               | the end result in some way - but I believe it can be an
               | important experience for the artist. Then again, artists
               | can still do this in the era of generative art - there's
               | just not much as much chance of being rewarded for it. If
               | this leads to fewer people wanting to explore art, then I
               | think we've lost something. But it's not clear to me
               | where things are headed I guess. This could be a huge
               | boon in letting people explore ways of expressing
               | themselves who otherwise lacked the artistic ability to
               | want to try.
        
           | shadowgovt wrote:
           | And perhaps more importantly regarding (1) than simple
           | regurgitation: code _does_ things. There 's a real risk that
           | if you just let Copilot emit output without understanding
           | what that output does, it'll do the wrong thing.
           | 
           | Art is in the eye of the beholder. If the output looks
           | correct as per what you're looking for, it _is_ correct.
           | There 's no additional layer of "Is it saying what I meant it
           | to say" that is relevant to anyone who isn't an art critic.
        
           | [deleted]
        
         | mensetmanusman wrote:
          | People often hold conflicting views; they don't like to
          | think about it though, because it can lead to cognitive
          | dissonance.
         | 
         | That's one reason why it is probably better to have a derived
         | world view than a contrived world view.
        
         | marcosdumay wrote:
         | > the majority opinion seems to be that training diffusion
         | models on copyrighted work is totally fine
         | 
         | Well, maybe they were all just downvoted into invisibility. But
         | up to your post, I have seen none.
        
         | an1sotropy wrote:
          | I think this is a great question, but I think answers should
          | rest on a slightly more detailed understanding of how
          | copyright actually works.
         | 
         | IANAL, but to a first-order approximation: _everything_ is
         | "copyrighted" [1]. The copyright is owned by someone/something.
         | The owner gets to set the terms of the licensing. The rare
         | things not under copyright may have been put explicitly into
         | the public domain (which actually takes some effort), or have
         | had their copyright expire (which takes quite a while; thanks
         | Disney).
         | 
         | So: this is really a question about fair use [2], and about
         | when the terms of licensing kick in, and it should be
         | understood and discussed as such. I don't think anyone who has
         | really thought about this is claiming that the models can't be
         | trained on (copyrighted) material; the consumption of the
         | material is not the problem, is it? The problem is that the
         | models: (1) sometimes recreate particular inputs or
         | identifiable parts of them (like the Getty watermark), or
         | recreate some essential characteristics of their inputs (like
         | possibly trademark-able stylistic elements), AND ALSO, (2) have
         | no way of attributing the output to the input.
         | 
          | Without being able to identify anything specific about the
          | input, it is impossible to know with certainty that the
          | output falls within fair use (e.g. because it was
          | sufficiently transformative), and it is impossible to know
          | how to implement the terms of licensing for things that
          | don't fall within fair use. There's just no getting around
          | that with the current crop of models.
         | 
          | The legal minefield is not from (1) or (2), but from (1)+(2),
          | at the moment of redistribution, monetized or not. Even if
          | Copilot was only trained on non-reciprocal licenses (BSD,
          | MIT), _there are very likely still licensing terms of use_,
          | which may include identifying the original copyright owner.
          | Reciprocal licenses like GPL have more involved licensing
          | terms, _but that is not the problem_: the problem is failure
          | to identify the original licensing terms. We should not use
          | these models as an opportunity to make an issue about GPL or
          | its authors, or about the business model of companies like
          | Getty; both rest on copyright, and come to our attention
          | because of licensing.
         | 
          | Sorry about the rant. As for your question: I think it may
          | be as simple as: to what extent are readers here the
          | producers of inputs to the ML models, versus consumers of
          | outputs. It gets personal for coders when models violate
          | licensing terms of FOSS code, but it feels fun/empowering to
          | wield the models to make images that we'd otherwise be
          | unable to access. From my rant above you can tell that
          | whether it's for code or images, I think the whole thing is
          | an IP disaster.
         | 
         | [1] https://en.wikipedia.org/wiki/Berne_Convention
         | 
         | [2] https://en.wikipedia.org/wiki/Fair_use
        
         | aliqot wrote:
          | Your argument is tantamount to expecting Michael Bay to have
          | never seen a Scorsese film or derived influence from it.
        
           | pvaldes wrote:
            | Is Michael Bay cloning and copyrighting entire scenes of
            | Scorsese's films as entirely his own?
        
           | pessimizer wrote:
            | Pretending it's not clearly more complicated than that
            | will not convince anyone, it will make them feel
            | condescended to. While Scorsese is Scorsese, a deep
            | learning model is not Michael Bay.
        
             | aliqot wrote:
             | I used that comparison on purpose. Michael Bay leans on a
             | lot of computer driven technology to shoot movies inspired
             | by other more traditional directors. The comparison is
             | direct, if you feel condescended to, I did not intend that.
        
         | com2kid wrote:
         | > In contrast when talking about training code generation
         | models there are multiple comments mentioning this is not ok if
         | licenses weren't respected.
         | 
         | I think one of the differences is that people are seeing non-
         | trivial amounts of copyrighted code being output by AI models.
         | 
          | If a 262,144 pixel (512x512) image has a few 2x2 squares
          | copied directly, you can't tell.
         | 
         | If a 300 line source file has 20 lines copied directly from a
         | copyrighted source, well, that is more blatant.
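          | 
          | Back-of-the-envelope (assuming the 262,144-pixel image is
          | 512x512 and "a few" means three 2x2 squares):
          | 
          |     copied_pixels = 3 * (2 * 2)         # 12 pixels
          |     print(copied_pixels / (512 * 512))  # ~0.005% of the image
          |     print(20 / 300)                     # ~6.7% of the file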
        
           | andybak wrote:
           | I personally feel the bar for copyrighting code should be
           | considerably higher than 20 lines.
        
             | com2kid wrote:
             | > I personally feel the bar for copyrighting code should be
             | considerably higher than 20 lines.
             | 
             | That highly depends on the lines of code.
             | 
             | One of my (now abandoned) open source react components
             | essentially does some smarter-than-it-probably-should state
             | management in just a handful of LOCs. At least a few
             | hundred people found the clever solution I came up with
             | useful enough to integrate into their own projects.
             | 
              | I've seen a talented graphics programmer hand-optimize
              | routines to gain significant speed boosts, speed boosts
              | that helped save non-trivial amounts of system resources.
             | 
             | And where do you draw the line? That same gfx programmer
             | optimized maybe a dozen functions, each less than 20 lines,
             | but all quite independent of each other. The sum total of
             | his work gave us a huge performance boost over everyone
             | else in the field at the time.
             | 
             | And of course you also have super terse languages like APL,
             | where non-trivial algorithms can easily be implemented in
             | 20 LOC.
             | 
             | But let's move to another medium, the written word, also
             | one of the less controversial aspects of copyright
             | (ignoring the USA's penchant for indefinite extension of
             | copyright)
             | 
             | Start with poems, plenty of artistically significant poems
             | that come in under 20 lines, deserving of copyright for
             | sure.
             | 
             | https://tinhouse.com/miracles-by-lucy-corin/
             | 
             | There is a short story, around 22 lines.
             | 
             | The problem is, it is complicated, which is why these are
             | the types of things that get litigated all the time.
             | 
              | Heck, as a profession we cannot even agree on what a
              | line of code is. A LOC in Java is, IMHO, worth less than
              | a LOC in JavaScript, and if you jump to embedded C, wow
              | that is super terse, unless you count the thousands of
              | lines of #defines describing pinouts and such, but domain
              | knowledge is needed to know that those aren't "real"
              | lines of code.
        
         | nico wrote:
         | Yup. There's also this, FTA:
         | 
         | > the original images themselves aren't stored in the Stable
         | Diffusion model, with over 100 terabytes of images used to
         | create a tiny 4 GB model
         | 
         | Is jpeg compression transformative then? Should a compressed
         | image of something not be copyrightable because "it doesn't
         | store" the "real image"? How about compressed video? Where do
         | we draw the line?
        
           | esperent wrote:
           | > Where do we draw the line?
           | 
           | This is what we have courts and legislation for. I expect
           | there's existing legislation here about what constitutes a
           | different work versus an exact copy but it may need some
           | updates for AI.
        
           | marmada wrote:
            | By that logic all art should be copyrighted, since our
            | brain stores a highly compressed version of everything
            | we've seen.
           | 
           | I wager 100 TB => 4GB is different from JPEG compression and
           | more similar to what happens in our brains. "Neural
           | compression" so to speak
        
             | notamy wrote:
             | > By that case all art should be copyrighted since our
             | brain stores a highly compressed version of everything
             | we've seen.
             | 
             | Good thing this is already the case!
             | https://en.wikipedia.org/wiki/Berne_Convention
             | 
             | > The Berne Convention formally mandated several aspects of
             | modern copyright law; it introduced the concept that a
             | copyright exists the moment a work is "fixed", rather than
             | requiring registration. It also enforces a requirement that
             | countries recognize copyrights held by the citizens of all
             | other parties to the convention.
        
           | pulvinar wrote:
            | The difference is that JPEG _does_ store the real image,
            | at least to within the given tolerance (determined by the
            | compression factor). That image is as real as, say, an
            | image on film (also not exact, nor in "original" form).
            | 
            | With Stable Diffusion it's storing the style, but it can't
            | reproduce any single input image -- there aren't enough
            | bits [0] (except by luck, but that's really true for any
            | storage).
           | 
           | [0] https://en.wikipedia.org/wiki/Shannon-Hartley_theorem
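            | 
            | Back-of-the-envelope, assuming on the order of two billion
            | training images (roughly the scale of the LAION data these
            | models train on):
            | 
            |     model_bits = 4 * 8 * 10**9      # a 4 GB model
            |     num_images = 2 * 10**9          # assumed dataset size
            |     print(model_bits / num_images)  # ~16 bits per image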
        
             | mjburgess wrote:
             | The weights of a NN are just a compressed representation of
             | the training data, think lossy zip.
             | 
             | Rank all generated images by similarity to the training
             | data (etc.) and you can see what's stored.
             | 
              | The Shannon-Hartley theorem isn't relevant. A 4GB zip of
              | 100TB text data can exactly reproduce the initial 100TB
              | for some distributions of that initial dataset.
        
             | peoplefromibiza wrote:
             | > there aren't enough bits
             | 
              | you can create a compressed copy of a file containing
              | 100TB of the letter "A" in much less than 4GB
             | 
             | there could be enough bits in there to reproduce some of
             | the inputs.
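              | 
              | A minimal demo of that point (sizes are illustrative,
              | scaled down from the 100TB example):
              | 
              |     import zlib
              | 
              |     data = b"A" * 100_000_000        # 100 MB of "A"
              |     packed = zlib.compress(data, 9)  # ~100 KB
              |     assert zlib.decompress(packed) == data
              |     print(len(packed) / len(data))   # ~0.001 ratio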
        
               | CuriouslyC wrote:
               | What this analogy is saying is that if an image is
               | generic and derivative enough (or massively
               | overrepresented in the training data) it may be possible
               | to reconstruct a very close approximation from the model.
               | If the training data is unbiased, I question the validity
               | of copyright claims on an image that is sufficiently
               | derivative that it can be reproduced in this manner.
        
           | friend_and_foe wrote:
           | If you can't reproduce a quantitatively (not qualitatively)
           | similar likeness of an image then it is not just a compressed
           | image.
        
         | narcraft wrote:
         | It's actually fine in both cases
        
           | [deleted]
        
         | [deleted]
        
         | [deleted]
        
         | mirekrusin wrote:
         | Can humans learn from copyrighted work?
         | 
          | ps. I'm surprised music has not yet been leveled by AI;
          | can't wait for "dimmu borgir christmas carols" prompts.
        
           | itronitron wrote:
           | I don't think the law will get hammered down until the AI
           | models generate 'major recording artist' inspired songs.
           | Anyone claiming that artists can't claim 'style' as a defense
           | of AI generated works is in for a rude awakening.
        
           | CuriouslyC wrote:
           | Music is somewhat more challenging because you have a few
           | other problems that have to be solved in the pipeline, and
           | source separation is still not a 100% solved problem. Beyond
           | that, audio tagging beyond track level artist/genre is a lot
           | harder than image tagging.
           | 
           | Once you have separated sources for a training data set, it's
           | like text generation, except that instead of a single
           | function of sequence position, you have multiple correlated
           | functions of time. Text generation models can barely maintain
           | self consistency from paragraph to paragraph, which is a
           | sequence difference of maybe 200 tokens, now consider moving
           | from token position to a time variable, and adding the
           | requirement that multiple sequences retain coherency both
           | with each other, and with themselves over much larger
           | distances.
           | 
           | There are generative music models, but it's mostly stuff
           | that's been trained on midi files for a specific genre or
           | artist, and the output isn't that impressive.
           | 
           | I am also eagerly awaiting "hark the bloodied angel screams"
           | with blastbeats, shrieks and blistering tremolo guitar,
           | though.
        
         | charcircuit wrote:
          | Code generation models tend to regurgitate code from the
          | training data much more often than these image-based models
          | regurgitate images from theirs.
         | 
         | Code generation models need to have special handling for
         | checking if the generated code falls under copyright.
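          | 
          | One simple form that special handling could take (my own
          | sketch, not something any existing service is known to
          | ship): index n-line shingles of the training corpus and flag
          | generated code that reproduces any of them verbatim.
          | 
          |     def shingles(code, n=4):
          |         # n consecutive non-blank, stripped lines
          |         lines = [l.strip() for l in code.splitlines()
          |                  if l.strip()]
          |         return {tuple(lines[i:i + n])
          |                 for i in range(len(lines) - n + 1)}
          | 
          |     def flags_verbatim(generated, corpus_index, n=4):
          |         # True if any n consecutive lines match the corpus
          |         return bool(shingles(generated, n) & corpus_index)
          | 
          |     # toy stand-in for an indexed training corpus
          |     corpus_index = shingles("a = 1\nb = 2\nc = 3\n"
          |                             "d = 4\ne = 5\n")
          |     print(flags_verbatim("a = 1\nb = 2\nc = 3\nd = 4\n",
          |                          corpus_index))  # True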
        
         | friend_and_foe wrote:
         | It's not the training of the models that's the problem, it's
         | when the AI spits out "substantial portions of code", an
         | important term in the GPL and with regard to fair use law, that
         | are exact, sometimes even including exact comments from
         | specific codebases. This does violate the licenses.
         | 
          | There's something quantitative in code that you don't get in
          | drawings; in drawings the unique quality is purely
          | qualitative, so it is hard to demonstrate what exactly it
          | was that was ripped off. When you find your exact words
          | being returned by a code helper AI, it's hard to pretend
          | that it's not directly and plainly just copy-pasting code
          | snippets.
        
           | oneoff786 wrote:
           | It's a dumb hill to die on. Doomed to fall to a layer of
           | minor refactoring. If you say that's the problem, you'll have
           | nothing to stand on later.
        
             | andybak wrote:
              | On the contrary - the part that is problematic is the
              | verbatim reproduction of copyrighted code. If that's
              | fixed by a "minor refactoring" then there's no hill to
              | die on. It's not AI code generation per se that's
              | problematic - it's when it does things that break
              | current IP law.
             | 
              | If you want to debate expanding IP law - that's a
              | different discussion, and one I would be rather
              | sceptical about. I'd prefer that IP law in general was
              | rolled back - not forward.
        
             | robocat wrote:
             | > Doomed to fall to a layer of minor refactoring
             | 
             | Not quite - there is a reason why
             | https://wikipedia.org/wiki/Clean_room_design exists as a
             | concept to workaround copyright, and the same concept could
             | hold for ML models.
        
         | kelseyfrog wrote:
         | The vast majority of HN's patronage is tech aligned. Try asking
         | a community of artists the same question and see what their
         | responses are. The results might surprise you.
        
           | dQw4w9WgXcQ wrote:
           | What's funny is HN was majority OK with AirBNB "disrupting"
           | hotels and Uber/Lyft "disrupting" taxi services by bending
           | the rules and exploiting legal loopholes, but when AI starts
           | "disrupting" their artwork and code by bending the rules
           | suddenly disruption becomes a personal problem.
           | 
           | Disrupt onward I say. Humans learn and remix from prior
           | copyrighted work all the time using their brains (consciously
           | chosen or not). So long as the new work is distinguishable
           | enough to be unique there's nothing wrong with these new AI
           | creations.
        
         | Waterluvian wrote:
         | Because when I go to a live performance, watch a movie, browse
         | an art gallery, etc., I am training my brain on copyrighted
         | work. Every artist has done the same. No artist has developed
          | their style in a vacuum.
         | 
         | (See my other comment though, I am not sold on any of this
         | being right).
        
           | peoplefromibiza wrote:
           | > Because when I go to a live performance, watch a movie,
           | browse an art gallery, etc., I am training my brain on
           | copyrighted work
           | 
           | you paid for it and enjoyed it in the way the artist
           | intended.
           | 
           | > I am training my brain on copyrighted work.
           | 
           | too bad your brain alone is useless.
           | 
           | You need good hands to replicate much of the copyrighted
           | works you "trained" your brain on.
           | 
           | > No artist has developed their style in a vacuum.
           | 
           | some absolutely did, indeed.
           | 
            | Just look at the school of film and animation that artists
            | in the USSR developed while separated from the rest of the
            | world.
            | 
            | They are unique and completely different from what the west
            | was used to (Disney).
           | 
           | https://www.youtube.com/watch?v=2qWBZattl8s
           | 
           | https://www.youtube.com/watch?v=1qrWnS3ULPk
        
             | UncleEntity wrote:
              | > Just look at the school of film and animation that
              | artists in the USSR developed while separated from the
              | rest of the world.
              | 
              | So they were developed in a complete vacuum, including
              | traditional things like theater, poetry and storytelling?
        
         | [deleted]
        
         | melagonster wrote:
          | Maybe we could have a GPL code generator model, where all
          | generated code is under the GPL?
        
         | joshcryer wrote:
          | I always held the opinion that the GPL etc. were copyleft
          | licenses intended to make sure the code was free (free as in
          | freedom, not as in beer), and that in an ideal world you
          | wouldn't need the GPL or any licenses at all. At this point I
          | really don't care what Copilot or any of its derivatives
          | result in, and I think in the not-too-distant future we will
          | have machine-code-to-readable-code translation, which will
          | enable more freedom. That is, it really won't matter if the
          | code is
         | compiled or not, when you can "AI decompile" it into human
         | readable code, do your modifications, and then do with it what
         | you will.
         | 
         | From that view let the data be free.
        
           | pessimizer wrote:
           | As long as this copyright violation laundering isn't reserved
           | for the big guys, I'm happy for anything that confuses and
            | delegitimizes the concept of copyright. But it is reserved
            | for the big guys; you're going to get sued to death if you
            | copy any of their work.
        
           | ipaddr wrote:
            | Some languages need to compile, but others don't.
        
         | michaelmrose wrote:
          | I think both are inevitable and I'm ok with both. I think a
          | sticking point is that it's considered normal to make your own
          | art in the style of another but abnormal to copy code
          | verbatim. Art seems to be clearly the former, while there are
          | instances that probably stick in people's minds where Copilot
          | has produced verbatim examples.
          | 
          | Indeed, it seems like code will be vastly more prone to this
          | problem than art, because changing a single pixel is merely a
          | question of aesthetics, whereas code is constrained tightly by
          | the syntax of the language. With a much smaller space of
          | correct results, duplication is likely inevitable.
        
         | lolinder wrote:
         | Possibly controversial opinion: I think the biggest reason why
         | so many people hold conflicting views on this is because of who
         | the victim is in each case.
         | 
         | The loudest voices complaining they were directly hurt by
         | Copilot's training are open source maintainers. These are
         | exactly the kind of people who we love to root for on here.
         | They're the little guy involved in a labor of love, giving away
         | their work for free (with terms).
         | 
         | On the other hand, the highest-profile victims of Stable
         | Diffusion and DALL-E are Getty Images and company. They're in
         | most respects the opposite of open source maintainers: big
         | companies worth millions of dollars for doing comparatively
         | little work (primarily distributing photos other people took).
         | 
         | Because in the case of images the victim is most prominently
         | faceless corporations, I think our collective bias towards
         | "information wants to be free" shows through more clearly when
         | regarding DALL-E than it does with Copilot.
        
         | benlivengood wrote:
         | Humans used to learn to code from copyrighted works (textbooks)
         | without much reference to OSS or Free Software. Similarly,
         | teaching ML models to code from copyrighted works isn't going
         | to violate copyright more frequently than a human might; and
         | detecting exact copies should be pretty easy by comparing with
         | the corpus used to train it. Software houses already have to
         | worry about infringement of snippets, and things like Codex are
         | just one more potential source.
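          | 
          | A brute-force version of that comparison fits in a few lines
          | of stdlib Python (a sketch: it's quadratic, so a real scanner
          | would index the corpus instead of looping over it):
          | 
          |   from difflib import SequenceMatcher
          | 
          |   def longest_shared_run(generated, training_doc):
          |       # longest contiguous substring the two sources share
          |       m = SequenceMatcher(None, generated, training_doc,
          |                           autojunk=False)
          |       hit = m.find_longest_match(0, len(generated),
          |                                  0, len(training_doc))
          |       return generated[hit.a:hit.a + hit.size]
          | 
          |   def flag(generated, corpus, threshold=150):
          |       # 150 chars is an arbitrary cutoff for "exact copy"
          |       return [doc for doc in corpus
          |               if len(longest_shared_run(generated, doc))
          |               >= threshold]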
        
           | ipaddr wrote:
           | Those books were purchased and a license granted for such
           | use.
        
         | fnordpiglet wrote:
         | Because there's more programmers than artists in this hive
         | mind.
        
         | firefoxkekw wrote:
         | Cognitive dissonance, different persons, hypocrisy.
        
         | Adverblessly wrote:
         | With the caveat of "Strong opinions, weakly held", my personal
         | take is that creating artificial scarcity is inherently immoral
         | and thus copyright itself is immoral. Training AI on someone's
         | non-private work is then completely fine IMO.
         | 
         | Copyleft is a license that weakens copyright (and thus
         | inherently good :)), so using machine learning to weaken
         | copyleft by allowing you to copyright "clones" of copyleft code
         | is bad.
         | 
         | If I try to generalize here, the problem in both cases is only
         | if you produce copyrighted works, especially if you trained on
         | copyleft works. If instead both models would stipulate that all
         | produced works are copyleft I would be much more fine with it
         | (and I feel it would respect the license of the copyleft works
         | it was trained on, even if that may be legally shaky).
        
           | egypturnash wrote:
           | How do you propose to keep artists able to pay their bills
           | and live a decent life if you're completely cool with
           | training AIs on them?
           | 
           | Bonus points if you have any actionable scheme beyond waving
           | your hands and talking vaguely about "basic income".
           | 
           | Keep in mind that the life of a professional artist is
           | currently very perilous, anyone working freelance is
           | constantly battling against the social media giants' desire
           | to keep everyone scrolling their site forever. Words like
           | "patreon" and "commission" and links off-site to places an
           | artist can exchange their works for money are poison to The
           | Algorithm and _will_ be hidden.
           | 
            | And also, if I am reading this right, you have absolutely
            | _no_ problem with an image generator that's been trained on
            | copyrighted work producing work that's either copyrighted
            | _or_ copylefted? You are utterly fine with disregarding the
            | copyrights of the original artist and/or whoever they may
            | have assigned the copyright to as part of their contract?
        
         | pjonesdotca wrote:
         | Because the models are not creating a 1:1 replacement of the
         | original work.
         | 
         | As mentioned before "style" is not something subject to
         | copyright and the model creates a model of that style. The
         | process of finetuning a model generally means that one would
         | not want to recreate the original images as that would overfit
         | it and render it, essentially useless.
         | 
         | When it comes to code, there is a higher chance of getting a
         | one-to-one clone of the input as the options used in creating
         | an algorithm, or even a simple function are dramatically
         | reduced imo.
        
           | mensetmanusman wrote:
           | Code is hundreds to many thousands of lines. A line of code
           | is analogous to one color pixel in digital art.
        
             | melagonster wrote:
              | But one similar line of code is easier to find, because
              | Copilot works at the level of single lines and small
              | functions.
        
             | com2kid wrote:
             | Depends on which lines of code.
             | 
             | I have written projects where I'd consider a handful of
              | lines of code to be the central tenet of the entire
             | project that everything else is built up around. Copy those
             | lines and everything else is scaffolding that falls out
             | naturally from the development process.
        
           | pessimizer wrote:
           | > Because the models are not creating a 1:1 replacement of
           | the original work.
           | 
           | Since when did that become a requirement? If those are the
           | rules now, then cutting the final credits is good enough to
           | start torrenting movies.
           | 
            | > When it comes to code, there is a higher chance of getting
            | a one-to-one clone of the input, as the options used in
            | creating an algorithm, or even a simple function, are
            | dramatically reduced imo.
           | 
           | If you're going to consider each function within a larger
           | work as an individual work, that makes the 1:1 replacement
           | claim more dubious. In order to recognizably imitate a style,
           | one or more features of that style have to be recognizably
           | copied, although no single area of the illustration would
           | have to be. A function is a facet of a complete program just
           | like recognizable features of a style are facets of each work
           | an artist produces. If it helps, consider an artist's style
           | as their own personal utility library.
        
             | CuriouslyC wrote:
             | If I made a scene for scene remake of a Disney movie, with
             | an ugly woman for a princess and social
             | commentary/satirical injections, it would be defensible as
             | fair use in court.
        
           | leereeves wrote:
           | > when it comes to code, there is a higher chance of getting
           | a one-to-one clone of the input.
           | 
           | I'm not so sure. There's a generated image in the article
           | that I think looks enough like Wonder Woman to cause a
           | lawsuit.
           | 
           | That's just one of a handful of images in the article, and
           | doesn't seem to have been chosen for its similarity to Wonder
           | Woman.
        
           | dwringer wrote:
           | I think when it comes to art, less than one-to-one clones are
           | often still functionally equivalent in the mind of many
           | viewers. Stylistic and thematic content is often just as, if
           | not more, important than the exact composition. But currently
           | the law does agree that this is not copyrightable. And
           | sometimes independent artists profit and make a name for
           | themselves copping other styles, and I think that's great.
           | 
           | But could it be considered an intellectual and sociological
           | denial-of-service attack when it's scaled to the point where
           | a machine can crank out dozens of derivative works per
           | minute? I'm not sure this is a situation at all comparable to
           | human artists making derivative works. Those involve long
           | periods of concentration, focus, and reflection by a
           | conscious human agent to pull off, thus in some sense
            | furthering the intellectual development of humanity and
           | fostering a deeper appreciation for the source work. The
           | machine does none of that; it's sort of just a photocopier
           | one step removed in hyperspace, copying some of the artists'
           | abstractions instead of their brush strokes.
        
       | yuzuquat wrote:
       | There's already a lot of discussion on the legal/moral arguments
       | here so I'd like to comment on something more concrete.
       | 
       | As I understand it, an illustration for a magazine like the New
       | York Times might net anywhere from $100 to $1000 and require 8
        | hours of work. An illustrator working for someone like the New
        | York Times or Magic: The Gathering would likely consider this
        | the pinnacle of a stable job. Many, including my comic books
        | teacher (Kikuo Johnson), spent years moonlighting at a service
        | job before making it and publishing. With the advent of
        | generative AI art, it seems immoral from a fiduciary
        | responsibility point of view for an art director _not_ to train
        | an AI model on their illustrator's art before laying them off.
       | 
       | I have no doubt that generative AI will continue to push forward
       | irrespective of the legal arguments being made. I'm fearful for
       | the frictional unemployment that comes. Having come from art
       | school (and luckily working in tech), my illustration peers are
       | creative but such creativity doesn't necessarily translate into
        | creative use of tooling, business savviness, or marketing. All I
       | can say is that I empathize with a lot of the fear and hope for
       | the best.
        
         | t-writescode wrote:
         | Where do you get $100-1000 in cost for a major publication? The
         | artists I know easily charge in that range, and that's for
         | private commissions that they retain the rights over.
         | 
         | I could not imagine the custom work done that helps sell
         | publications, *especially MTG or similar* only net $1000 per
         | piece. Certainly they have some sort of royalties contract, at
         | the very least.
        
         | baq wrote:
          | It's quite a dystopian technology for sure, but maybe it isn't
          | all that bad. The artists _just_ (cough) have to pivot from
          | delivering images to delivering style sheets for generative
          | models...
        
         | blueblimp wrote:
          | > With the advent of generative AI art, it seems immoral from
          | a fiduciary responsibility point of view for an art director
          | not to train an AI model on their illustrator's art before
          | laying them off.
         | 
         | If they do that, the quality of illustrations they'll get will
         | be vastly worse (as can be seen from the comparisons in the
         | article). If they were willing to spend $1000 on an
         | illustration in the first place, I doubt they'd accept that
         | quality drop.
        
       | detritus wrote:
       | It managed that from just 32 example illustrations?
       | 
       | Irrespective of everything else here, I find that alone hugely
       | impressive.
       | 
       | ed - for 'hugely impressive' also insert 'slightly terrifying'.
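       | 
       | For a sense of how accessible this is: once someone has fine-
       | tuned a Stable Diffusion checkpoint on a small gallery
       | (DreamBooth-style), using it is a handful of lines with Hugging
       | Face diffusers. A sketch - the checkpoint folder and style token
       | below are hypothetical stand-ins:
       | 
       |   import torch
       |   from diffusers import StableDiffusionPipeline
       | 
       |   # folder produced by the fine-tuning run (hypothetical path)
       |   pipe = StableDiffusionPipeline.from_pretrained(
       |       "./dreambooth-output", torch_dtype=torch.float16,
       |   ).to("cuda")
       | 
       |   image = pipe("a portrait of a girl reading, "
       |                "in <trained-style> artstyle").images[0]
       |   image.save("out.png")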
        
       | mring33621 wrote:
       | this whole argument is bullshit
       | 
       | artists are no more a protected class than programmers
       | 
       | learn to use the automation to further your own goals/career
       | 
       | or move out of the way
       | 
       | At some point, some non-artist, non-director, non-screenwriter
       | will make a brilliant movie using these ML-assisted technologies.
        | After a few iterations of this success, most people will shut up
       | and learn to harness the benefits.
        
       | djoldman wrote:
       | It's interesting to think about all the controversy surrounding
       | machine-generated images in contrast with a scenario where the
       | same images were generated/drawn/created directly by a human.
       | 
       | In this specific scenario, what would the artist think if 1000
       | people started drawing in her style and released those images?
        
         | michaelt wrote:
          | Sometimes society established norms when doing a thing was
          | very rare and expensive, and the thing honestly barely needed
          | any control because it was so unusual.
         | 
         | If that thing becomes a lot cheaper and easier, and it starts
         | being done a lot more - perhaps we realise we actually need
         | different rules _even though the difference is purely
         | quantitative_.
         | 
         | One guy juggling chainsaws on main street - how entertaining!
         | No need to ban that. Dozens of chainsaw jugglers on every
         | street, 24/7, the issues are easy to imagine.
         | 
         | To say that "Oh, you were OK with one chainsaw-juggler
         | therefore you would be OK with 100,000" is IMHO a line of
         | argument that obscures more than it clarifies.
        
         | mdaEyebot wrote:
        
         | burkaman wrote:
         | It's ok to have different standards for humans and computers.
         | Even if you think that a machine learning model is conceptually
         | doing the same thing as a human artist, just a trillion times
         | faster and infinitely replicable, there's no reason we can't
         | say that it's ok for humans to do this, but not for computers.
         | Computers are not people. It's not unfair or unethical to put
         | an artificial limitation on an artificial object.
        
           | SkyBelow wrote:
           | >It's ok to have different standards for humans and
           | computers.
           | 
            | Is it? The computer is a tool doing it on behalf of a human,
            | so different rules for computers end up being different
            | rules for using different tools. Should the efficiency of a
            | tool be a factor in the limits we put on a human? Given that
            | humans can use the same tool with different levels of
            | efficiency, this also opens up the question of whether
            | different levels of skill with a tool warrant different
            | rules.
        
             | cezart wrote:
              | I can move on the street on my feet, or I can move on the
              | street at 240 km/h with the use of a tool called a sports
              | car. I think limits on the capacity of our tools are often
              | the whole difference between what is legal and what is
              | illegal.
        
             | burkaman wrote:
             | > Should the efficiency of a tool be a factor in the limits
             | we put on a human?
             | 
             | Yes, for example hitting someone with a real sword is
             | punished more harshly than hitting them with a plastic one.
             | Tweeting out blatant lies to your 100 million followers is
             | worse than tweeting out blatant lies to 2 followers.
             | Shining a laser pointer at a plane is only illegal if it's
             | strong enough to blind the pilot.
        
               | LastTrain wrote:
                | The question is whether we should have different
                | standards for computers and humans doing the exact same
                | thing - not different things. Your examples all have
                | different outcomes based on the tool.
        
               | burkaman wrote:
               | Do you think training a human artist on a body of work
               | has the same outcome as training Stable Diffusion on that
               | same body of work?
        
           | dymk wrote:
           | Seems like putting a limitation on what someone can do with
           | their own computer on their own time with their own money.
        
             | burkaman wrote:
             | Yes, sort of like disallowing someone to DDoS a server
             | using their own computer on their own time with their own
             | money.
             | 
             | This is not about one person playing around in private,
             | it's about thousands of people (potentially millions)
             | instantaneously generating art expressly intended to copy
             | someone's specific style and publicly releasing the
             | results.
        
               | googlryas wrote:
               | If you're ddosing someone's server you necessarily aren't
               | playing around in private.
        
               | asddubs wrote:
               | if you release the artwork you generate, you aren't
               | playing around in private either.
        
               | burkaman wrote:
               | If people were only playing around with Stable Diffusion
               | in private then this article wouldn't exist and we
               | wouldn't be having this conversation.
        
               | googlryas wrote:
               | Saying "here's an image I generated" is exactly the use
               | case of reddit. DDoSing reddit is not the use case of
               | reddit.
               | 
               | And "private" has more meanings than "secret/only
               | personally known". Posting something you generated on
               | Reddit for fun is private use.
        
               | burkaman wrote:
               | Tough to argue with that, all I can say is that your
               | usage of the word "private" does not agree with either
               | the dictionary definition or common usage.
        
               | googlryas wrote:
               | Look up the term with relation to copyright law. See
               | also: personal use, fair use.
               | 
               | e.g. (pdf) https://digitalcommons.sacredheart.edu/cgi/vie
               | wcontent.cgi?r...
        
               | burkaman wrote:
               | That link does not define the term, and I haven't been
               | able to find a legal definition of "private use". The
               | paper seems to be using the normal definition, though.
               | 
               | > It sounds controversial that a defense of private use
               | exists at all; after all, one usually buys a book for her
               | private use. This use may mean that one can make
               | photocopies of a legally possessed book, in order to read
               | it, for example, not only in the office, but also at
               | home. One may also loan the book to a friend.
               | 
               | I would be pretty surprised to find a legal definition
               | that says if you put something online just for fun it
               | doesn't count as copyright infringement.
               | 
               | I don't know why we're talking about this though, I'm not
               | a big fan of copyright but that's not what this is about.
               | It's not "you put my drawing on Reddit without my
               | permission", it's "you publicly released a tool on Reddit
               | that allows anyone to effortlessly create infinite
               | variations of my work in my name, please don't do that".
               | I don't care whether or not it's legal, I think it's
               | immoral to create a tool that could not exist without
               | ingesting someone's life's work and then ignore them when
               | they ask you not to do that.
        
             | WaxProlix wrote:
             | There are plenty of those limitations already, some even
             | having to do with things like IP and copyright.
        
               | cthalupa wrote:
               | I don't see any evidence that things being generated with
               | SD or similar somehow remove those same limitations
               | around IP and copyright. I am just as much hoping Disney
               | looks the other way when I make fanart of their IP
               | regardless of how it is made.
        
         | dagmx wrote:
         | You're missing the big qualifier: what if those other artists
         | started drawing in her style AND marketed it as her style?
        
           | randyrand wrote:
           | There's nothing atypical about that in the graphic design
           | world.
           | 
           | An artist does not own "their" style, and can even be one of
           | thousands of people drawing in that same style.
           | 
           | Art styles are not something people can own.
           | 
           | Imagine trying to own a genre of music...
        
       | godelski wrote:
       | I'm an AI researcher working on generative modeling, so I want to
       | note my bias upfront.
       | 
       | But I'm also a musician/artist and so I find some of these
       | conversations odd. The problem with them I see is that they are
       | oversimplified. To get better at drawing I often copy other
       | works. Or I'll play a piece exactly as intended. Then I get more
       | advanced and learn a style of someone I admire and appreciate.
       | Then after that comes my own flair.
       | 
        | So I ask, what is different between me doing it and a machine?
        | The specific images shown in this article suggest that Hollie
        | follows the same process as me and the machine. Her work is in
        | fact derivative. There's absolutely nothing wrong with that
        | either. They say a good artist copies but a great artist steals.
        | I don't think these generators are great artists, but they sure
        | are good ones.
       | 
       | I learn by looking at a lot of art and styles. I learn by
       | copying. I am able to be more efficient than the machine because
       | I can understand nuance and theory, but the machine is able to do
       | the former much better than I can. It can practice its drawings a
       | hundred or thousand times an hour.
       | 
        | Now, there is highly unethical stuff actually going on, and I
        | don't want the conversation to get distracted from it. Just
        | today a twitter account posted a demo of their work and actively
        | demonstrated that one can remove watermarks with their tool[0].
        | This is bold and borderline illegal (promoting theft). There are
        | also people presenting AI-generated work as human digital
        | paintings (we need to be honest about the tools we use).
        | Presenting work as created in a way it was not is unethical.
        | But there are other generative ethical concerns.
       | 
       | Now there are concerns about photographer's/artist's rights. If I
       | take someone else's work and post it as my own, that is straight
       | up theft. Even celebrities can't post photos of themselves that
       | were taken by others[1]. This gets muddled if I make minor
       | changes but it's been held up in court that the intention matters
       | and it needs to be clear that the changes were new in an artistic
       | manner and not a means of circumventing ownership rights. These
       | are some of the bigger issues we're running into.
       | 
       | A problem with these generative models is interpretation. How do
       | you know if the image you produced actually exists in the wild or
       | if it is new and unique? There's been papers that show that there
       | are privacy concerns[2] and that you can pull ground truth images
       | out of the generator. I'd argue that this question is easier to
       | answer the more explicit the density your model calculates.
       | Meaning that this is very hard for implicit density models (such
       | as GANs), moderately difficult for approximate density models
       | (such as diffusion and VAEs), but not too bad when pulling from
       | explicit density models (such as Autoregressive or Flow based
       | models).
       | 
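        | To make the explicit-density point concrete: a flow model gives
        | you an exact log p(x) for any image, so you can directly score
        | how plausible a generated sample is under the model and compare
        | it against training points. A toy single-layer affine flow (my
        | sketch; real flows stack many invertible layers, but the
        | likelihood bookkeeping is identical):
        | 
        |   import numpy as np
        | 
        |   rng = np.random.default_rng(0)
        |   D = 4                                  # toy "image" dimension
        |   scale = rng.uniform(0.5, 2.0, size=D)  # stand-ins for learned
        |   shift = rng.normal(size=D)             # parameters
        | 
        |   def log_prob(x):
        |       # z = (x - shift) / scale, base density N(0, I):
        |       # log p(x) = log p_base(z) + log |det dz/dx|
        |       z = (x - shift) / scale
        |       log_base = -0.5 * (z ** 2 + np.log(2 * np.pi)).sum()
        |       log_det = -np.log(scale).sum()
        |       return log_base + log_det
        | 
        |   typical = shift + scale * rng.normal(size=D)
        |   outlier = shift + scale * 8.0
        |   print(log_prob(typical), log_prob(outlier))  # second is lower
        | 
        | With a GAN you can't evaluate that quantity at all, and with
        | diffusion or VAEs you only get a bound, which is why the
        | memorization question is so much harder to answer there.
        | 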
        | This concern is implicit in articles such as this one, which
        | fail to actually quantify the problem: "How do we meaningfully
        | differentiate generated images from those made by real people?"
        | I'm a strong advocate for the study of explicit density models,
        | but a good chunk of the community is against me (they aren't
        | currently anywhere near as powerful, but there's nothing
        | theoretically in the way; I'd argue it's that few people are
        | researching and understanding these models, and there is a
        | higher barrier to entry). So I'd argue that the training methods
        | aren't the major concern, but what is actually produced.
       | While the generators learn in a similar fashion to me it is clear
       | that I'd get in trouble if I was passing off a fake Picasso as a
       | legitimate one. But it is also fine for me to paint something in
       | that same style as long as I'm honest about it.
       | 
       | The nuance here really matters and I think we need to not lose
       | sight of that. This is a complex topic and I would like to hear
       | other views. But I'm not interested in mic drops or unmovable
       | positions. I don't think anyone has the right answer here and to
       | solve it we must get a lot of different view points. So do you
       | agree or disagree with me? I especially want to hear from the
       | latter.
       | 
       | [0] https://twitter.com/ai_fast_track/status/1587475575479959559
       | 
       | [1] https://collenip.com/taylor-swift-entitled-say-
       | photographers...
       | 
       | [2] https://arxiv.org/abs/2107.06018
        
         | aqsalose wrote:
         | >But I'm also a musician/artist and so I find some of these
         | conversations odd. The problem with them I see is that they are
         | oversimplified. To get better at drawing I often copy other
         | works. Or I'll play a piece exactly as intended. Then I get
         | more advanced and learn a style of someone I admire and
         | appreciate. Then after that comes my own flair.
         | 
         | >So I ask, what is different between me doing it and a machine?
         | 
         | You are a human. If you practice art as a hobby you can feel
         | pleasure doing it, or you can get informal value out of the
         | practice (there is social value in showing and sharing hobbies
         | and works with friends). One could try to formalize that value
         | and make a profession out of it, get livelihood selling it.
         | 
         | When all that "machinery" to (learn to) produce artistic works
         | was sitting inside human skulls and difficult to train, the
         | benefits befell on the humans.
         | 
         | When it is a machine that can easily and cheaply automate ...
         | the benefits are due to the owner of machine.
         | 
          | Now, I don't personally know if the genie can be put back
          | _into_ the bottle with any legal framework that wouldn't be
          | monstrous in some other way. However, ethically it is quite
          | clear to me there is a possibility the artists / illustrators
          | are going to get a very bad deal out of this, which could be
          | ... a moral wrong. This would be a reason to think up a legal
          | and conceptual framework that tries to make it not ... as
          | wrong as it could be.
          | 
          | It could be that we end up with human art as a prestige good
          | (which it already is). That wouldn't be nice, because the
          | power-law dynamics of popularity benefit very few prestige
          | artists, and that could get worse. But could we end up with a
          | Wall-E world where there is no reason for anyone to learn to
          | draw at all well? When a kid asks "draw me a rabbit", they
          | won't ask any of the humans around when the machine can
          | produce a much prettier rabbit, immediately and tailored...?
        
       | vanadium1st wrote:
       | I am a graphic artist. In the recent months I've read dozens of
       | articles and threads like this. I still can't see what the big
       | deal is.
       | 
       | Graphic artists don't have trade secrets or unique impossible
       | techniques. If someone can see your picture, he can copy its
       | style. It becomes publicly available as soon as you publish it.
       | For the vast majority of graphic styles, if one author can do it,
       | then hundreds of his colleagues can do it too, often just as
       | well. If one author becomes popular and expensive - then his less
       | popular colleagues can copy his style for cheaper. The market for
       | this is enormous and this was the case for probably hundreds of
       | years.
       | 
        | I personally am a non-brand artist like that. More often than
        | not, clients come to me with a reference not from my portfolio
        | and ask me to produce something similar. I will do it, probably
        | five times cheaper than the artist or studio who did the
        | original. It may not be exactly as good, but it won't be five
        | times worse.
       | 
        | Some clients are happy to pay extra for the name brand, and
        | will. Some want to spend less, and will settle for a non-brand
        | copy.
       | 
       | The clients that are willing to pay for the name brand will still
       | be there for the same reason they are now, and the existence of
       | Stable Diffusion changes nothing to them. And the ones that just
       | want the cheap copy would never contact the big name artist in
       | the first place. The copy market will shift, but the big name
       | artist doesn't even have to be aware of it.
        
         | [deleted]
        
         | xena wrote:
         | The main thing people are worried about is the fact that food
         | costs money and you need to eat in order to live. People are
         | afraid that their illustration jobs are at risk because of AI
         | illustrations being _good enough_.
        
           | Matumio wrote:
            | I dislike the drift of this "need to work for food" phrase
            | that I'm hearing so often. Job automation never reduced our
            | ability to produce food. The harvest is not in any danger,
            | not even if we suddenly produce twice the art with the same
            | amount of work.
        
             | diputsmonro wrote:
             | It's not about food production, it's about capitalism.
             | 
             | If artists could simply ask for food and be given it from
             | the overflowing cornucopia, then yes, this wouldn't matter
             | and in fact would be a net benefit.
             | 
             | Unfortunately though, artists must sell their art to get
             | money, then exchange that money for food. Now, if a robot
             | produces free art that's almost as good, most of those
             | buyers won't pay those artists anymore, and the artists
             | will starve (or stop being artists).
             | 
             | I do believe that job automation will quickly eliminate
             | scarcity for basic life necessities, while also displacing
             | more and more jobs in our economy, and that therefore UBI
             | or some equivalent will be imminently necessary - but
             | that's a much larger topic
        
             | egypturnash wrote:
             | Hi. I'm a professional artist. I have a lot of friends who
             | are also professional artists.
             | 
             | Most of us live in cities, and go to the store to buy food.
             | We have specialized in being good at making images, which
             | we trade for money, which we can trade for other goods and
             | services such as "food" or "entertainment" or "rent".
             | _Some_ of us are doing well enough to have room for a
             | garden, and the time to tend it. This is by no means the
             | majority.
             | 
             | How many of your peers would know one end of a modern
             | combine harvester from the other? Probably very few, if you
             | live in the city.
        
           | bombcar wrote:
           | I wonder how many illustrators already lost their jobs once
           | clipart took off starting in the 90s.
           | 
            | Many newsletters/newspapers of a bygone era had an
            | artist/doodler to do little sketches, which got replaced by
            | clipart in many cases.
        
           | passion__desire wrote:
            | If that is the main issue, why are artists hiding behind the
            | pique of "when I create art, it is full of soul, experience,
            | blood and sweat"? Just say that you need a way to make money
            | and these models are replacing you.
        
             | hypertele-Xii wrote:
             | Because that argument is as old as the written word, and it
             | works exactly as well now as every time in the past (not at
             | all).
             | 
             | Every job being automated requires its own "we're unique"
             | pitch to get any pity points.
        
           | vanadium1st wrote:
            | This is the hundredth time in history that technology has
            | progressed and artists have had to learn new ways to make
            | money. I learned art in art school - none of the jobs that
            | my art teachers had in their youth are relevant right now.
            | The tools, the pricing, the workflow, the clients' requests
            | and expectations are all different. You can keep some of the
            | skill, but you still need to learn and adapt to the new
            | reality. Sometimes it takes 5, sometimes 15 years, but the
            | job of the artist is always transforming.
           | 
            | The illustrator from the article is probably drawing in
            | Procreate on an iPad, probably doing her promotion on social
            | media and doing business with her clients remotely. All of
            | those are recent technological advancements that appeared in
            | her lifetime and completely outperformed the previous ways
            | of doing commercial illustration. Illustrators who worked
            | before that had to learn those new ways or lose their jobs.
            | This has happened dozens of times in history. Now it is the
            | turn of current illustrators to adapt.
        
             | diputsmonro wrote:
              | But none of that addresses fundamental changes to the
              | market structure. How can a beginner artist possibly get
              | traction in a marketplace where people only pay for
              | premium names, or pay dirt-cheap prices for beautiful art
              | that's 90% of what they want? You said yourself most
              | clients are willing to settle if the price is right, and
              | you can't really beat free.
        
         | geoelectric wrote:
         | I see a pretty clear analogy to the various industries that
         | felt threatened by home video and audio recording improving to
         | the point of being able to make copies quickly and without
         | significant degradation--particularly when disc ripping at 20x+
         | became a thing and time wasn't even a barrier.
         | 
         | A person who can clone a style and crank out illustrations at
         | human speed is a very different thing than an automated process
         | that can do it immediately on request, in minutes or seconds.
         | If nothing else, the latter is a huge efficiency gain for being
         | able to self-serve, as it would allow an editor to trial
         | different illustrative approaches without all the back and
         | forth contracting out to a human would require.
         | 
         | Personally, I think what this will do most is convince artists
         | not to put galleries of their work suitable for training
         | online.
         | 
          | The Redditor identified in the article posted a new comic art
          | model based on James Daly III (this is mentioned at the end of
          | the article, with a link). The Redditor's comment in that post
          | implies Daly was chosen specifically because they had a gallery
          | of easily consumable training images all in one place.
         | 
         | I have no idea what the minimum effort would be to make the
         | images less useful for training, but I foresee a lot of
         | obnoxious watermarks in our future as people try to do so.
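          | 
          | The lowest-effort version is just tiling translucent text
          | across every gallery image before upload. I have no idea how
          | much it actually hurts training, but this is the kind of thing
          | I mean (a Pillow sketch; filenames are placeholders):
          | 
          |   from PIL import Image, ImageDraw, ImageFont
          | 
          |   def watermark(path, text="(c) artist - no ML training",
          |                 step=180):
          |       img = Image.open(path).convert("RGBA")
          |       layer = Image.new("RGBA", img.size, (0, 0, 0, 0))
          |       draw = ImageDraw.Draw(layer)
          |       font = ImageFont.load_default()  # swap in a real TTF
          |       for y in range(0, img.height, step):
          |           for x in range(0, img.width, step):
          |               draw.text((x, y), text,
          |                         fill=(255, 255, 255, 96), font=font)
          |       return Image.alpha_composite(img, layer).convert("RGB")
          | 
          |   watermark("gallery_piece.png").save("gallery_marked.jpg")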
        
       | mwlp wrote:
       | In sophomore year of college, I came across a Japanese
       | illustrator who sold sticker packs for messenger apps. I liked
       | their art style a lot, to the point where I themed my entire
       | computer setup (wallpaper/terminal/editor) around their work.
       | Some friends and I worked on a project for an information
       | retrieval class and their art continued to be a centerpiece for
       | the website's theme, and we even jokingly snuck an image into our
       | final paper.
       | 
       | As much as I loved their style, it seemed they rarely put out new
        | content. A few weeks ago I trained an SD embedding on their
        | work -- it was the coolest thing ever. I thought back to the
        | class project, how I "stole" one of their artworks to use as the
       | favicon. "Nobody outside this class will see this", I thought.
       | But a pixel art anime girl wearing headphones was perfect
       | branding for the app. "If I ever decide to publish this project,
       | maybe I'll commission the artist for an official logo", I thought
       | at the time. Now I wonder if I'd just use the SD embedding...
        
         | KaoruAoiShiho wrote:
         | Curious who the illustrator is?
        
           | mwlp wrote:
           | https://twitter.com/mhug_ https://mhug.tumblr.com/
        
         | kwhitefoot wrote:
         | And after all those words you still haven't credited the
         | illustrator.
        
           | mwlp wrote:
           | I'm sure with a bit of scrolling you can find it. Just a few
           | more words.
        
       | throwaway12245 wrote:
        | The thing I don't get is that people post on the public internet
        | and then somehow expect that information to be protected from
        | uses they don't want. If you don't want your data misused, don't
        | post it on the internet.
        
         | ketzo wrote:
         | Well, this is a weird argument. There are lots of things that
         | are posted on the internet that are illegal to re-post
         | elsewhere without attribution/payment. That's, like... all of
         | internet copyright law.
        
         | ljf wrote:
         | Where did she post the images she created?
        
       | gdubs wrote:
       | Feels like there's a difference between artists as drops in a
       | vast ocean of training data, vs explicitly creating a model on
       | one person's work. And I think the conversation would benefit
       | from not conflating the two.
       | 
        | I'm sort of a copyright 'moderate' I suppose. I think people
        | should get paid for their work, and trying to just rip off a
        | single person's style (and I'm not at all saying this particular
        | example was nefarious in intent) just feels gross. But I also
        | think with too much baggage we stifle new ideas and innovations.
       | 
        | However, I also think that most of the conversation around large
        | models like StableDiffusion lacks an understanding of how these
        | models actually work. There's this misconception that they're a
        | kind of 'collage machine'. The contribution of individual
        | artists to these base models is like drops in a vast, vast
        | ocean. [edit: I repeat myself; recovering from Covid, forgive
        | me.] They take this incredibly large set of digitized human
        | creativity, and in turn we all get this amazing tool: a
        | synthesizer for imagination.
       | 
       | Anyway, just my personal opinion. It's become a very 'us vs
       | them', lines-in-the-sand argument these days, and it'd be great
       | if the conversation could be less heated and more philosophical.
        
         | ROTMetro wrote:
         | Hey, we want to use machines to steal people's creative work,
         | take away their jobs in the future, and create more algorithm
         | generated garbage since it worked so well for news sites and
         | promoting videos/posts/social media. We also want to pretend
         | that computer generated graphics are created by an 'artist' for
         | the sole purpose of being able to assign copyright to our
         | machine works, but be able to ignore artists and copyright in
         | every other way.
         | 
         | Why do people seem to have a strong opinion against this? It's
         | just stealing the most intimate thing humans create (art) so
         | that we can create soulless algorithmic versions of it for our
         | own uses because evil artists won't let us just do what we want
         | with their works and won't let us profit on their work. Artists
         | are jerks.
        
           | shadowgovt wrote:
           | I can't quite follow what you're saying due to your choice of
           | subjects. "It's just stealing the most important thing humans
           | create so we can create soulless algorithmic versions of it
           | for our own uses..." Who is the 'we' there if not 'humans?' I
           | can tell you the machines do not care one way or the other
           | about art.
           | 
            | What we're looking at is the ability for humans _who have
            | not trained to be artists_ to create something similar to
            | trained output. That's what technology does: augment human
            | capacity to do something. It's what it always does.
        
           | yieldcrv wrote:
           | > We also want to pretend that computer generated graphics
           | are created by an 'artist' for the sole purpose of being able
           | to assign copyright to our machine works
           | 
            | The current IP rights system allows for work for hire and
            | assignments. Reframing it as "I commissioned an AI"
            | obliterates any debate about who owns what. When you
            | commission a work, the rights to what is created are
            | assigned to you; for this machine model, just add that
            | clarity in the TOS and it's a done deal.
        
           | leereeves wrote:
           | Creating art similar to another artist's work isn't stealing
           | when humans do it, why would it be stealing when machines do
           | it?
           | 
           | (Unless the result is too similar, like the Wonder Woman
           | image generated by Stable Diffusion shown in the article, in
           | which case it's "stealing" whether created by human or
           | machine.)
        
             | manholio wrote:
             | Because machines are not humans, and shouldn't enjoy the
             | benefits of fair use.
             | 
             | Copyright law exists to balance the interests of the people
             | that make up the society, not some abstract caricatural
             | embodiment in algorithmic form.
        
               | shadowgovt wrote:
               | Should humans using machines enjoy the benefit of fair
               | use?
        
               | manholio wrote:
               | Sure, if you build your own model, train it on
               | copyrighted works, then use it to create art; or if you
               | use someone else's model which properly license its
               | copyrighted sources, and use that to create art. In both
               | cases your output is a new creative work sufficiently
               | different from its parents to not constitute infringement
               | and enjoy its own copyright protection.
               | 
                | However, the model creator/distributor will never be
                | able to claim fair use on the model itself, which is
                | chock-full of unlicensed material and can only exist if
                | trained on such material. It's not really a subtle or
                | particularly difficult legal distinction; in traditional
                | terms it's like an artistic collage (model output) vs. a
                | database of copyrighted works (trained model).
               | 
               | The trained model _is not_ a sufficiently different work
               | that stands on its own, in fact it is just a compressed
               | algorithmic representation of the works used to train it,
               | legally speaking _it is_ those works.
        
               | shadowgovt wrote:
               | In what way is the model chock full of unlicensed
               | material? It was trained on unlicensed material, but I
               | don't think you're ever going to be able to find a
               | forensic auditor who can tease individual works out of
               | the weights in a model.
               | 
               | You can't reasonably assert that a model encodes
               | individual works of copyrighted material in any way
               | meaningful for copyright. Not without a change to the
               | law.
        
               | manholio wrote:
               | Obfuscation is not a valid defense against copyright
               | infringement. If my database contains full encrypted
               | copies of unlicensed works and I distribute keys to my
               | customers for parts of those works, no forensic auditor
               | will ever prove the full extent of my infringement
               | without learning the full keyset. But I would argue that
               | reproduction of even a single instance of an unlicensed
               | non trivial fragment of a copyrighted work would taint
               | that entire database.
               | 
                | In the same way, in AI, a crafted prompt that creates
                | striking similarities to a well-known work, like this
                | example here, is sufficient proof that the model embeds
                | unlicensed works; using a copyrighted work for training
                | models is just another form of commercial exploitation
                | that the original author should be compensated for.
        
         | fragmede wrote:
         | Misconception? I mean, if it talks like one and quacks like a
         | collage machine, that's kinda what it is. It's just using an
         | infinite (well, 4 GiB) magazine reel to cut out from.
        
         | soraki_soladead wrote:
         | > Anyway, just my personal opinion. It's become a very 'us vs
         | them', lines-in-the-sand argument these days, and it'd be great
         | if the conversation could be less heated and more
         | philosophical.
         | 
         | It's heated and less philosophical because many artists are
         | worried about their livelihood while a multi-billion dollar
         | company is working towards making them obsolete often using
         | their own work.
         | 
         | I don't understand the confusion people have towards this
         | issue.
        
           | lbotos wrote:
           | > many artists are worried about their livelihood while a
           | multi-billion dollar company is working towards making them
           | obsolete often using their own work.
           | 
            | You do realize that most commercial art is "art for hire",
            | and in this very story some of the examples trained on were
            | not owned by the artist.
           | 
           | Multi-billion dollar companies _already do this_. Hire artist
           | to draw Corporate IP. Corp owns and can do whatever they want
           | with it. Maybe they hire that artist again. Maybe they hire
           | someone else and they share the work in a reference folder.
        
             | allturtles wrote:
              | But now they (maybe) get to skip the "hire artist" step.
             | Which, from the point-of-view of the artists, is the most
             | important part.
        
             | PuppyTailWags wrote:
             | The distinction I think is that the multi-billion dollar
             | company working towards making artists obsolete by using
             | their own work didn't pay any of these artists for that IP.
              | At least when hiring an artist to draw corporate IP, the
              | artist has to relinquish their rights to that work
              | explicitly and is paid for those rights.
        
           | fragmede wrote:
           | Are we talking about Stable Diffusion or GitHub Copilot here?
        
             | soraki_soladead wrote:
             | I'm not convinced there's a difference.
        
         | yieldcrv wrote:
         | > Feels like there's a difference between artists as drops in a
         | vast ocean of training data, vs explicitly creating a model on
         | one person's work.
         | 
         | I don't.
         | 
          | Stability.ai did the big training sets, and it is coincidence
          | that it remembers the names and categories accurately.
          | 
          | If you want to use this tool in a more fine-tuned way, you add
          | these modules with more accurate naming.
          | 
          | I would be for some way to compensate artists, like if the use
          | of a module gave them a royalty, but I don't think it is an
          | ethical, legal, or social norm to enforce. If it happens in an
          | uncircumventable way, I would be for it. If it doesn't, I'm
          | for that too.
        
         | MichaelCollins wrote:
         | Screw one person? A great offense. Screw _lots of people at
         | once_? A great innovation.
        
           | leereeves wrote:
           | Great innovations do tend to "screw lots of people". Cars put
           | most buggy makers out of business. Light bulbs, candlemakers.
           | The computer, human computers. The internet, journalists.
           | 
           | AI promises to be a particularly disruptive innovation, but I
           | don't think anyone can or will stop it. Instead, we should
           | think about improving society so that the promise of AI
           | benefits everyone and not simply a select few.
        
           | meowface wrote:
           | You have a point, but it's also how art works in general.
           | Most artists draw inspiration from a conglomeration of
           | hundreds/thousands of other artists. If an artist draws
           | inspiration from one and only one artist, they're just a
           | plagiarist.
           | 
           | (Not necessarily saying that clears up any potential legal or
           | ethical issue with generative image models training on
           | artists' work.)
        
             | itronitron wrote:
             | Why do you think artists draw inspiration from other
             | artists?
        
               | axus wrote:
               | An artist who's never seen art is like an AI with no
               | training data
        
             | lancesells wrote:
             | But these aren't artists and this isn't art. This is fast
             | food. This is content for articles and technology for
              | startups to become the middleman for yet another thing.
             | 
             | There will be art coming from these tools at some point but
             | right now it's creatively bankrupt illustrations driven by
              | curiosity and lots of creatively bankrupt people.
        
               | shafoshaf wrote:
                | Yet. As "real" artists start using these generations to
                | leapfrog their projects, at some point how is this
                | different from someone studying a style and producing
                | something in that style, with the addition of actual
                | "art"?
        
               | itronitron wrote:
               | That's like expecting a chef to take inspiration from a
               | fast food menu. It might happen but it's more likely that
               | the chef already knows what makes fast food taste good
               | and can develop a new menu based on their fundamental
               | knowledge.
        
               | t-writescode wrote:
               | The fundamental knowledge they gained by looking at
               | thousands upon thousands of previous bits of information;
               | and, after today, that collection of insights is going to
               | include auto-generated artwork, as well.
        
         | mordae wrote:
         | In the end, you are not going to be paid for drawing everything
         | all over again. You are going to get paid for devising a unique
         | style to match the creative vision of the movie, series, book
         | or whatever and for tweaking the machine outputs.
         | 
          | There is a chance that style will become copyrightable. And
          | eventually, common style banks will appear.
        
         | manholio wrote:
         | It doesn't really matter: the AI model is a compressed
         | representation of copyrighted works processed by a generic,
         | non-artistic algorithm specifically engineered to extract the
         | artistic features of such works.
         | 
          | Conceptually, it's very similar to running lossy JPEG over a
          | single copyrighted work versus a giant folder of works from
          | different artists, then shipping that compressed collection to
          | your customers so they can cut and paste sections of those
          | works into their collages. The output your customers create
          | with this tool might be sufficiently original to warrant
          | copyright protection and fair use, but your algorithmic
          | distribution of the original works (the model) is a clear
          | copyright violation; it doesn't matter whether it affects a
          | single artist or thousands.
        
           | johnthewise wrote:
           | How do you copyright a style though?
        
           | lbotos wrote:
           | > your algorithmic distribution of the original works (the
           | model) is a clear copyright violation
           | 
           | Depends on if it's considered "transformative" enough. If
           | Google can cache thumbnails for search, how is an AI model
           | not a "search database+algorithm?"
           | 
           | That said, I do expect that SD will be shut down for
           | copyright infringement at some point because you are right,
           | the model does have a bunch of copyrighted material in it and
           | Disney's lawyers will probably come swifter and more prepared
           | than the defense.
        
             | manholio wrote:
             | As a general rule, the fair use thumbnails enjoy is very
             | limited, only in certain jurisdictions, and only for very
             | specific use-cases.
             | 
              | A universal art-production machine that can compete in the
              | marketplace with the original artist - and indeed crush
              | them on productivity and price - certainly does not qualify
              | as fair use.
        
           | treis wrote:
           | >AI model is a compressed representation of copyrighted works
           | 
            | But it's not, because you can't get a copy of the
            | copyrighted works back out.
        
         | simion314 wrote:
          | I think painters also copy each other's styles, or make copies
          | of popular works; artists don't like this, but I have not seen
          | people demanding "do not use this art style because XYZ
          | created it".
          | 
          | For me, just a regular person, art is not something a soulless
          | machine can generate, or a monkey with a camera; the intent
          | and the mind are important. So a guy can ask an AI to create a
          | malformed portrait of some subject in some artist's style, but
          | the value of the art is in the subject and theme, not the
          | style, IMO. Like if you ask for a portrait of X stepping on
          | Y's dead body, dressed in Z and with M, N, P, the artistic
          | value is in your idea behind this and not in the pixels.
          | 
          | I remember similar complaints when digital art started to get
          | popular: that it is not real art, that you just move pixels
          | around.
        
       | i_like_apis wrote:
        | Hollie Mengert's style is (nice, but) not at all original.
        | There are thousands of cartoons that look exactly like this. You
        | could never even tell that a given piece in this style is
        | "hers".
       | 
       | But, even if she did have a distinctive style, there is nothing
       | illegal or unethical about learning that style and producing your
       | own similar artwork, whether you call it "in the style of", or
       | not.
        
         | TMWNN wrote:
          | >Hollie Mengert's style is (nice, but) not at all original.
          | There are thousands of cartoons that look exactly like this.
          | You could never even tell that a given piece in this style is
          | "hers".
         | 
         | Good point. As the article itself says, her style is explicitly
         | based on Disney. This isn't like, say, Cubism, a style
         | (intentionally) very different from the contemporary norm, and
         | nothing like anything that had come before.
        
         | rosywoozlechan wrote:
         | > not at all original.
         | 
         | > But, even if she did have a distinctive style
         | 
         | This is some rough criticism of an artist who's dedicated their
         | profession and life to art.
         | 
         | > is (nice, but)
         | 
         | This made it all OK!
        
           | stavros wrote:
           | I don't know this specific artist, but the fact that someone
           | dedicated their profession and life to art doesn't mean
           | they're necessarily good.
        
             | rosywoozlechan wrote:
              | I didn't claim that someone would be good just because of
              | the amount of time they've spent; my claim is that it's
              | harsh criticism to make when they have.
             | 
             | For example if you just started strength training, have
             | been at it for just a few weeks and someone said "you're
             | not very strong" that's not harsh criticism because you
             | just started, but if you've been strength training for
              | 10-20 years and someone said "you're not very strong", that
              | criticism hits different, right?
              | 
              | Does what I am saying make sense now?
             | 
              | This artist will probably read these HN comments, and I'm
              | struck by how cruel the comments are to someone who's just
              | out there creating wonderful content and didn't ask for any
              | of this. So many of you here do not care about how she
              | feels.
              | 
              | I also call into question these HN commenters' competence
              | at critiquing art.
        
               | stavros wrote:
               | Ah, yes, that clarifies it, thank you. I agree.
        
               | i_like_apis wrote:
               | It's not an art critique.
               | 
               | If she reads it, hopefully she understands the lens of
               | the language. Her style is good, and evidently assiduous.
               | 
               | Everything said remains relevant.
        
             | bombcar wrote:
             | Or they could dedicate years and be good (or great!) and
             | still not unique.
             | 
             | In fact, the better they get the less likely they are to be
             | unique as more people will imitate their style.
        
             | jfk13 wrote:
             | Does whether they're good or not actually have any bearing
             | on the legality or morality of copying their work?
        
           | i_like_apis wrote:
           | It's not rough. Her art is nice. And I've also seen a lot of
           | other work that looks like it. Years dedicated to the
           | profession or not.
           | 
           | Maybe she's closer to the origin of the style than most?
           | 
           | Anyway I spent all my life writing code and I'm not upset
           | when someone uses patterns I came up with, especially if they
           | credit me.
        
       | Waterluvian wrote:
       | My dad once told me about when he was a kid, some Christmas
       | specials would only be played a few times on CBC right before
       | Christmas. If you missed them, you had to wait a year. He said
       | that became a tradition.
       | 
       | When he went to share his tradition with us, he was a bit
       | bothered by the idea of "get it on VHS!" He actually protested
       | the idea. "I didn't want you watching it at any time. It should
       | be a tradition." And indeed it became a tradition for my brothers
        | and me.
       | 
       | My kids are 3 and 5 and "Nightmare Before Christmas" was a huge
       | hit last Halloween. My kids wanted to watch it 3 or 4 times that
       | Halloween week. It's available for streaming so that was easy.
       | But then they watched it in November. And January. And a bunch in
       | the spring and summer. ...And they never asked for it this year.
       | 
       | There's a quality bestowed through scarcity. When you have it all
       | the time, on-demand, there is no scarcity. When you can generate
       | art instantly, in any style, of anything, I think it stops being
       | exciting. For example, I don't think the concept of "ahahaha
       | look, it's the Avengers but as Muppets!" or "It's me, but as a
        | Simpsons character!" will have any amusement value by next year.
       | 
       | Maybe I should be asking Gene Roddenberry but I'll ask all of
       | you: do you think there's something lost by eliminating scarcity?
        
         | gopalv wrote:
         | > If you missed them, you had to wait a year.
         | 
         | This is literally the intro section to Chuck Klosterman's
         | Nineties - about the number of people who watched Seinfeld and
         | those that missed it, just missed it.
         | 
         | > There's a quality bestowed through scarcity.
         | 
         | There's a depth to scarcity, but your kids are going to watch
         | way more things growing up than you did and you have no idea
         | what that means for them.
         | 
         | I get that you aren't able to pass on that "value the thing you
         | hold, not the two you can get later" sort of commitment to a
         | single thing.
         | 
         | But that's valuing the only true scarce thing left in my life
         | at least - my time and attention.
         | 
         | I remember feeling the "post-scarcity" thing going to a library
         | for the first time in the US.
         | 
         | I used to buy books and read them, before I came to the US.
         | Each book was a hard choice on whether to spend money on it or
         | buy something else.
         | 
         | I now decide to read through a book based on whether I want to
         | spend the time, not the money.
         | 
         | Unless I'm going to be immortal, there's no fixing that
         | scarcity (and I get how young people/kids don't get that part
         | at all - their life is endless from where they are).
         | 
         | Maybe do a little better so that my 70s aren't spent in a bed,
         | but in a park outdoors reading.
        
           | Waterluvian wrote:
           | I really appreciate your response because it argues with my
           | point and yet resonates with me. I think there's truth to
           | what you're saying. Perhaps that "tradition" I'm seeking is,
           | in a way, obsolete, and my kids are faced with an entirely
           | different (and maybe better) problem: what to spend their
           | time on?
           | 
           | I'll add one thing this made me think of: when I was a kid,
           | my family computer and Game Boy had a fixed number of games.
           | Getting a new game was a big deal. I took it seriously and I
           | scraped the fun out of every game thoroughly. Once I got to
           | college and got some money, I ended up with the "I have ten
           | thousand Steam games and I don't feel motivated to stick with
           | any of them" problem.
           | 
           | Not totally sure it's related, but I thought of that.
        
             | blooalien wrote:
             | > ... "I ended up with the "I have ten thousand Steam games
             | and I don't feel motivated to stick with any of them"
             | problem."
             | 
              | For my own personal instance of that _exact_ problem, I've
              | chosen to install a limited number of games at a time which
              | fit into my current set of interests in gaming, and choose
              | from among those when I feel like playing a game, until
              | each has been thoroughly explored and enjoyed to its
              | fullest, before uninstalling one or two fully-explored
              | games to replace with another. Works for me ... YMMV.
        
             | Karrot_Kream wrote:
             | My younger sibling and I are quite far apart age wise and
             | we grew up on other sides of this divide. When I grew up,
             | families would buy VHS tapes for kids or record shows and
             | kids were known to watch the same shows over and over
             | again. Everyone in my cohort has memories of watching that
             | one movie or that one show so many times that their parents
             | were utterly sick of it (and of messing up a recording of
             | the last episode of their favorite show.) My sibling grew
             | up in the age of rental DVDs and Youtube and the idea of
             | running out of content to them was laughable.
             | 
             | But as children we were both limited by our mental
             | development, time, and attention spans. My sibling just
             | watched the same Youtube video over and over just like I
             | watched the same VHS tape over and over again. As much as
             | material conditions change, humans tend to stay the same.
        
         | jobigoud wrote:
         | Your story reminds me of a newspaper we have in France "La
         | bougie du sapeur", it's only published on February 29th.
         | 
         | https://en.wikipedia.org/wiki/La_Bougie_du_Sapeur
        
         | mirekrusin wrote:
          | I'm sure there was something nice about hunting down a big
          | animal once a year and having a nice feast by the fire next to
          | the cave.
        
         | fragmede wrote:
         | Absolutely. But where scarcity has been abused for domination
         | and power over others, as in food or oil or money scarcity,
         | it's not a good thing and whatever we can do to bring those
         | systems down is, in my book, good.
        
         | allturtles wrote:
         | In addition to oversaturation, on-demand access to things can
         | also lead to a sort of paralysis: the fact that I have access
         | to shows on demand all the time means I _almost never actually
         | watch them_. After all I can always do something else (read a
         | book, browse HN) and watch that thing some other time. Whereas
         | I will watch live TV (Jeopardy, sports), because I have to turn
         | on the TV at a particular time to see it.
         | 
         | Your line of thought "rhymes" in an interesting way with Matt
         | Levine's column today about how illiquidity sometimes has value
         | [0]:
         | 
         | > Another, funnier sort of financial innovation is about
         | subtracting liquidity. If you can buy and sell something
         | whenever you want at a clearly observable market price, that is
         | efficient, sure, but it can also be annoying. Consider the
         | following financial product:
         | 
          | > 1. You give me the password to your brokerage account.
          | 
          | > 2. I change it.
          | 
          | > 3. You can't look at your brokerage account for one year,
          | because you don't have the password.
          | 
          | > 4. At the end of the year, I give you back your password and
          | you pay me $5.
         | 
         | > Is this a good product? For me, sure, I got $5 for like one
         | minute of work.[1] For you, I would argue, it's also pretty
         | good. For one thing, you avoid the stress of looking at your
         | brokerage account all the time and worrying when it goes down.
         | For another thing, you avoid the popular temptation of bad
         | market timing: You can't panic and sell stocks after they fall,
         | or get greedy and buy more after they rise, because I have your
         | password.
         | 
         | [0]: https://news.bloomberglaw.com/banking-law/matt-levines-
         | money...
        
         | bsenftner wrote:
          | Perhaps, finally, the persistent overemphasis on the expensive
          | yet empty production value will give way to _content composed
          | of moral catch-22s and the richly described personalities
          | enduring such situations_, what is traditionally considered
          | _story first_ filmmaking. No gloss is necessary when the story
          | is strong.
        
         | TeMPOraL wrote:
         | Others answered you in many interesting ways, so I'll just try
         | to address this:
         | 
          | > _Maybe I should be asking Gene Roddenberry but I'll ask all
          | of you: do you think there's something lost by eliminating
          | scarcity?_
         | 
          | I'd like to think Gene would tell you, there's always going to
          | be a scarcity of _something_ - the adventure is in chasing it,
          | but it's much more enjoyable when you aren't _coerced into it_
          | by the need to feed and shelter yourself and your close ones.
          | Basically, scarcity of food, healthcare, housing and
          | opportunities is bad. Scarcity of unique experiences and
          | relationships is enjoyable, and it's always going to be there.
         | 
          | I empathize with how you feel about your kids, but I think
          | this is less about scarcity per se, and more about your kids
          | not wanting to participate in _your_ tradition. It's possible
          | they still might - after all, these kinds of traditions aren't
          | really about a movie, but about the time spent together.
          | Perhaps when they grow up, they'll voluntarily abstain from
          | rewatching that movie on their own, outside Halloween.
         | 
          | Personal and much more childish example: in my more naive
          | years, I established a tradition, and roped a few friends into
          | it, of watching "V for Vendetta" on the 5th of November. That
          | was way back when the "oh fuck, the Internet is here" meme was
          | funny, and not the stuff of nightmares. It was a completely
          | random and voluntary tradition that lasted a couple of years
          | before naturally dissolving. My point being, the availability
          | of the movie had little to do with it - it was all about the
          | voluntary choice of a group of people to do something together.
        
         | AlgorithmicTime wrote:
         | Certainly something is lost by removing scarcity... at the same
         | time, once that scarcity is gone, it can't be restored short of
         | civilizational collapse. There's no way to put the streaming
         | genie back in the bottle, nor the Stable Diffusion genie.
        
           | PuddleCheese wrote:
            | Let's not conflate ethical inputs with full-out "undo" panic.
            | 
            | I don't see many people saying that we need to "un-release"
            | anything.
            | 
            | What I do see is a desire for ethical considerations for
            | vulnerable parties to be part of the discussion. I don't
            | think consent is too large a barrier, especially since the
            | music-focused variant of these tools currently has the
            | developers walking on eggshells to avoid aggravating people
            | with deeper pockets.
           | 
           | Instead what I see is the borderline criminalization of the
           | people who were forced into the role of gatekeeping to be
           | able to support themselves in the span of a few months
           | because someone said "We can!" and apparently didn't watch
           | Jurassic Park.
        
           | Waterluvian wrote:
           | Yeah, I think you're 100% right. There is no "should we do
           | this?" question here. Like every technology ever invented,
           | it's now here.
        
             | bugfix-66 wrote:
             | It's a bit like saying we can't stop music piracy, now that
             | Napster exists.
             | 
             |  _Napster was a peer-to-peer file sharing application. It
             | originally launched on June 1, 1999, with an emphasis on
             | digital audio file distribution. Audio songs shared on the
             | service were typically encoded in the MP3 format. It was
             | founded by Shawn Fanning, Sean Parker, and Hugo Saez
             | Contreras. As the software became popular, the company ran
             | into legal difficulties over copyright infringement. It
             | ceased operations in 2001 after losing a wave of lawsuits
             | and filed for bankruptcy in June 2002._
             | 
             | Use of the output of systems like Copilot or Stable
             | Diffusion becomes a violation of copyright. The weight
             | tensors are illegal to possess, just like it's illegal to
             | possess leaked Intel source code.
             | 
             | If you use the art in your product, on your website, etc.,
             | you risk legal action.
             | 
             | The companies that train these systems can't distribute
             | them without risking legal action. So they won't do it.
             | It's expensive to train these models.
             | 
             | It will always exist in the black-market underground, but
             | the civilized world makes it illegal.
             | 
             | That's where this is going, I hope. Best case scenario.
        
             | PuddleCheese wrote:
              | Just because it's here doesn't mean it can't be modified
              | going forward, or that we have to surrender any and all
              | ability to regulate anything. Fatalism can be attractive if
              | you don't want to think about regulation, though, I
              | suppose.
        
               | operator-name wrote:
                | Regulation can't trump individual morals, and is driven
                | by collective ethics. Deepfakes still exist and are
                | actively being made and developed, even though many
                | platforms have regulated them. Ignoring the difficulty of
                | passing a hotly debated premise, regulating would only
                | limit the actions of those who align with the regulation
                | - as the article demonstrates, even if Google is cautious
                | about releasing its DreamBooth model and specifics, it
                | doesn't take much for someone to replicate it, ignoring
                | (or ignorant of) such concerns.
               | 
               | This is obviously something that we're going to have to
               | collectively figure out - the technology is here and the
               | technology is still being developed. Either we adapt our
               | thinking or consider it taboo. Anything else (such as
               | restricting usage to a "trusted subset") is just delaying
               | the inevitable.
        
         | lelandfe wrote:
         | I have some friends whose young children are addicted to
         | watching videos of _other_ kids opening presents on YouTube.
         | 
         | I've been trying to articulate why I reacted negatively to that
         | concept, and I think you nailed it.
        
           | Waterluvian wrote:
           | I have that exact problem here. I want my kids to have
           | latitude and freedom with what they watch and do. But their
           | child brains need adult moderation. They literally want every
           | day to be Christmas, and they do not understand how that
           | might "burn out" their synapses on things like the act of
           | waiting, being excited, and eventually getting new gifts.
        
         | kixiQu wrote:
         | https://en.wikipedia.org/wiki/The_Work_of_Art_in_the_Age_of_...
         | 
         | Valuable material in the disciplines of aesthetics and
         | philosophy of technology on this.
        
         | rm_-rf_slash wrote:
         | > do you think there's something lost by eliminating scarcity?
         | 
         | What a silly question. How does an extension of an already
         | bottomless internet of content that you couldn't consume over
         | the course of hundreds of lifetimes come anywhere remotely
         | close to ending food insecurity?
        
         | furyofantares wrote:
         | > do you think there's something lost by eliminating scarcity?
         | 
         | Certainly whatever these models can output just from text plus
         | a bit of work/skill on the prompt plus a bit of selection will
         | not be very valuable.
         | 
         | But it will see applications that don't need something very
          | valuable, and where otherwise a piece of art would not be
         | affordable. I run a daily word puzzle game, and every daily
         | puzzle comes with an image that helps tie the theme together.
         | This is a low value piece of art that's nearly free for me to
         | make, which wouldn't exist otherwise.
         | 
         | But the other thing is that people will make scarce and
         | valuable things using generated art. Right now we're seeing
         | tons of art where the direct output of a model is posted after
         | less than an hour of work on prompts and selection by a person.
         | 
         | But when more people start combining all of these tools,
         | spending dozens of hours on an image that's made out of
         | components that are generated using AI tools, and other
         | components that are manually edited or created, I think we're
         | going to see a lot of brand new art and brand new skills.
         | 
         | There's always going to be people who pour their creativity,
         | skills, and time into something. I can't wait to see what
         | people can make when they do that while incorporating all these
         | new tools that are being created into their process.
        
           | melagonster wrote:
            | I hope we can distinguish it from other normal AI artworks,
            | but I fear I don't have this ability :(
        
       | 6stringmerc wrote:
       | How completely unsurprising - a person who grew up in a country
       | most famous for chop yo dolla gives two fucks about the context
       | of achieving his personal desires at the expense of those
       | actually willing to do the work. It's his own goddamn narrative.
       | I hope Disney bankrupts him as a consequence to his ignorant and
       | selfish actions.
        
       | Waterluvian wrote:
       | Does anyone else feel painfully unsure of their opinion on all of
       | this? I honestly don't recall the last major thing I've felt this
       | completely uncertain about. All my opinions generally lean in one
       | direction at least a little bit.
       | 
       | On one hand, I think it might be ridiculous for an artist to get
       | to "own" a "style" of art. In the first example on this page,
       | none of the art looks plagiarized. It looks like what every
       | artist has done: been inspired by or borrowed ideas from other
       | sources.
       | 
       | But on the other hand, if left unchecked, this will further harm
       | our creative industries. We're going to be starving out our
       | artists because robots can generate art _far_ more easily than
       | they can. If this continues, it disincentivizes anyone from
       | trying the already very uphill battle of making a living by
       | creating art. One might say, "capitalism, baby! we don't need
       | those artists, because we have AI and look at what it can do in
       | seconds!" But I think that even if AI can "discover" new art
       | styles and trends, there's something lost by humans not doing it.
       | 
       | I don't think AI will be able to replace human creativity for
       | discovering new paradigms as fast as it will replace human
       | application of existing paradigms. And by doing the latter really
       | well with AI, we're killing our ability to do the former. We'll
       | end up with a sterile art trajectory.
       | 
       | I guess my uncertainty is: something about this _feels wrong_ and
       | yet I cannot point to any one moral/ethical thing that feels
       | wrong about it.
        
         | shadowgovt wrote:
         | Oh, agreed. This is Brave New World territory.
         | 
         | My suggestion is to accept it as a thing that will be here and
         | tune our expectations appropriately. Because if it is made
         | illegal, it will be one of those things that's illegal-but-
         | omnipresent, like sharing music on BitTorrent... The Western
         | copyright regime doesn't blanket the world, and the advantages
         | of these tools are so big that places it doesn't reach will
         | just use them. The fact that this story is about a Nigerian
         | engineer in Canada using software developed in San Francisco
         | running on some computers in Northern Virginia to ape the
         | artistic style of an artist from LA, none of these parties
         | having ever met each other, indicates how empty the bottle is
         | the genie used to live in.
        
         | PuddleCheese wrote:
         | Artists don't generally try to OWN styles or prevent others
         | from using them. They are the result of years of training,
         | adapting, etc. They put their own spin on it. It's effectively
         | a brand of that artist, and it's generally beholden to a sort
         | of "honor" code that you'll likely get called out for breaching
         | if you're flagrantly trying to pass it off as your own.
         | 
          | The core issue illustrating this is when people use an
          | artist's name in a prompt. If these models did not exist and
          | you wanted something in that style, you would likely be
          | reaching out to that individual, or asking someone else to try
          | to emulate them. In that instance, the emulation is generally
          | accountable. In these instances, there is no accountability
          | for the algorithm, as it's not making creative choices, to say
          | nothing of moral or ethical ones. Those choices were made by
          | the individuals with venture-capital backing, using research
          | loopholes to fund the legally questionable scraping of this
          | data in the first place, which in some instances violates the
          | EULAs of the sites it was scraped from.
          | 
          | At the end of the day, these models, styles, etc. would not
          | exist without the artists, who had no say in this
          | "democratizing of art".
        
         | MathYouF wrote:
         | > I don't think AI will be able to replace human creativity for
         | discovering new paradigms as fast as it will replace human
         | application of existing paradigms. And by doing the latter
         | really well with AI, we're killing our ability to do the
         | former. We'll end up with a sterile art trajectory.
         | 
         | This may actually end up making the few artists creative enough
         | to create bold new art styles even more valuable, if they can
         | basically not release their art and hide it behind a model.
         | 
         | Though I guess anyone with access to that model's output could
         | then just generate a few samples and train on those, so maybe
         | not.
        
         | mdaEyebot wrote:
        
         | Invictus0 wrote:
         | It will change the landscape of the art market, but it won't
         | destroy it. Digital art will be less valuable, canvases and
         | sculptures will become more valuable.
        
           | MathYouF wrote:
           | The cost of materials and transportation and time using the
           | expensive CNC machine will be the major costs of sculpture.
           | Generating the same quality 3D models is at the very furthest
           | 18 months away. And animating and rigging the models and
           | giving them auto-generated RL policies will surely come very
           | quickly next.
        
       | fleddr wrote:
       | I'm thinking a little bit of empathy doesn't hurt. Reason from
       | Hollie's point of view. She didn't ask for this and was working
       | on cool stuff:
       | 
       | https://holliemengert.com/
       | 
       | Next, somebody grabs her work (copyrighted by the clients she
       | works for), without permission. Then goes on to try and create an
       | AI version of her style. When confronted, the guy's like: "meh,
       | ah well".
       | 
       | Doesn't matter if it's legal or not, it's careless and plain
       | rude. Meanwhile, Hollie is quite cool-headed and reasonable about
       | it. Not aggressive, not threatening to sue, just expressing
       | civilized dislike, which is as reasonable as it gets.
       | 
       | Next, she gets to see her name on the orange site, reading things
       | like "style is bad and too generic", a wide series of cold-
       | hearted legal arguments and "get out of the way of progress".
       | 
       | How wonderful. Maybe consider that there's a human being on the
       | other end? Here she is:
       | 
       | https://www.youtube.com/watch?v=XWiwZLJVwi4
       | 
        | A kind and creative soul, who apparently is now worth 2 hours
        | of GPU time.
       | 
       | I too believe AI art is inevitable and cannot be stopped at this
       | point. Doesn't mean we have to be so ruthless about it.
        
       | makz wrote:
        | Call me a Luddite, but this has gone too far, please stop it
       | already.
        
         | irrational wrote:
         | How can it be stopped? Seriously, is there any way to put the
         | open source genie back into the bottle?
        
       | swordsmith wrote:
       | So with these generative models running rampant, what's to even
       | motivate aspiring artists to develop and hone their craft, if
       | their years of work can be copied so easily?
       | 
        | Maybe it doesn't practically matter, because some art-style
        | generative model can be developed and fed into the diffusion
        | model so it can generate art in new styles.
        
       | lurquer wrote:
        | One way to consider copyright: when the concept gained traction
        | in the 16th and 17th centuries, the doctrine existed to protect
        | the author's ability to get someone to finance the printing of
        | his work. That is, reproduction was costly and required an
        | investment by a printer. The printer faced more economic harm
        | than the author if he spent $20,000 typesetting and printing a
        | thousand copies of something only to find a competitor beat him
        | to the market. To be clear, the copyright attaches to the
        | author, but it
       | is only valuable because it enables the author to induce a
       | printer to publish his work.
       | 
       | Now, as the costs of publishing have dropped to zero, the concept
       | is beginning to make less sense.
        
       | [deleted]
        
       | [deleted]
        
       | s1artibartfast wrote:
       | The Supreme Court of the US is currently deciding if Andy
       | Warhol's orange prince[1] violates copyright.
       | 
       | It is essentially a cropped photo, defeatured, plus the color
       | orange.
       | 
       | This could be done easily without the help of AI at all, just
       | cropping and filtering.
       | 
       | The case is very interesting and grapples with the same
        | questions. I highly recommend listening to the oral
       | arguments.[2]
       | 
       | I doubt the Andy Warhol Foundation will prevail, but it raises
       | all these same questions, without the AI: What constitutes a
       | transformative use of prior work?
       | 
       | Can you imbue existing art with new ideas and make it your own?
       | 
       | https://en.m.wikipedia.org/wiki/Orange_Prince_(1984)
       | 
       | https://www.oyez.org/cases/2022/21-869
        
       | [deleted]
        
       | VBprogrammer wrote:
       | I don't really have a stance on the moral or ethical points but
       | some of the results in the illustrations included here are
       | amazing. If you mixed them up and asked me to identify those
        | which were originals and those which were AI generated, I would
        | fail miserably. That is amazing in my book.
        
       | Imnimo wrote:
       | I feel like there are three issues at play here:
       | 
       | -Using her name to describe/advertise the fine-tuned model.
       | 
       | -Using her illustrations to fine-tune the model.
       | 
       | -Using a larger body of potentially unlicensed images to train
       | the base model.
       | 
       | For the first, if we had decided that the other steps were fair
       | use or whatever, would it be better or worse if the fine-tuned
       | model had been made available with no mention of the identity of
       | the author of the training images? I'm not sure.
       | 
        | For the second, there is surely a limit after which this sort of
        | thing becomes unambiguously unacceptable. Suppose you fine-tune
        | so aggressively on a small dataset that eventually the model
        | simply reproduces exactly the training images. Now you're
        | obviously violating copyright. But where exactly is the line
        | before that? If I have a base model that was trained on fully
        | licensed images, and I make one single gradient descent step
        | using a copyrighted image (sketched in code at the end of this
        | comment), making imperceptible changes to the model's output,
        | surely the resulting images are not suddenly in violation. It
        | seems to me that the standard should be: if a human were to
        | draw the output by hand after looking at the training images,
        | would we consider it a violation of copyright? As a thought
        | experiment, imagine someone who lacks the ability to draw but
        | can instead hand-write the weights of a neural network to
        | produce the desired output - it shouldn't matter which process
        | they use.
       | 
       | For the third, what if I spent a long time prompt engineering on
       | a model trained entirely on properly licensed data and was able
       | to generate a prompt format that produced the outputs we see from
       | this fine-tuned model? In other words, for any generative model,
       | there is a space of reachable outputs, and it's not so clear that
       | these images did not already lie in that space before fine-
       | tuning.
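        | 
        | Concretely, here is that single gradient descent step, as a
        | minimal sketch for a denoising model. This is generic PyTorch
        | with a toy stand-in network - not DreamBooth's actual code, and
        | every name in it is hypothetical:
        | 
        |   import torch
        |   import torch.nn.functional as F
        | 
        |   # Toy stand-in for a pretrained denoiser: given a noisy
        |   # image, predict the noise that was added to it.
        |   model = torch.nn.Conv2d(3, 3, 3, padding=1)
        |   opt = torch.optim.SGD(model.parameters(), lr=1e-5)
        | 
        |   image = torch.rand(1, 3, 64, 64)  # one training image
        |   noise = torch.randn_like(image)
        |   alpha = 0.7                       # noise-schedule constant
        |   noisy = alpha**0.5 * image + (1 - alpha)**0.5 * noise
        | 
        |   # DDPM-style training objective: predict the added noise.
        |   loss = F.mse_loss(model(noisy), noise)
        |   loss.backward()
        |   opt.step()  # the single, near-imperceptible weight update
        | 
        | One such step barely moves the weights; many thousands of them
        | on a small dataset are what push a model toward reproducing its
        | training images.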
        
         | krinchan wrote:
         | I mean, they're literally out here training models on Disney's
         | copyrighted works. People seem to miss the point that the
         | copyright ambiguity can work both ways and the courts are
         | almost always on the side of corporate copyright holders.
         | 
          | I posit that, with a few good NSFW scandals so the legislation
          | can be dressed up as protecting the children, Disney will get
          | the legislative intervention it wants re: copyright and Stable
          | Diffusion.
          | 
          | I realize that seems pretty US-centric, but it's surprising the
          | ways Disney can reach internationally to protect its IP. It'll
         | be interesting to see if Stable Diffusion ends up relegated to
         | the parts of the Internet outside of the big three
         | "jurisdictions" so to speak: US, EU, and China.
        
         | lbotos wrote:
         | > But where exactly is line before that?
         | 
         | Unfortunately the line of "fair use" is very blurry and only
         | gets clarified in specific instances, with lawyers and humans
         | debating if there was damage done.
         | 
         | It's extremely frustrating and these advances are going to
         | pressure fair use for sure.
         | 
          | People will generate infringing work with AI and people will
          | generate newly copyrightable work with AI. The danger is, as an
          | artist, it's hard to trust whether the model has given you
          | something that is copyrighted, as _you_ have not necessarily
          | seen it before. (You the human can cite your references, and
          | it's possible you were subliminally influenced, but with SD,
          | generic prompts may give you "copyrighted details" that you
          | don't know about.)
        
           | Imnimo wrote:
           | Yeah, definitely. It's more starkly clear with Copilot, where
           | you can point to verbatim reproduction of code. And it
           | probably helps Stable Diffusion that images are fuzzier, so
           | you're less likely to generate something that's close enough
           | to be a copyright violation, but there's really no way to be
           | certain that your output is not a memorized training example,
           | even if it's statistically very unlikely.
        
         | theptip wrote:
         | I think this is the right way of chopping the problem space. In
         | this case, the artist expressed a preference to not have her
         | name used, which seems reasonable, and the author renamed the
         | repo to accommodate that.
         | 
         | One could easily imagine the opposite scenario, where the
         | artist objects to not being credited.
         | 
          | We are at an early stage where the indexes into this style
          | space are quite weird and ad hoc; look at all the prompt
          | engineering black magic. It's worth noting, though, that this
          | problem only showed up because there weren't many examples of
          | this style in the data set; when every frame of every Disney
          | animation is in there, it won't need you to refer to one
          | artist's name.
         | 
         | I do wonder if a PR and UI improvement would be to layer a
         | language model on top to build the prompt, or perhaps even bake
         | this into the model itself, so you can use more generic (and
         | possibly iterative) style descriptions instead of having to
         | refer to an artist by name. Basically use AI to solve the
         | prompt engineering problem too.
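          | 
          | As a toy sketch of that last idea (every name here is
          | hypothetical, and a real system would put a language model
          | behind the style description), the point is just that the
          | index into style space could be generic attributes rather
          | than a person:
          | 
          |   # Build a prompt from generic style attributes instead of
          |   # an artist's name. A real system might have a language
          |   # model produce this dict from example images.
          |   def build_prompt(subject: str, style: dict) -> str:
          |       attrs = ", ".join(f"{k} {v}" for k, v in style.items())
          |       return f"{subject}, {attrs}"
          | 
          |   style = {
          |       "medium": "digital illustration",
          |       "linework": "clean and rounded",
          |       "palette": "muted pastels",
          |       "mood": "storybook",
          |   }
          |   print(build_prompt("a girl walking a corgi", style))
          |   # a girl walking a corgi, medium digital illustration, ...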
        
         | doodlebugging wrote:
         | > In other words, for any generative model, there is a space of
         | reachable outputs, and it's not so clear that these images did
         | not already lie in that space before fine-tuning.
         | 
         | I'm trying to understand what you are saying here. It sounds
         | like you are saying that something that doesn't yet exist must
         | already exist simply because a model has been created which can
         | yield this non-existent thing as an output?
         | 
         | Your example uses properly licensed data as an input but seems
         | to imply that creating a model that is capable of producing a
         | particular output means that the output in effect already
         | exists as a prior work before it has ever been created by
         | virtue of the model's ability to create it on command.
         | 
         | I'm probably over-thinking all this.
        
           | Imnimo wrote:
           | Well, at some point it's like the monkeys writing
            | Shakespeare. Instead of a complicated neural network, we
           | could have written a program that just outputs random pixel
           | values. We'd definitely be able to find all of the outputs of
           | Stable Diffusion, as well as all of the works of the original
           | artists coming out of that program. It'd just take us a lot
           | of waiting and watching.
           | 
           | I don't know that that's enough to constitute "prior art" in
           | any meaningful sense, though.
           | 
           | The way I look at it is that the function of Stable Diffusion
           | or any other diffusion model is to pare down the output space
           | of the "random pixel machine". It learns which regions of
           | image space are likely and which are unlikely, and so when
           | you sample an image, you tend to get ones that people like
           | rather than random noise.
           | 
           | You could imagine an idealized diffusion process whose output
           | space is truly continuous and which assigns non-zero
           | probability to the entire image space (all possible pixel
           | arrangements), with higher probability assigned to "good"
           | regions, and lower probability assigned to "bad" regions. If
           | I sample from such a model repeatedly, I will eventually (in
           | the mathematical sense of probability 1 in the limit) get out
           | an image that looks exactly like a Hollie Mengert
           | illustration, even if the model has never seen one during
           | training. I'll even eventually get out an image that looks
           | exactly like an illustration that an unborn artist will
           | create in the year 2047.
           | 
           | Now, in practice, it's a little less clear. Are there regions
           | of the image space that Stable Diffusion assigns exactly zero
           | probability, such that no amount of sampling will ever
           | generate that image? Are there real, "good" images that
           | Stable Diffusion assigns very low probability such that
           | generating them is no better than sampling from the "random
           | pixel machine"?
        
             | doodlebugging wrote:
             | >you tend to get ones that people like rather than random
             | noise.
             | 
              | And you have to consider that your idea of and tastes in
              | "art" may be quite different from mine. People actually buy
              | and display canvases with random bullshit colors or simple
              | shapes or even lines on them that, to me, are not art in
              | the pure sense but are instead just the output of a lazy
              | person, marketed to dweebs who need something different
              | for their living rooms.
             | 
              | Back to the original discussion - if your earlier point is
              | valid (I don't agree myself), then the only one who owns
              | any art, product, process, etc. is the one who designed
              | the model that can be used to create it, even if the model
              | was created years after the thing itself. None of the
              | people who actually created the product or process, etc.
              | own anything as a result of their efforts. That is wrong.
             | 
              | It is a bit hand-wavey and bullshit-mystic, like the
              | dumbass marketing used by some sculptors who claim they
              | didn't really do anything except remove all the wood,
              | stone, clay, etc. that was hiding the sculpture they
              | created. It was always right there, carefully concealed in
              | the tree trunk, and under that logic anyone who dinked
              | around with that trunk would've ended up creating the same
              | sculpture. LOL. Jesus put it there, I'm only the guy
              | picked to uncover it.
             | 
             | >Are there real, "good" images that Stable Diffusion
             | assigns very low probability such that generating them is
             | no better than sampling from the "random pixel machine"?
             | 
             | That depends on your definition of "good". Do you mean
             | "good enough" that an observer will see a resemblance or is
             | it something else? I would think that the output from any
             | model will always have an upper limit to what it can
             | reproduce that is related to the properties of the combined
             | inputs to the model. With more inputs one should see higher
             | precision outputs. Like you say though, Stable Diffusion
             | probably designates some outputs as near zero probability
             | because the input set supports that conclusion for the
             | requested output.
        
               | Imnimo wrote:
               | >I would think that the output from any model will always
               | have an upper limit to what it can reproduce that is
               | related to the properties of the combined inputs to the
               | model.
               | 
               | I look at it from the other direction. A diffusion model,
               | at its heart, is a function that takes an image
                | (initially random noise), and produces a slightly "better"
               | image, according to its learned concept of "better". You
               | turn this crank over and over and out comes a good image.
               | But the simplest diffusion model of all is the identity
               | function. You feed it random pixels and it outputs them
               | unchanged. That model - the "random pixel machine" - can
               | trivially output any possible image. The training process
               | is paring down its output space to produce more outputs
               | that are like the training images, and fewer outputs that
               | are unlike them.
               | 
               | So it's not the "upper limit" that training is
               | addressing. Creating a model with an upper limit that
               | includes all the great works of art humanity will ever
               | create is trivial. It's the "lower limit" that's the
               | problem - if those outputs are lost in a sea of
               | uninteresting noise, you'll never find them. Training is
               | the process of raising that lower limit.
        
               | doodlebugging wrote:
               | >according to its learned concept of "better".
               | 
               | I'm gonna assume that all the input images fed to the
                | process form its "learned concept of better" (LCoB) so
               | that in effect there is nothing random about the outputs.
               | Indeed, each successive output becomes an optimization of
               | the model fit to the "LCoB". Following that, it may start
               | with an initial output that strongly resembles random
               | noise but inside that first output will be a non-random
               | component that initially fits some part of the LCoB model
               | that it is trying to achieve.
               | 
               | Also following that, the lower limit of the process is
               | the output of the first step in the process since each
               | successive iteration is an optimization towards a model.
               | You already have the model noise floor when you run the
               | first step. Struggling to take that lower is not smart
               | and could be accomplished simply by excluding some part
                | of the collection of images used to form its LCoB.
               | 
               | Does that make sense?
               | 
                | There is nothing random about any of the outputs of this
                | if it is model-driven. Stable Diffusion output is model
                | driven; therefore it is not random at any stage.
               | 
               | In geophysical processing we have to carefully monitor
               | outputs from each process to make sure that processing
               | artifacts from mathematical operations can not create
               | remnant waveforms that can be mistaken for geological
               | data that could be used as a basis for drilling an
               | expensive well. Data-creata is a real thing. Models are
               | used.
               | 
               | Thanks for the discussion.
        
               | Imnimo wrote:
                | So the simplified view of a diffusion model is like the
                | following (this leaves out the role of the prompt; a code
                | sketch follows at the end of this comment):
               | 
               | -Sample random noise.
               | 
               | -Ask the neural network "Here is a noisy image. What do
               | you think it looked like before I added all this noise?"
               | (note that you did not form this image by adding random
               | noise to an initial image)
               | 
               | -Adjust the pixels of your random noise towards what the
               | network said.
               | 
               | -Repeat until there is no noise left.
               | 
               | During training, we take a training image and add noise
               | to it. This way we know the "correct" (in scare quotes
               | because there are many possible clean images that lead to
               | the same noisy image given different realizations of the
               | noise) answer to the question in step 2. This is used to
               | update the weights of the neural network.
               | 
               | Ultimately, a diffusion model is just a denoiser. A
               | denoiser implicitly represents an underlying distribution
               | of clean data. The diffusion process used to sample from
               | the diffusion model is a clever way of drawing samples
               | from that underlying distribution given access only to
               | the denoiser.
               | 
               | At sampling time, we have no training image that we add
               | noise to. We just sample random noise out of thin air.
               | This works because in the limit of large amounts of
               | noise, the distribution of "initial image plus lots of
               | noise" and "just lots of noise" are the same. You can
               | certainly draw an analogy between this initial random
               | noise and the uncarved block of marble that the sculptor
               | says "contains" a sculpture waiting to be uncovered.
               | Given the same noise, the neural network is deterministic
               | - it will always produce the same output.
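               | 
               | (Concretely, as a hedged example using the Python
               | "diffusers" library: fixing the seed fixes the
               | image. The model id and prompt below are just
               | illustrative; assumes a GPU.)
               | 
               |     import torch
               |     from diffusers import StableDiffusionPipeline
               | 
               |     pipe = StableDiffusionPipeline.from_pretrained(
               |         "runwayml/stable-diffusion-v1-5").to("cuda")
               | 
               |     def render(seed):
               |         g = torch.Generator("cuda").manual_seed(seed)
               |         return pipe("a lighthouse at dusk",
               |                     generator=g).images[0]
               | 
               |     # Same seed -> same noise -> (essentially) the
               |     # same image, up to GPU nondeterminism.
               |     a, b = render(1234), render(1234)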
               | 
               | You could even imagine an oracle who can unwind the
               | process of the neural network being cranked and tell us
               | exactly what initial noise sample would produce any
               | desired output. Just like an artist might just draw the
               | image that they want rather than waiting for the random
               | pixel machine to output it, the oracle could simply set
               | the precise values of Stable Diffusion's noise input to
               | produce a Hollie Mengert work, rather than sampling
               | repeatedly until it found one.
        
               | doodlebugging wrote:
               | I think that we just described the same process.
               | 
               | >-Sample random noise.
               | 
               | >-Ask ... ...what the network said.
               | 
               | >-Repeat until there is no noise left.
               | 
               | Initially we have noise (random or not doesn't
               | matter) and we are trying to find inside that noise
               | a match to an image that we already have: our model,
               | or training image. That image may or may not contain
               | some residual noise of its own. As you describe in
               | your iterative steps, noise is added to it,
               | decreasing the signal-to-noise ratio; the neural
               | network compares the output of its initial or
               | updated image to the "correct answer" image and
               | weights the next iteration accordingly, so that a
               | better match can be obtained. The function itself is
               | an optimization process designed to minimize the
               | residual noise (a de-noiser, like you say) between
               | the most recent image and our target image.
               | 
               | When you say that you begin with a sample of random
               | noise created by your array-populating function, and
               | that you have no training image to which you are
               | adding noise, that fits the whole process. But it
               | ignores that you are using the random-noise image to
               | iteratively produce an output that fits a model
               | image, one created by compiling statistics from
               | multiple images into a target that is assumed to be
               | near noiseless, or to have a very high signal-to-
               | noise ratio.
               | 
               | If you are trying to use this to "find" Alfred E.
               | Neuman in your output, the process needs to know
               | what Alfred E. Neuman looks like so it can
               | effectively optimize toward that result. Each
               | iteration denoises based on the known output in the
               | model created from the images it had to ingest in
               | order to build the model. If you only have a few
               | images of Alfred E. Neuman but thousands of Homer
               | Simpson in your model's input dataset, then you will
               | have to fight the tendency of the process to
               | converge on Homer Simpson. No matter: you always
               | have a priori information that can be used to verify
               | the integrity of the output. Whether the input is
               | random or not is irrelevant, since you are looking
               | at an optimization process that matches, denoises,
               | and weights iteratively until it minimizes an error
               | function and the result can be called an optimum, or
               | good, match.
               | 
               | This is not particularly new or novel. It is a
               | typical iterative modeling exercise like those that
               | have been used for decades, except that now you have
               | the compute power to build a near noise-free target
               | model that fits the known data from every source at
               | your disposal.
               | 
               | The user who created the Hollie Mengert styled
               | outputs could not have hit that target without using
               | a model designed to create or mimic that type of
               | output. That is why he chose to use her work in his
               | process: he liked it. When he found out that she was
               | not pleased about not being consulted, and that she
               | didn't hold the rights to some of those images, I
               | think he had a come-to-Jesus moment that ultimately
               | led him to rename it so he could feel better about
               | it. Guilt-tripped him.
               | 
               | Anyway, ethics should be a required part of every
               | computer science curriculum, especially when private
               | personal information is involved.
               | 
               | I'm in the oil and gas industry. It sucks sometimes.
               | Fortunately there has been a push to include or require
               | ethics training. Maybe one day it will clean up that
               | industry. I'm only holding my breath though when I pass a
               | refinery.
        
               | Imnimo wrote:
               | >The user who created the Hollie Mengert styled outputs
               | could not have hit that target without using a model that
               | was designed to create or mimic that type of output.
               | 
               | This is the part that I'm not so sure about. The value of
               | fine-tuning on Hollie Mengert's work is not so much that
               | it enables the model to create that type of output, it's
               | that it makes it far less likely to create other types of
               | outputs. It narrows down the haystack, but it doesn't
               | create the needle.
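               | 
               | (A toy illustration of that narrowing, hedged: real
               | fine-tuning such as DreamBooth runs the same
               | denoising objective on the small dataset; here the
               | "model" is just a mean image, to show the data
               | narrowing rather than the technique itself.)
               | 
               |     import numpy as np
               | 
               |     rng = np.random.default_rng(0)
               |     base_model = rng.standard_normal((8, 8))
               |     # ~30 images, clustered in one "style":
               |     style_set = [rng.standard_normal((8, 8)) + 3.0
               |                  for _ in range(30)]
               | 
               |     model = base_model.copy()
               |     for img in style_set:
               |         # Gradient step on squared error toward
               |         # the small dataset; same objective.
               |         model += 0.05 * (img - model)
               |     # Images near the style were always reachable;
               |     # fine-tuning just makes them the likely ones.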
               | 
               | Similarly, if I set out to find Alfred E. Neuman, but my
               | training data has no images from MAD magazine due to
               | licensing concerns, will it be possible? It may not be
               | possible to use the prompt "Alfred E. Neuman", but maybe
               | it's possible to use the prompt "A cartoon drawing of a
               | grinning red-headed boy with a gap in his front teeth".
               | Images recognizable as Alfred are likely still in the
               | model's output space, even if they are not so easily
               | found. They are certainly in the output space of the
               | "random pixel machine". It's just a question of how hard
               | they are to find.
        
         | s1artibartfast wrote:
         | Relevant to #2 https://news.ycombinator.com/item?id=33424817
        
       | randyrand wrote:
       | A great artist can take these same prompts, look at the same
       | input images, and produce the same or even better results.
       | 
       | No one would call that stealing. Copying, yes; stealing, no.
       | A great artist can copy other artists' styles expertly. A
       | large part of being an art student is learning how to do
       | exactly this.
       | 
       | Stable Diffusion is just a truly great artist.
        
       | r_murphey wrote:
       | I have been a member of a local musicians union for more
       | decades than I care to admit. If someone were offering to
       | replace me with an AI after a career of dedication to the
       | art, I would worry about a real risk of devastating loss of
       | income if there were no financial cushion. Even if I had not
       | lost any gigs yet, I would call a couple of the local union
       | board members to discuss it, and I believe this is a concrete
       | situation the local board would take up for discussion.
       | 
       | I'm sure there are good things to come out of AI in the arts,
       | especially if it becomes a tool for the artist. But offering
       | to put a financially struggling artist out of work with low
       | effort, even temporarily, is a nightmare for the artist.
       | Guilds and unions [0] [1] have talented artists on their
       | boards of directors and lawyers on staff who can help codify
       | some of the issues into the standard contract used by member
       | artists. I have seen bands fired with no notice when they had
       | used a standard union contract. The contract, and access to
       | the union's attorney, was the only thing that protected them
       | from lost work and from not making rent that month.
       | 
       | [0] https://graphicartistsguild.org/ [1]
       | https://www.usa829.org/About-Our-Union/Categories-Crafts#Sce...
        
       | irrational wrote:
       | Why would I want to spend years developing my own art style if I
       | know that it can be copied and used by one of these AI engines in
       | a few hours? Are we going to end up with less human created
       | artwork?
        
       | [deleted]
        
       | bergenty wrote:
       | So? It's not a copy of her work, just her style. I see no
       | problems here.
        
         | PuddleCheese wrote:
         | 1. I like this style of work. I cannot make this style of work.
         | I will pay this artist to produce this style of work.
         | 
         | 2. I like this style of work. I cannot make this style of work.
         | I will not pay the artist. I will instead take copyrighted
         | material, and process it into a result using a machine that is
         | not able to ethically refuse.
         | 
         | 2.1. I like this style of work. I cannot make this style of
         | work. I will not pay the artist. I will use this model someone
         | else made to produce this style of work.
        
         | randyrand wrote:
         | Calling it "her" style is part of the problem. "The style
         | she uses" would be more appropriate.
         | 
         | Of course, that takes longer to say, so it will never
         | happen.
        
       | Yizahi wrote:
       | I foresee almost all graphic artists very quickly shutting
       | themselves away in closed and heavily moderated pay-per-
       | view/use communities. Or they will simply starve. This so-
       | called A"I", producing a ton of derivative art, will mess up
       | a lot of industries.
        
         | ronsor wrote:
         | If they stay in a closed pay-per-view environment, I have
         | a feeling most of them will definitely starve.
         | 
         | Technology has messed up industries before. Everyone's not
         | going to die.
        
           | TMWNN wrote:
           | I believe Yizahi is talking about pornographic furry
           | art, which is (from what I understand) a _very_
           | lucrative market for artists (regardless of their
           | personal disgust with the subject).
        
           | Yizahi wrote:
           | We are observing a live example in the OP right now. If
           | you are an artist working in some specific style, then
           | even a small dataset (just 30 images in the OP) will
           | render your future works much less desirable. Of course
           | big companies have compliance rules, reputations and so
           | on, so they won't use "gray" art in production. But
           | there are definitely not enough bigcorp contracts for
           | every artist, so smaller, less known ones would be
           | competing basically with themselves. Imagine someone
           | choosing between paying X thousand dollars for original
           | art, or generating art 80% as good in quality for
           | "free".
           | 
           | PS: after posting my comment above I've realised that my
           | idea won't work. Art will become public sooner or later:
           | just scan the physical medium, or rip the DRM off the
           | digital one, and you have the dataset for an NN
           | generator. Well, it will be a huge mess. We will see how
           | it turns out.
        
         | PuddleCheese wrote:
         | I mean, the alternative is arguably that artists become
         | indentured servants to scraping/training models that benefit
         | the people who are doing the scraping/training, at the arguable
         | expense of the artist.
        
       | jarrell_mark wrote:
       | Artist using Disney trademarks without their permission has style
       | taken without their permission
        
       | spicyusername wrote:
       | It really does feel like the headline for 1970 - 2070 is
       | going to be "Technology desperately tries, but fails, to make
       | the world a better place".
       | 
       | What technology has _really_ improved what it is like to be
       | a human being, around other human beings, and to lead a
       | fulfilling life?
       | 
       | Every "disrupted" industry feels like it has just been
       | replaced by a less human, more unequal, and more dystopian
       | version of itself over the past 50 years.
       | 
       | Those who can survive tech jobs seem to have turned out to
       | be the people least equipped to properly navigate us towards
       | utopia.
        
         | JohnJamesRambo wrote:
         | My Roomba is one of the few pieces of technology I own with no
         | downsides to my life. I often think of it as an example of
         | technology done right.
        
           | wlesieutre wrote:
           | There have been lots of great technological advancements
           | in household stuff. Cordless tools come to mind as
           | another.
           | 
           | I think it's when you put a screen on anything that the
           | downhill slide starts.
        
           | ROTMetro wrote:
           | https://www.vice.com/en/article/y3pp8y/amazon-buys-roomba-
           | co...
        
             | JohnJamesRambo wrote:
             | Lol, well I've got the old dumb one that you just push a
             | button on to clean. His boundaries are set by my little
             | fence pod things that run on D batteries which I have to
             | push the button on too.
        
         | Kiro wrote:
         | What is a fulfilling life? I would say most technology I'm
         | using has given me a more fulfilling life.
        
         | lelandfe wrote:
         | Group video chat. Allowed my family, spread across the world,
         | to all say goodbye to my ailing grandfather before he passed.
         | 
         | And, on a happier note, allows me to stay in touch with dear
         | friends on the other side of the globe. Emails and co. just are
         | not the same.
        
         | friend_and_foe wrote:
         | This is just negativity bias. All the good stuff is forgotten
         | because you don't have to think about it. Go to 1969 and you'll
         | miss a ton of stuff.
         | 
         | I'd say the fact that I can get any book I want and be reading
         | it within 5 minutes without even having to get out of bed is
         | amazing. That tops it for me.
        
       | btilly wrote:
       | It is worth asking, what legal rights of the author's may have
       | been violated here?
       | 
       | Glancing at
       | https://cyber.harvard.edu/property/library/moralprimer.html the
       | following ending sentence seems particularly applicable:
       | 
       |  _If a person uses the identity of an author, or the works
       | of the author, for her own benefit without the author's
       | permission, then she may have violated the author's right of
       | publicity or may be guilty of misappropriation of the
       | author's work._
       | 
       | And the previous line is nearly applicable:
       | 
       |  _If authorship of a work is attributed to an author against her
       | will, or misattributed, the author may have a state action for
       | defamation against the person responsible for the attribution._
       | 
       | Of course "moral rights" are particularly weak in the USA. I'm
       | sure that there would be a much better case in the EU.
       | 
       | Of course this gets directly to the question of what happens
       | when laws conflict with technology. People in technology
       | generally think that technology should win. People who
       | benefit from the laws think that the laws should win. Both
       | popular opinion and real-world results generally wind up
       | somewhere in between.
        
         | polotics wrote:
         | The legal right that's been violated is copyright: copies
         | of her work were taken and fed into the ML model. This was
         | done without the picture owners' approval. Moral rights
         | may be weak in the USA, but copyright ain't. Most likely,
         | making the owners whole is going to be an expensive
         | proposition at some point.
        
           | btilly wrote:
           | Copyright law is the right to control a set of specific
           | kinds of actions. A complete list of which ones is at
           | https://www.copyright.gov/what-is-copyright/. "Inputting
           | the work to a computer program" is not in the list of
           | actions. Whether or not the result of running that
           | program may violate copyright is a question of fact
           | based on what the program does.
           | 
           | As for what it does, my non-lawyerly opinion is that it is no
           | different than a human artist looking at paintings then
           | imitating the style. Which is very much legal.
           | 
           | That said, a case can be made for derivative works. I don't
           | think it is a very good case, but a case can be made for it.
        
             | [deleted]
        
           | i_like_apis wrote:
           | Training on copyrighted material is legal, and rightly so.
        
             | polotics wrote:
             | Are you sure? Has this been tested in court? As you write:
             | "rightly so", may I ask if you are a judge, lawyer? Has
             | this decision of yours been appealed, gone to a higher
             | court? Does the word "training" as applied to a computing
             | system not simply mean "copying"? Who decides this?
        
               | i_like_apis wrote:
               | Copyright is still about outputs instead of inputs,
               | just like it is with human learners.
               | 
               | Training isn't copying. It's an input. It's akin to
               | "seeing" or "reading".
               | 
               | Any judge needs to think in terms of outputs. If a
               | system outputs something that violates copyright,
               | there's a problem.
               | 
               | But attempting to regulate inputs can't work. For
               | instance, it will be impractical or impossible once
               | we have agents moving around in the real or virtual
               | world: how are they supposed to know when to turn
               | their sensors off so they don't see copyrighted
               | material?
        
         | praptak wrote:
         | Things algorithmically generated from a copyrighted work
         | constitute derived work.
         | 
         | Obviously if the thing is as complex as an AI model it might be
         | hard to prove that a copyrighted work was among the inputs.
        
           | lowbloodsugar wrote:
           | This is certainly the legal question of the hour, and it may
           | allow courts to sidestep the equivalence problem to apply
           | different rules to humans than to AIs. History is replete
           | with courts deciding that the rights of some minority don't
           | count because they are deemed different (or less) in some
           | way. Might as well get started with AIs right away. Might as
           | well make sure that when AI is eventually demonstrably
           | sentient there are already tons of established laws and
           | billions of dollars that says they are not.
           | 
           | It seems to me that you cannot take someone's art and plug it
           | into an empty model: we have to start with a model that has
           | been trained on a huge corpus of art, and only then can you
           | show it some pictures and ask it to imitate. This is no
           | different than a human. But my opinion does not matter at
           | all. All it requires is that a jury says "This is an
           | algorithm, therefore it is derivative."
        
           | cthalupa wrote:
           | >Things algorithmically generated from a copyrighted work
           | constitute derived work.
           | 
           | This is a very strong statement that is not, at least to me,
           | obviously true. I am curious what your argument is - learning
           | from a copyrighted work does not automatically make a work
           | derivative.
           | 
           | These models do not store copies of images they learned from,
           | or attempt to replicate these images. They learn about
           | constituent parts and assemble them based on the prompt,
           | which is not conceptually all that different from how humans
           | do the same thing.
           | 
           | There are obviously moral questions and legal questions
           | around AI art, and I expect that we will see more, but I'm
           | not sure that this statement is accurate.
        
           | benlivengood wrote:
           | It's a derived work but also transformative and thus almost
           | certainly fair use.
        
         | i_like_apis wrote:
         | So, if we remove her name and call this style #27b-6, there's
         | no issue.
         | 
         | She does not own the style. She owns her name and can get fussy
         | about how it's used, but you can't stop someone from learning
         | your style.
        
           | WaxProlix wrote:
           | The relevant part is probably
           | 
           | > ...works of the author, for her own benefit without the
           | author's permission
           | 
           | which did happen, in the training and fine-tuning process.
        
             | i_like_apis wrote:
             | Training on copyrighted material is legal. As it should be,
             | IMO.
        
               | jfk13 wrote:
               | Is that established law (in what jurisdiction?), or just
               | your opinion?
        
               | i_like_apis wrote:
               | In the EU it's established law. In the US it's
               | basically true, but it will be playing out in the
               | courts a little in the future.
        
               | btilly wrote:
               | I am pretty sure that it is not established law, but I am
               | pretty sure that that is how it will work out. US
               | provisions for fair use make training models likely OK,
               | and the EU is carving out exemptions for it. See
               | https://valohai.com/blog/copyright-laws-and-machine-
               | learning... for more.
               | 
               | The question of whether the output of the model itself
               | counts as a derivative work, though, is rather more
               | complex. In the case of Github Copilot it has proven very
               | adept at spitting out large chunks of clearly copyrighted
               | code with no warning that it has done so. And lawsuits
               | are being filed over this.
               | 
               | But in the case of the visual artwork, I'm pretty sure
               | that it is going to be ruled not derivative. Because
               | while it is imitative, you cannot produce anything that
               | anyone can say is a copy of X.
               | 
               | But as ML continues to advance, we'll get cases that
               | are ever closer to the arbitrary line we are trying
               | to maintain about what is and is not a copyright
               | violation. And I'm sure that any criteria the courts
               | try to put down are not going to age well.
        
               | zuminator wrote:
               | In the music industry, even the tiniest sound sample
               | used in a work entitles the original creator to
               | compensation. It might come to pass that using any
               | portion of an author's work in your training data
               | will confer certain rights to the author over the
               | AI-generated product. You'll have to perpetually
               | keep records of your training data for commercially
               | available work, lest you be sued. A whole
               | bureaucracy will evolve: Getty-curated training
               | data, public-domain training data sets. Basically
               | the same sorts of issues that we've had over the
               | past 30 years, except replacing "internet" with
               | "AI."
               | 
               | And if the past is any guide, the forces of capital
               | will prevail commercially, while hobbyists and kids
               | on social media, after aborted attempts to rein them
               | in with lawsuits, will be mostly ignored by rights
               | holders.
        
               | LastTrain wrote:
               | We don't know that yet.
        
         | simonw wrote:
         | As always with generative AI, the legality is far less
         | interesting than the morality.
        
           | btilly wrote:
           | The problem with discussing morality is that we each
           | bring our own moral systems to bear, then use
           | emotionally charged words like "right" and "wrong".
           | This leads to people at first disagreeing, and then
           | arguing past each other. Doubly so because you feel
           | like you have made a real point when you agree with
           | yourself, while the other person feels like you
           | haven't. And vice versa.
           | 
           | As a result I only want to discuss morality with people
           | who either largely agree with me, or who are able to
           | take a step back from their own moral system to discuss
           | what someone with a different moral system might think.
           | 
           | By contrast, laws and human behavior are external and
           | hence easier for people to agree on. Yes, they are
           | dissatisfying, because none of us entirely agree with
           | the law, and laws are deliberately vague about certain
           | things. But I find that discussions of them tend to
           | actually work out better.
        
       | bugfix-66 wrote:
       | Systems like Copilot and Dall-E and so on turn their training
       | data into anonymous common property. Your work becomes my work.
       | 
       | This may appeal to naive people (students, hippies, etc.), for
       | whom socialist/communist ideas are attractive, but it's poison in
       | the real world because it eliminates the reward system that
       | motivates most creative work. People work hard for credit or
       | respect, if they're not working for money.
       | 
       | Ask yourself, why does the MIT License
       | (https://opensource.org/licenses/MIT) contain the following
       | text?
       | 
       |     Copyright <YEAR> <COPYRIGHT HOLDER>
       | 
       |     The above copyright notice and this permission notice
       |     shall be included in all copies or substantial portions
       |     of the Software.
       | 
       | These systems are a mechanism that can regurgitate (digest,
       | remix, emit) without attribution all of the world's open code and
       | all of the world's art.
       | 
       | With these systems, you're giving everyone the ability to
       | plagiarize everything, effortlessly and unknowingly. No skill, no
       | effort, no time required. No awareness of the sources of the
       | derivative work.
       | 
       | My work is now your work. Everyone and his 10-year old brother
       | can "write" my code (and derivatives), without ever knowing I
       | wrote it, without ever knowing I existed. Everyone can use my
       | hard work, regurgitated anonymously, stripped of all credit,
       | stripped of all attribution, stripped of all identity and
       | ancestry and citation.
       | 
       | It's a new kind of use not known (or imagined?) when the
       | copyright laws were written.
       | 
       | Training must be opt in, not opt out.
       | 
       | Every artist, every creative individual, must EXPLICITLY OPT IN
       | to having their hard work regurgitated anonymously by Copilot or
       | Dall-E or whatever.
       | 
       | If you want to donate your code or your painting or your music so
       | it can easily be "written" or "painted", in whole or in part, by
       | everyone else, without attribution, then go ahead and opt in.
       | Most people aren't so totally selfless.
       | 
       | But if an author or artist does not EXPLICITLY OPT IN, you can't
       | use their creative work to train these systems.
       | 
       | All these code/art washing systems, that absorb and mix and
       | regurgitate the hard work of creative people must be strictly opt
       | in.
       | 
       | I say this as a person who writes deep-learning parallel linear
       | algebra kernels professionally.
       | 
       | We've crossed a line here.
        
         | polotics wrote:
         | I disagree that creative individuals have to do anything
         | explicit here: copyright law is pretty clear that the
         | burden of proof of right is on the copier, not the copied.
         | I expect most artists won't be sending invoices for
         | licensing fees just yet, but corps will surely bleed dry
         | anyone that produces unlicensed derivative works that
         | generate any income.
        
         | [deleted]
        
         | gpderetta wrote:
         | Exactly, any coder and artist should learn from scratch,
         | without absolutely any exposure to existing code, works of
         | art, or even ideas. Anything else is outright STEALING!
         | 
         | Excuse me while I make an apple pie from scratch.
        
           | qull wrote:
           | That's a reductio ad absurdum at best. While you have a
           | point, schools and even museums are generally
           | compensated for providing these training models to the
           | public, to look at it in an ML way.
        
             | gpderetta wrote:
             | Sure, I also had to pay for the books I studied from, but
             | Dr. Tanenbaum is yet to knock at my door to assert
             | copyright on all the code I have written.
        
           | CharlesW wrote:
           | This is a perfect example because, depending on the apples
           | you're using, growing them may have required a license and
           | adherence to licensing requirements.
           | 
           | https://mnhardy.umn.edu/apples/licensing
           | https://provarmanagement.com/cosmic-crisp/
        
           | polotics wrote:
           | Have you maybe possibly previously been exposed to the
           | concept of an argumentative straw man? Feeding actual
           | works of art into an approximation machine, and then
           | expecting the output of said machine not to be owned by
           | the author of the art, is making a big assumption, I
           | think. There is the word "copy" in "copyright", and the
           | model definitely got a copy of the original at the
           | source. No matter the dilution, copyright is being
           | breached, as I understand it.
        
           | ysavir wrote:
           | Out of curiosity, how would you feel if someone fed your HN
           | comment history into a ML model, then used that to respond on
           | every HN topic and conversation under the username
           | "othergpderetta"?
        
             | kleer001 wrote:
             | Interesting argument, but it'd be outputting fluent-
             | sounding word salad garbage.
        
               | gpderetta wrote:
               | So exactly like my comments!!
        
             | gpderetta wrote:
             | My HN history is public, so I wouldn't have a problem
             | with training a model on it. I would have a problem
             | with the model attempting to pass itself off as me,
             | of course.
        
           | JohnJamesRambo wrote:
           | >Excuse me when I make an apple pie from scratch.
           | 
           | You'll have to invent the universe first.
        
             | gpderetta wrote:
             | I expect a big bang anytime now!
        
         | kerblang wrote:
         | As the artist in the article points out, the artwork in the
         | model doesn't belong to her and by current legal standards she
         | has no authority to give permission; of course the corporate
         | owners do have authority, and I'm not even sure you need new
         | laws to enforce the copyright complaint.
         | 
         | I was complaining about all of this when the derivation was
         | based on "the internet" and everyone was being ripped off at
         | once. All the AI-generated art out there is doing the same
         | thing.
         | 
         | Of course most of this is being used to create derivations of
         | trendy pop art, so are we really losing anything? Was there
         | ever any hope for artistic capitalism as something that
         | communicates in meaningful ways beyond the most local of scale?
        
         | vikingerik wrote:
         | Serious question: What is the difference between a human
         | intelligence looking at a work and using concepts from it in
         | their own, compared to an artificial intelligence doing it?
         | 
         | If the copyright violation occurred by the AI's inputs
         | looking at the work... how is that different than an image
         | of the work landing on a human's retinas?
        
         | p0pcult wrote:
         | NFTs to the rescue.
        
         | dzink wrote:
         | For generative art, trademarking your name might help prevent
         | people from using it in prompts, but for general copyright,
         | where does the line stand between someone casually publishing
         | every color in the rainbow, every note combination, every
         | letter in the alphabet, and claiming anyone else is infringing
         | on their copyright?
         | 
         | If someone copies your thesis, abstract, or poem word for
         | word, that is a clear violation of your IP, but we are all
         | remixing words that everyone uses, colors, brush strokes,
         | API terms, programming language keywords, and notes.
         | Copyright law has the fair use doctrine, and
         | transformative use is explicitly allowed in order to
         | permit iteration. There is some level of granularity that
         | is essential to creativity - otherwise one entity could
         | copyright all possible combinations and prevent any
         | creativity from happening legally. If AI goes below that
         | threshold, all of humanity has a chance to iterate far
         | faster, find new spaces, and fill new needs for everyone.
         | Humans have been able to draw in the style of Picasso or
         | Monet for over a century. A program doing it is not
         | infringement, just much faster iteration.
        
           | ROTMetro wrote:
           | That is a fake argument. It has been proven wrong on every
           | one of these articles, yet pro-stealing people like yourself
           | keep posting it. You can't copyright such works, only an
           | 'installation' of those works. If you want to talk about
           | copyright, maybe educate yourself on copyright. It's Title
           | 17. https://www.copyright.gov/title17/
        
             | dzink wrote:
             | Which argument? Which articles? Be specific. AI
             | content is not copyrightable at this stage. Drawing
             | styles are not copyrightable. Name-calling and
             | labeling are ad-hominem attacks, however, and have no
             | place on HN.
        
               | ROTMetro wrote:
               | People claiming you can copyright a canvas that is
               | just a color (when you can't; you can only
               | copyright the art installation/display). People
               | claiming that you can copyright the alphabet (when
               | you can't). It's just frustrating that HN wants to
               | keep having this discussion but with ZERO basis in
               | actual copyright law, with people making factually
               | inaccurate claims as if they understand it. I had
               | the same issue on a criminal law post. Someone
               | posted completely factually inaccurate information
               | regarding Title 18 statutes, and HN blocked me from
               | responding in a timely manner while people were
               | reading the post. This just isn't a forum for
               | informed discussion, I guess, but for people's gut
               | feelings and what they THINK copyright law is.
               | Opinions are great, and needed, but established law
               | is a real thing and should be part of a discussion
               | that at its core is about copyright.
               | 
               | Sorry you feel attacked that I suggested you get
               | educated on the topic at hand when you present
               | hypotheticals ALREADY addressed in TITLE 17.
               | 
               | Is your position not that it is OK to steal people's
               | intellectual property, as currently defined by law,
               | as long as you run it through a couple of
               | algorithms? I apologize if I misunderstood the
               | position you publicly staked out.
        
           | burkaman wrote:
           | > A program doing it is not infringement, just much faster
           | iteration.
           | 
           | "Much faster" is absolutely relevant, morally and legally.
           | Visiting a website a bunch of times is not illegal,
           | programmatically DDoSing it is. Having a private conversation
           | with someone and writing down what they said afterwards is
           | not illegal, but recording the conversation and perfectly
           | reproducing it without their permission often is. Shouting at
           | someone in public is generally ok, having a drone follow them
           | around anytime they're in public playing a recording of
           | whatever you shouted is probably not ok.
           | 
           | Computers are not people. Just because it's ok for a person
           | to do something, doesn't mean it's ok to have a computer do
           | the same thing a billion times per second.
        
             | dzink wrote:
             | DDoSing is bad not because of the speed but because
             | it overwhelms the infrastructure a product is
             | designed for. Doing it to your own computer, by
             | making it crunch AI models until it runs out of
             | memory, is perfectly legal and iterative. Printing
             | pages of a book in seconds vs dedicating the lives of
             | people to hand-drawing each letter in the monastery
             | is iteration.
             | 
             | Computers are not people. Computers are iteration
             | tools people use to free up the precious lifetime
             | they have and bring more value to the world. If you
             | are a human who trains 10,000 hours to invest like
             | Paul Graham, or draw like Thomas Kinkade, or play the
             | piano, or operate as a top brain surgeon, you have
             | spent a fraction of your life so you can do this fast
             | and reap the rewards. But that fraction of your life
             | has tremendous cost to society. Many people paid with
             | their time and money to feed you, teach you, and
             | house you, during that time and during your
             | upbringing, which allowed you to have those 10k hours
             | to dedicate to the task. Now all of that work can be
             | used by you to do exponentially more with your
             | precious life. Instead of spending days or years
             | making a portrait, you'd spend seconds. Now you can
             | find higher purpose and solve much bigger problems -
             | instead of asking for 100 to hand-draw a portrait for
             | a few hundred people in your lifetime, you could
             | create one for every teen who needs a boost in their
             | self-esteem, raising their confidence and ability to
             | cope with challenges at massive scale.
             | 
             | More importantly, there is a huge scarcity of people
             | trained to fulfill each niche need, which forms a
             | bottleneck on society's capacity. Imagine if instead
             | of airplanes we counted on a few trained supermen to
             | carry people who needed to cross distances fast. How
             | many people would die before they see the world or
             | are taken to a doctor? The world can't survive on
             | superheroes or super-trained people. The world can do
             | more with the time and lives of the people in it.
        
               | burkaman wrote:
               | I disagree but this is a reasonable perspective. One
               | specific point:
               | 
               | > instead of asking for 100 to hand draw a portrait for a
               | few hundred people in your lifetime, you could create one
               | for every teen who needs a boost in their self esteem and
               | raise their confidence and ability to cope with
               | challenges in their life at massive scale.
               | 
               | This is a misunderstanding that reminds me of those
               | startups that were like "we realized people love
               | getting hand-written cards, so we built a product
               | to learn your handwriting and generate them for
               | you!" No, the effort is the point. For people who
               | like those cards it's not about the aesthetics of
               | handwritten text, it's about knowing somebody
               | dedicated some of their limited time to you
               | personally. A depressed person is not going to be
               | cheered up by an auto-generated portrait, even if
               | it's indistinguishable from one a human artist
               | spent 12 hours on. You can't "scale" human
               | connection like this, unless you hide the fact that
               | robots are involved.
               | 
               | I'm not saying all technology is bad. I think robo-
               | surgeons would be great if they can save more lives, even
               | if they put human surgeons out of work. In this
               | particular domain, right now it seems like these tools
               | have the potential to discourage future generations of
               | artists, which would be self-defeating because the models
               | are not AI and will stagnate without additional training
               | data. I don't think they should be banned, but I think we
               | should take human artists' concerns seriously, not co-opt
               | someone's artistic identity if they ask us not to, and
               | try to make sure we think about the unintended
               | consequences of a powerful new tool.
        
               | dzink wrote:
               | People who love to walk will continue to walk even
               | when there are bikes, cars, airplanes, self-driving
               | tech, teleportation, etc., available to them. AI
               | art does not discourage artists any more than
               | restaurants discourage home cooks who enjoy
               | cooking. In all of those scenarios the tech caters
               | to people who need a task done, not to those who
               | want to spend their life on the craft.
               | 
               | There will always be unintended consequences and some
               | will be severe. But what is happening now is pent up
               | demand that finally found an outlet - like a bunch of
               | high pressure mountain water that found a hole through a
               | cave wall and into the ocean - it is gushing.
               | 
               | Instead of blind fear of change, I try to see the value
               | previously unseen and it is tremendous. As the creator in
               | the article said she does not see her true art in the
               | stable diffusion creations: the eyes that speak to
               | character in each character, the poses that show
               | confidence or query, or passion. Instead she sees images
               | that mimic her style of drawing.
               | 
               | I do see how someone may choose not to become an
               | artist for a living because AI art becomes so
               | ubiquitous that they could never make a living with
               | a paintbrush. BUT, with 80-90% of desired art
               | generated by AI, there will now be huge demand for
               | skilled artists who can take a generated image the
               | rest of the way to the desired result. I trust that
               | human ambition and taste always expand past
               | superhuman capabilities. The artists of tomorrow
               | will have much different brushes than those of
               | yesterday, and will be far better and more
               | productive than those of yesterday. There will
               | likely be 3D art in the real world and universes to
               | explore in the virtual one. I'm more concerned that
               | we are running out of space to store our
               | contraptions. Data storage manufacturers will be
               | thriving.
        
         | avereveard wrote:
         | You can get Copilot to regurgitate copyrighted code
         | verbatim, but I haven't seen Stable Diffusion recreating
         | copyrighted works yet, which is quite an important
         | difference.
        
           | ROTMetro wrote:
           | Wasn't it regularly spitting out whole watermarks?
        
             | avereveard wrote:
             | Was the image underneath watermarked, or did it just
             | reproduce the watermark style over an unrelated image?
        
         | kleer001 wrote:
         | At the moment there's no legal protection for style in and
         | of itself. Additionally, there may be (and should be) if
         | this style-capture actually displaces the artists it's
         | aping. But I don't see that happening. IMHO it's a tempest
         | in a teacup.
         | 
         | Why? Because AI-generated "art" is a soupy mess, and real
         | live human artists can speak and understand colloquial
         | language, work quickly, and develop new styles based on
         | new direction.
         | 
         | But then again, maybe we're looking at the death of a
         | widespread industry, like when gigantic industrial looms
         | came on the scene - but I highly doubt it.
         | 
         | Then again, last of all, I do see a future where AIs
         | generate full feature-length photo-real movies in minutes,
         | based on prompts, and cheaply.
        
       | asciimov wrote:
       | I think it's interesting that for the past 50+ years we have
       | been having an ongoing philosophical debate about the ethics
       | of genetic manipulation. Warning stories abound in sci-fi,
       | and though we have the tools, we exercise extreme restraint
       | in that research.
       | 
       | Yet we haven't had such discussions in computer science
       | about the use of AI. While other fields wrestle with the
       | ethics of what they do, we have had no such discussions.
       | (Due to the nature of our education, we are blind to the
       | demands of ethics.) We have made the tools so easy to use,
       | and so accessible, that even if we should discuss the
       | implications of said tools, the cat is already out of the
       | bag.
       | 
       | Lately, I've been thinking about the poem "First they
       | came..." by Martin Niemoller and wondering if now is the
       | time to heed its warning.
       | 
       |     First they came for the socialists, and I did not
       |     speak out--
       |         Because I was not a socialist.
       |     Then they came for the trade unionists, and I did not
       |     speak out--
       |         Because I was not a trade unionist.
       |     Then they came for the Jews, and I did not speak out--
       |         Because I was not a Jew.
       |     Then they came for me--and there was no one left to
       |     speak for me.
        
         | izzydata wrote:
         | There are a lot of science fiction stories warning about AI.
         | Although, they are about actual AI and not machine learning.
         | Personally I think it is actually literally impossible to make
         | AI built from conventional computer hardware. Anything you
         | could ever do as a computer scientist can only ever result in
         | more sophisticated, but not sentient algorithms.
        
           | asciimov wrote:
           | True, we do have warnings about human-level AI... but
           | we never really cover the "machine learning is gonna
           | take all y'all's jobs" scenario.
        
         | m3047 wrote:
         | I've been musing for several years that there doesn't seem to
         | be an organized "philosophy of computer science".
        
           | asciimov wrote:
           | I agree, we need some kind of organized thought on
           | computing. Heck, just a straight-up ethics course would
           | be a huge improvement. I didn't realize until after I
           | graduated that most of the hard sciences require
           | undergrad ethics, and if you do any graduate work you
           | take more ethics courses.
        
       | savant_penguin wrote:
       | Maybe I'm oversimplifying but the question is: "does an artist
       | _own_ a style?"
       | 
       | To me the answer is no.
        
         | theptip wrote:
         | I agree with your answer, but not that your question is the
         | question. Some that I think are closer:
         | 
         | Does an artist own their name? Yes. I can't publish a work of
         | art and say "authored by savant_penguin".
         | 
         | Can I sell art with the product name "in the style of
         | savant_penguin"? This varies by jurisdiction; it's not
         | going to be legal in the EU, I think. It might well be in
         | the US.
         | 
         | Edit: fleshed out more thoroughly by someone else already:
         | https://news.ycombinator.com/item?id=33423857
        
       ___________________________________________________________________
       (page generated 2022-11-01 23:00 UTC)