[HN Gopher] Adobe Firefly: AI Art Generator
___________________________________________________________________
Adobe Firefly: AI Art Generator
Author : adrian_mrd
Score : 612 points
Date : 2023-03-21 13:55 UTC (9 hours ago)
(HTM) web link (www.adobe.com)
(TXT) w3m dump (www.adobe.com)
| NaN1352 wrote:
| What's interesting to me is how this only works because of prior
| art.
|
| BUT, when the prior art is itself AI-assisted, if not 99%
| AI-generated, drawn from a pool of prior human art that is ever
| so slowly diminishing... where is this going?
|
| For one, "art" can only lessen in value. Perhaps physical art
| will grow in value as digital art's "made by without AI" tag
| becomes unprovable and meaningless.
|
| I think it's bad. Whoever provides these tools is not refilling
| the pool of prior human art, only muddying it up. Therefore
| everything will converge. It was already quite obvious in the
| way e.g. most webapps nowadays have the same boring design...
| but this is worse.
|
| But I don't know; maybe it is the inevitable evolution. Perhaps
| this is how we will end our differences... as humanity's
| "collective mind" becomes more and more evident.
| Thorentis wrote:
| As an accelerationist, I can see an upside. Human culture has
| been in decline for decades, with mainstream art (of all kinds)
| rapidly declining in creativity and value. There are always
| exceptions, but I think this was and is the trend.
|
| This AI trend will turn our attention back to what it truly
| means to be an artist. From the muddy waters of AI art will
| shine the true works of art that only humans are capable of
| producing. This will raise the barrier to entry and increase
| true art's value. This, imo, will be a good thing.
| joe_the_user wrote:
| This could be a problem, but it seems like Adobe and other
| stock image owners may be in a better position to deal with it
| than companies scraping things from the open net.
|
| Lots of arts and crafts are kept alive because they form the
| basis for more automated processes, and this may well continue
| with simple painting and photography.
| faizshah wrote:
| At this point someone is going to make a startup off just
| managing your AI waitlists. (Kidding)
| turnsout wrote:
| So tempted to throw GPT4 at this problem and launch it today.
| petargyurov wrote:
| Will there be a waitlist?
| turnsout wrote:
| Of course, that would be at least 50% of the joke! haha
| neoromantique wrote:
| AI Waitlist Management Solutions (AWMS) is a startup that
| aims to streamline and manage the ever-increasing demand
| for AI services by providing a one-stop platform for
| tracking and managing AI waitlists. Leveraging the
| advanced capabilities of GPT-4, our service will analyze
| the market, monitor AI waitlist positions, and provide
| customers with real-time updates on their status.
| Additionally, AWMS will offer recommendations on
| alternative services and provide estimated wait times for
| better decision-making. Our target audience includes
| businesses and individuals who require AI services and
| are looking for a way to efficiently manage their place
| in multiple queues, as well as AI service providers
| seeking to optimize their waitlist management processes.
|
| To further enhance our value proposition, we will
| incorporate a waitlist for our own platform, adding a
| sense of exclusivity and generating buzz around our
| service. This humorous, self-referential twist will serve
| as a unique marketing strategy, setting us apart from
| competitors and attracting potential clients. Our revenue
| model will include a tiered subscription plan, offering
| various features and services at different price points
| to cater to a wide range of customers. With a strong
| focus on customer satisfaction and continuous
| improvement, AWMS will strive to become the go-to
| solution for managing AI waitlists and revolutionize the
| way users access and interact with AI services.
| nathanasmith wrote:
| And they say AI can't be funny.
| yieldcrv wrote:
| There were/are airdrop farmers doing that in the crypto space
|
| Airdrops can be very lucrative (5-6 figures, with market depth
| supported by VCs allowing easy conversion to cash).
| lelandfe wrote:
| From that page's FAQs:
|
| > trained on a dataset of Adobe Stock, along with openly licensed
| work and public domain content where copyright has expired
|
| > We do not train on any Creative Cloud subscribers' personal
| content. For Adobe Stock contributors, the content is part of the
| Firefly training dataset, in accordance with Stock Contributor
| license agreements. The first model did not train on Behance.
|
| Not sure what "first model" means there.
|
| Also interesting:
| https://helpx.adobe.com/stock/contributor/help/firefly-faq-f...
|
| > During the beta phase of Adobe Firefly, any Adobe Firefly
| generated assets cannot be used for commercial purposes.
|
| > _Can I opt [my Adobe Stock content] out of the dataset training?_
|
| > No, there is no option to opt-out of data set training for
| content submitted to Stock. However, Adobe is continuing to
| explore the possibility of an opt-out.
| spookie wrote:
| I strongly suggest everyone read this:
| https://techcrunch.com/2023/01/06/is-adobe-using-your-photos...
|
| I think it's fair to say that they do train on your work.
| mesh wrote:
| We don't. More info here:
|
| >The insights obtained through content analysis will not be
| used to re-create your content or lead to identifying any
| personal information.
|
| https://helpx.adobe.com/manage-account/using/machine-
| learnin...
| spookie wrote:
| Thanks for the response. This and the proposed compensation
| for stock contributions demonstrate that you are taking the
| right path.
|
| I hope you continue doing so. I'm nothing but disappointed
| in others' approaches in this area, and it paints a very
| bad image of the potential of AI as a tool.
| theFletch wrote:
| > trained on a dataset of Adobe Stock, along with openly
| licensed work and public domain content where copyright has
| expired
|
| As someone who has contributed stock to Adobe Stock, I'm not
| sure how I feel about this. I'm sure they have language in
| their TOS that covers this, but I'm guessing all contributors
| will see nothing out of this. Fine if this is free forever, but
| this is Adobe.
| judge2020 wrote:
| Still on the fence about whether or not you should be able to opt
| out of training (I'm sure many artists would love to "opt out"
| of humans looking at their art if the human intends to, or
| might, copy the artists' style at some point).
| egypturnash wrote:
| hi, I'm an artist, I do not give a shit about other humans
| looking at my work, I am _delighted_ when a younger pro comes
| to me and thanks me for what they learnt from my work. That
| tells me they were fascinated enough with it to _look_ at it
| and _analyze_ it again and again. I made a connection with
| them via my drawing skills.
|
| I am catastrophically unhappy at the prospect of a
| corporation ingesting a copy of my work and stuffing it into
| a for-profit machine without my permission. If my work ends
| up significantly influencing a generated image you love,
| _nobody will ever know_. You will never become a fan of my
| work through this. You will never contribute to my Patreon.
| You will never run into me at a convention and tell me how
| influential my work was to something in your life. Instead,
| the corporation will get another few pennies, and that is
| all.
| teaearlgraycold wrote:
| Is there a license that exists that you could put on your
| work to prevent its use in model training?
| egypturnash wrote:
| Not as far as I know. There needs to be one, and
| internet-scrapers need to be able to be sued for
| ludicrous amounts of money if they violate it, IMHO.
| Training AI models feels way outside the scope of what I
| think "fair use" should cover.
| judge2020 wrote:
| They do not currently operate on the basis of fair use.
| Operating as a human, looking at images and learning how to
| draw or paint, is not "fair use"; it's a right given to you by
| either God or Mother Nature. So the legal basis for neural nets
| learning from other art is that they learn like a human: they
| come to know what art humans think is good, and they optimize
| their output to mimic, if not borrow, the same qualities while
| still making something new.
| egypturnash wrote:
| As far as I know there are no religions or legal systems
| that posit that there are _any_ rights inherently given
| to machines.
| astrange wrote:
| Non-commercial internet scraping for model creation is
| explicitly legal in the EU; the result of a model trained
| on a billion images really has nothing to do with anyone
| in particular's art. Although the model would likely work
| pretty well without ever seeing any "art" images.
| judge2020 wrote:
| If, as I alluded to, you and the SCOTUS (and other
| courts) interpret AI training as analogous enough to a
| human looking at art and learning how to create good art
| (or even copy another artist's style), then the license
| you apply to your art does not matter: the model is just
| "learning" how art works, not making any actual use of
| the original work. In that case the AI would be treated
| like a human for the purposes of copyright infringement.
| It would infringe only if it recited or recreated a
| single work from memory without substantial changes
| turning it into either a parody (fair use) or its own
| work separate from the images it learned from, even if
| it mimics the style of a single artist (since artists
| can't copyright their styles).
| BornInChicago wrote:
| I'm an artist as well. I think this can happen whenever
| anyone sees your art anywhere online. They can copy it.
| They won't tell you about it. They might copy it really
| well. And they might copy not just your technical style,
| but what your art says and how it says it.
| madeofpalk wrote:
| Should Github Copilot be trained on private, closed-source,
| proprietary code?
| grondo4 wrote:
| Yes, AI should be trained on every piece of information
| possible. Am I allowed to become a better programmer by
| looking at private, (illegally leaked) closed-source,
| proprietary code?
| mrelectric wrote:
| You're obviously not
| grondo4 wrote:
| Is that a joke?
|
| Yes you are allowed to read closed-source, proprietary
| code and become a better programmer for it.
|
| I've decompiled games to learn how they structure their
| code to improve the structure of games that I program. I
| had no right to that code and I used it to become a
| better programmer, just like AIs do.
|
| That's not copyright infringement. You have a right to
| stop me from using your code, not learning from it.
| Dalewyn wrote:
| Now, granted, most EULAs and Terms of Service documents
| aren't legally enforceable, but most software licenses
| explicitly prohibit decompiling or otherwise
| disassembling binaries.
|
| So, yes: They have a right to stop you from "learning"
| from their code. If you want that right, see if they're
| willing to sell that right to you.
| grondo4 wrote:
| > They have a right to stop you from "learning" from
| their code.
|
| They absolutely do not, and as pedantic as it may be I
| think it's very important that you and everyone else in
| this thread know what your rights are.
|
| If you sign a contract / EULA that says you cannot
| decompile someone's code, then yes, you are liable for any
| damages promised in that contract for violating it.
|
| But who says that I ever signed a EULA for the games I
| decompiled? Who says I didn't find a copy on a hard drive
| I bought at a yard sale or someone sent me the decompiled
| binary themselves?
|
| Those people may have violated the contract but I did
| not.
|
| There is no law preventing you from learning from code,
| art, film or any other copyrighted media. Nor is there
| any law (or should there be any law IMO) that stops an AI
| from learning from copyrighted media.
|
| Learning from each other regardless of intellectual
| property law is how the human race advances itself. The
| fact that we've managed to automate that engine of human
| progress is incredible, and it's very good that our laws
| are written in a way that allows it to happen.
| Alchemista wrote:
| This is a pretty extreme stance. There is a fine line
| between "learning from" proprietary code and outright
| stealing some of the key insights and IP. Sometimes it
| takes a very difficult conceptual leap to solve some of
| the more difficult computer science and math problems.
| "Learning" (aka stealing) someone's solution is very
| problematic and will get you sued if you are not careful.
| skeaker wrote:
| If you think that's extreme, wait until you hear my
| stance that code shouldn't be something that you can own
| (and can therefore "steal") to begin with.
| [deleted]
| madeofpalk wrote:
| No https://en.wikipedia.org/wiki/Clean_room_design
| grondo4 wrote:
| I didn't ask if I can use other people's proprietary
| closed source code, obviously they have the right to that
| code and how it's used.
|
| I asked if I can learn from that code, which obviously I
| can. There is no license that says "You cannot learn from
| this code and take the things you learn to become a
| better programmer".
|
| That's exactly what I do and it's exactly what AI do.
| ghaff wrote:
| If you study a closed source compiler (or whatever) in
| order to write a competitive product, and the company who
| wrote the original product sues you for copying it, as
| the parent suggests, you're on shaky legal ground. Which
| is why clean room design is a thing.
| nickelpro wrote:
| A clean room design ensures the new code is 100%
| original, and not a copy of the base code. That is why it
| is legally preferable: it is easy to prove certain facts
| in court.
|
| But fundamentally the problem is copyright, the copying
| of existing IP, not knowledge. grondo4 is completely
| correct that there is no legal framework that prevents
| _learning_ from closed-source IP.
|
| If such a framework existed, clean room design would not
| work. The initial spec-writers in a clean room design are
| reading the protected work.
| ghaff wrote:
| >The initial spec-writers in a clean room design are
| reading the closed-source work.
|
| Right. And they're only exposing elements presumably not
| covered by copyright to the developers writing the code.
| (Of course, this assumes they had legitimate access to
| the code in the first place.)
|
| Clean room design isn't a requirement in the case of,
| say, writing a BIOS which may have been when this first
| came up. But it's a lot easier to defend against a
| copyright claim when it's documented that the people who
| wrote the code never saw the original.
|
| Unlike with patents, independent creation isn't a
| copyright violation.
| nickelpro wrote:
| I don't understand what your point here is. The initial
| spec-writers learned from the original code. This is not
| illegal, we seem to be agreed on this point. grondo made
| the point that learning from code should not be
| prohibited.
|
| What are you contesting?
| ghaff wrote:
| My point was that, assuming access to the code was legit,
| and the information being passed from the spec-writers to
| the developers wasn't covered by copyright (basically
| APIs and the like), it's a much better defense against a
| copyright claim that any code written by the developers
| isn't a copyright violation given they _never saw_ the
| original code.
| bioemerl wrote:
| I think you're missing the one big flaw here. How exactly
| do you have access to closed source code?
|
| Did you acquire it illegally? That's illegal.
|
| Was it publicly available? That's fine, so long as you
| aren't producing exact copies and violating normal
| copyright law.
| supermatt wrote:
| > I asked if I can learn from that code, which obviously
| I can.
|
| Did you actually read the link you were given? Clean room
| design exists because you may inadvertently plagiarize
| copyrighted work from your memory of having read it.
|
| i.e. the act of reading may cause accidental infringement
| when implementing the "things you learn"
| grondo4 wrote:
| > i.e. the act of reading may cause accidental
| infringement when implementing the "things you learn"
|
| Surely you know this isn't the case right? Maybe you're
| confused because we're talking about programming and not
| a different creative artform?
|
| Great artists read, watch and consume copyrighted works
| of art all day, if they didn't they wouldn't be great
| artists. And yet the content they produce is entirely
| their own, free from the copyright of the works they
| learned from.
|
| What's the difference then in programming? Why can an
| artist be trusted not to reproduce the copyrighted works
| that they learned from but not the programmer?
| supermatt wrote:
| > Why can an artist be trusted not to reproduce the
| copyrighted works that they learned from but not the
| programmer?
|
| They can't, which is why the quote "Good artists copy,
| great artists steal" exists.
|
| AI has already been shown to be "accidentally"
| reproducing copyrighted work. You too, can do the same.
|
| It's likely no one (including yourself) will ever be aware
| of it, but strictly speaking it would still be copyright
| infringement. This is the relevance and context of the
| link you were given.
| nickelpro wrote:
| If everyone is infringing copyright, no one is infringing
| copyright. This is a dead-end thought.
| waboremo wrote:
| Artists get into trouble all the time for producing works
| very close to something that already exists. That's like
| the number one reason artists get shunned in the
| communities they were in.
| nickelpro wrote:
| Every filmmaker watches movies
|
| Every author reads books
|
| Every painter views paintings
|
| Unless you're arguing that every single artist across
| every field of artistic expression is constantly being
| jeopardized by claims of copyright infringement, this is
| a nonsensical point to make.
| waboremo wrote:
| But they're not creating similar works, unlike AI which
| IS. Why is this so complicated for you?
| BornInChicago wrote:
| I would seriously question if this happens all the time,
| these days. The whole copyright thing is way behind the
| digital and internet revolution. Look at what the Prince
| case did for transformative fair use in copyright.
| astrange wrote:
| The process of online artists shaming each other doesn't
| really have anything to do with the legal system, though
| they all act like it does.
| nickelpro wrote:
| Sure but the infringement is the problem, not the ideas
| themselves.
|
| You're describing thought crime right now. It's not
| illegal to learn things.
| supermatt wrote:
| And if you "learn" something and accidentally rewrite it
| verbatim? That's what clean-room design protects
| against.
| nickelpro wrote:
| Rewriting the code verbatim and distributing it would be
| a copyright infringement, yes; you do not have a right to
| distribute code written by other people.
|
| That's completely different from reading and learning
| from code, which is what grondo described.
|
| Clean room design _relies_ on this, in a clean room
| design you have one party read and describe the protected
| work, and another party implement it. That first party
| reading the protected work _is learning from closed-
| source IP_.
| supermatt wrote:
| > That's completely different from reading and learning
| from code, which is what grondo described.
|
| AI (e.g. copilot) has already been shown to break
| copyright of material in its training set. That's the
| context of this whole thread.
| nickelpro wrote:
| Perhaps, but not of Grondo's point.
|
| If an AI infringes on copyright then it infringes on
| copyright, that's unfortunate for the distributors of
| that code.
|
| Humans accidentally infringe on copyright sometimes too.
| It's not a unique problem to machine learning. The
| potential to infringe on copyright has not made
| observing/learning/watching/reading copyright materials
| prohibited for humans, nor should it or (likely) will it
| become prohibited for machine learning algorithms.
| supermatt wrote:
| > Perhaps, but not of Grondo's point.
|
| Grondo said that AI should be given access to all code,
| including private and unlicensed code.
|
| He was given a link to Clean Room Design demonstrating
| the problem with the same entity (the AI) reading and
| learning from the existing code and the risk of
| regurgitation when writing new code.
|
| He goes on to say that's what he does, which doesn't
| change that fact.
|
| > Humans accidentally infringe on copyright sometimes
| too.
|
| Indeed we do, and it's almost entirely unnoticed, even by
| the author.
|
| > nor should it or (likely) will it become illegal for
| machine learning algorithms.
|
| If those machine learning algorithms are taking in
| unlicensed material and then they later output unlicensed
| and/or copyrighted material, then they are a liability.
| Why would you want that when you can train them otherwise
| and be sure they NEVER infringe on others' IP? It's a
| no-brainer, surely. Or are you assuming there is some
| magic inherent in other people's private code?
| nickelpro wrote:
| > If those machine learning algorithms are taking in
| unlicensed material and then they later output unlicensed
| and/or copyrighted material, then they are a liability.
| Why would you want that when you can train it otherwise
| and be sure it NEVER infringes others IP?
|
| Because it could produce a better model that produces
| better code.
|
| You're now arguing a heavily reduced point. That a model
| that trained on proprietary code is _at higher risk_ of
| reproducing infringing code is not a point under
| contention. The clean room serves the same purpose, it is
| a risk mitigation strategy.
|
| Risk mitigation is a choice, left up to individuals.
| Maybe you use a clean room design, maybe you don't. Maybe
| you use a model trained on closed-source IP, maybe you
| don't. There are risks associated with these choices, but
| that is up to individuals to make.
|
| The choice to observe closed source IP and learn from it
| shouldn't be prohibited just because some won't want to
| assume that risk.
| ClumsyPilot wrote:
| > Am I allowed to become a better programmer by looking
| at private code?
|
| Your argument is based on the idea that you and AI should
| have the same rights?
|
| I do not see how this works unless AI is going to be
| entitled to minimum wage and paid leave?
|
| Otherwise it is just a money grab
| sebzim4500 wrote:
| He's not saying that he and the AI have the same rights,
| rather that he and the person running the AI have the
| same rights.
| omoikane wrote:
| One motivation for artists to create and share new work
| is the expectation that most people won't just outright
| copy their work, based on the social norm that stealing
| is dishonorable. This social norm comes with some level
| of legal protection, but it largely depends on a common
| expectation of what is considered stealing or not.
|
| Once we have adopted the attitude that we can just copy
| as we please without attribution, it would be much more
| difficult to find motivated artists, and we would have
| failed as a society.
| spoiler wrote:
| It's not quite the same... and I'm not sure how people on HN
| of all places are failing to grasp that these algorithms
| aren't sentient, much less people.
|
| I think this is incredibly cool technology, but using other
| people's property without their consent is stealing (I'm not
| talking about legality, but morality here).
|
| The second reason why it's not the same is that people can't
| look at X million pictures and become proficient in
| _thousands_ of different art styles. So, again, it's not
| about legality but more about ethics.
|
| I guess different people have lower moral standards than
| others, and that's always been part of the human condition.
|
| With all that out of the way, I think artists won't get
| replaced, because these tools don't really produce
| anything... substantial on their own. An artist still needs
| to compose them to tell a story. So, all this nonsense about
| how it will replace artists is misguided. It can only replace
| some parts of an artist's workflow.
|
| I know there was an art competition where someone won with a
| piece that was AI-aided, but honestly it looked like colour
| sludge. The only thing that was really well executed in it
| was the drama created by the contrast from sharp changes in
| values near the centre of that work, and something vaguely
| resembling a humanoid silhouette against it. You could've
| called it abstract art if you squinted.
| jonahrd wrote:
| But these stock image artists provided consent when signing
| a contract and selling their work to Adobe. The contract is
| pretty clear that you basically don't own the work anymore
| and Adobe can do whatever they want with it.
|
| If you don't like it, don't sign the contract.
| spoiler wrote:
| Oh right, sorry. I was talking generally, not
| specifically to Firefly.
|
| Yeah, I think Adobe is a publisher and as such, you give
| it distribution rights. So, I agree with you on this
| case.
|
| Slightly tangential, but imagine a singer or actor's
| voice or face being used without their consent just
| because the publisher has rights to distribute their
| performance. That probably wouldn't fly very well, and I
| assume this doesn't fly with some artists either (even
| though they signed a contract).
|
| I assume publishers will probably have an AI consent form
| soon.
|
| It's all very exciting, and I hope we don't ruin it with
| greed and disregard for the works of the very people that
| made these technologies so successful. Like, if it
| weren't for the scraped works, the AI feats would've been
| both much more underwhelming and much more expensive
| to train.
| ryanjshaw wrote:
| I'm curious, do you hold the same beliefs about text?
|
| Do you think ChatGPT should not be allowed to read books
| and join ideas across them without paying the original
| authors for their contribution to the thought?
| spoiler wrote:
| I do! If they aren't in some way public domain, then the
| authors should have a say, or the work should at least
| be purchased.
|
| I have a bit of cognitive dissonance on the subject of
| blog posts or articles in general, since those are kinda
| public domain? But I still think it should be opt in/out-
| able.
|
| I realise I'm also a bit of a hypocrite since I've
| enjoyed playing with these AI tools myself, and I realise
| they'd be nowhere as cool if they didn't have access to
| such large datasets.
| lelandfe wrote:
| IANAL: Authorship is protected in the US by default
| https://www.copyright.gov/engage/writers
|
| In order for blog posts (or other written works) to be in
| the public domain, authors must explicitly waive those
| rights. But, not that it needs saying, copyright's
| applicability in training data is basically the entire
| subject of debate right now.
| https://creativecommons.org/2023/02/17/fair-use-training-
| gen...
| spoiler wrote:
| Ah, I had no idea that was protected too! That's good. I
| think the reason I was morally on the fence was that
| people already put blog posts out with the intent of
| sharing their knowledge with the rest of the Internet...
|
| So my assumption was that anything trained on it will
| just help further expand that knowledge.
|
| Although I do realise now as I'm typing this--AI could
| diminish their audience, clout and motivation, which
| isn't what I'd want.
| dahwolf wrote:
| "I guess different people have lower moral standards than
| others, and that's always been part of the human
| condition."
|
| Instead of lower morality, I'd say it's selective morality.
|
| I bet quite a few artists (rightfully) feeling threatened
| by this phenomenon would have absolutely no problem
| watching a pirated movie, using an ad blocker, reading
| paywall-stripped articles, and the like... whilst this is
| principally the same thing: taking the work of others
| without consent or compensation.
| throwthrowuknow wrote:
| > I'm sure many artists would love to "opt out" of humans
| looking at their art if the human intends to, or might, copy
| the artists' style at some point
|
| I'm pretty sure that would be a death knell for art. Where
| are these mythical artists who have never looked at anyone
| else's art?
| omoikane wrote:
| It's the same problem with fake copies of Van Gogh and so
| forth, except historically those fakes were produced at a
| much slower rate because of the time needed to master the
| skills to produce those fakes. With modern tools, those
| fakes could be mass produced, while the original artists
| are still alive.
| judge2020 wrote:
| > It's the same problem with fake copies of Van Gogh and
| so forth, except historically those fakes were produced
| at a much slower rate because of the time needed to
| master the skills to produce those fakes. With modern
| tools, those fakes could be mass produced, while the
| original artists are still alive.
|
| Those people got in trouble for recreating specific
| works, or creating new works in his style and defrauding
| people by saying they were originals. Safe to say that
| not disclosing "this is not actually a work created by
| <artist>, just in their style" would be grounds for
| fraud, especially if you were to sell it.
| zirgs wrote:
| I can train a LORA on my own PC in less than an hour. Good
| luck opting out of that.
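|
| For the curious, here is roughly what that looks like: a
| minimal, untested sketch using the diffusers + peft Python
| libraries. The attention module names, hyperparameters, and
| the dataloader are my assumptions, not a verified recipe.
|
|   import torch
|   from diffusers import StableDiffusionPipeline
|   from peft import LoraConfig, get_peft_model
|
|   pipe = StableDiffusionPipeline.from_pretrained(
|       "runwayml/stable-diffusion-v1-5")
|   cfg = LoraConfig(r=8, lora_alpha=8,
|                    target_modules=["to_q", "to_k",
|                                    "to_v", "to_out.0"])
|   # Wrap the UNet so only the small LoRA matrices train.
|   unet = get_peft_model(pipe.unet, cfg)
|
|   opt = torch.optim.AdamW(
|       (p for p in unet.parameters() if p.requires_grad),
|       lr=1e-4)
|   # dataloader is hypothetical: VAE-encoded images plus
|   # text-encoder caption embeddings for a few dozen images.
|   for latents, text_emb in dataloader:
|       noise = torch.randn_like(latents)
|       t = torch.randint(0, 1000, (latents.shape[0],),
|                         device=latents.device)
|       noisy = pipe.scheduler.add_noise(latents, noise, t)
|       pred = unet(noisy, t,
|                   encoder_hidden_states=text_emb).sample
|       # Standard noise-prediction (epsilon) loss.
|       loss = torch.nn.functional.mse_loss(pred, noise)
|       loss.backward(); opt.step(); opt.zero_grad()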
| waboremo wrote:
| What does that matter? Generate as much as you want for
| your own personal reasons.
|
| It's about actually being able to use that content legally
| (and commercially) that matters to most in this
| conversation.
| zirgs wrote:
| AI training is a one-way operation - you can't
| reconstruct the dataset from a model/lora/ti. Unless it's
| something really blatant like real people, widely
| recognised copyrighted characters like Batman or Iron Man
| - it's going to be hard to prove that someone used your
| art to train an AI model. I'm not required to publish my
| model or the datasets that I used anywhere.
| madeofpalk wrote:
| I can trivially torrent movies at home also. But then going
| out and selling them is widely accepted as being "wrong".
| smrtinsert wrote:
| I would not be surprised if behind the scenes they are starting
| up the lobbying engine to safely mine whatever they want. The
| universe of existing content is simply too enticing. This is
| Google Search vs. authors all over again.
| tonmoy wrote:
| From Adobe's reddit post[1]:
|
| > We are developing a compensation model for Stock
| contributors. More info on that when we release.
|
| If they can properly compensate the stock contributors based on
| usage then I think this is a very fair approach.
|
| [1]
| https://www.reddit.com/r/photoshop/comments/11xgft4/discussi...
| theFletch wrote:
| I didn't see this before I posted, but I'm glad that's the
| case. In fact, it might be great for contributors that don't
| have a large library or aren't ranked as well.
| kitsunesoba wrote:
| It's also worth considering that there are quite a number of
| fraudulent images on Adobe Stock, which means that the Firefly
| dataset without a doubt contains some amount of unlicensed
| material.
| rchaud wrote:
| LLM-based AI is tech's equivalent of mortgage-backed
| securities. Lump in the bad stuff with the legitimate, hope
| no one notices, and when they do, blame the inherent black-
| box nature of the product.
| [deleted]
| villgax wrote:
| Wow, zero mention of any competition. These guys will get
| decimated once proper local tooling for editing exists.
| daveslash wrote:
| When I look at older magazines, photos, billboards, and
| advertisements -- today's world of media is so much more colorful
| and vibrant than it was in the 60s & 70s. E.g. This [0] vs. this
| [1].
|
| With the race to the bottom for generating high-quality,
| scalable, rich and colorful illustrations at almost no cost in
| massive quantities, I'm envisioning the world is about to get
| even more colorful and vibrant.
|
| [0]
| https://cdn.shopify.com/s/files/1/0050/4252/files/carros_ant...
|
| [1]
| https://3.bp.blogspot.com/-MzMrYtY_Ooo/US_iZSd3noI/AAAAAAABC...
| Bjorkbat wrote:
| I won't argue with you that they have gotten more colorful, but
| I feel like they've also lost a lot of originality as well.
| Advertising in the 60s and 80s seemed more fun and witty, and
| you had commercial artists like Andy Warhol giving ads a wholly
| unique style, despite the limitations of the craft back then.
|
| Nowadays, we have Corporate Memphis
| (https://en.wikipedia.org/wiki/Corporate_Memphis).
|
| Funny really. Despite creative tooling opening up new
| possibilities, I see the end result as a net loss in quality.
| Cheap generally wins out over great.
|
| Ever the optimist, I like to remind myself that in a world
| where good loses out to mediocre every single time, it's
| easier to stand out for being great.
| rchaud wrote:
| Counterpoint: BYTE Magazine artwork (1980s) compared to macabre
| SVG libraries of wavy modular body parts stitched together to
| show human activity.
|
| [0]: https://api.time.com/wp-
| content/uploads/2014/04/bytecover.jp...
|
| [1]:
| https://upload.wikimedia.org/wikipedia/commons/thumb/e/ee/Co...
| timeon wrote:
| It seems to me that at one point people were fed up with the
| vibrant style of the 90s. The reaction was minimalism, and
| later flat design and Corporate Memphis.
|
| Now the pendulum is swinging back.
| varispeed wrote:
| The race to the bottom only applies to wages.
|
| Companies are making profits not seen before.
|
| We had a blip in the history where everyone could participate
| in the economy - from building prototype in a garage to
| becoming relatively wealthy.
|
| Now we have wage slavery and access to markets gatekept by VCs
| and banks.
|
| Not a great future.
| furyofantares wrote:
| Alright, so why does your first example look so much more
| colorful to me?
| daveslash wrote:
| You're right. I think I chose the wrong terms with
| colorful/vibrant. Perhaps better words would have been
| busy/detailed/intricate/textured.
|
| These older advertisements might have as much color, but
| they're pleasing in their simplicity. Even a solid color can
| be bright and beautiful, but it's not nearly as busy as a
| collection of overlapping gradients. I walk back my use of
| the word color in favor of the word busy. Thank you for
| pointing that out.
| mschuster91 wrote:
| > With the race to the bottom for generating high-quality,
| scalable, rich and colorful illustrations at almost no cost in
| massive quantities, I'm envisioning the world is about to get
| even more colorful and vibrant.
|
| Counterpoint: The world may get "more colorful and vibrant",
| but it will also trend toward uniform styles as AI takes _all_
| the commercial high-paying jobs and leaves all but zero
| opportunities for actual artists.
| ghaff wrote:
| I'm not sure it's the commercial high-paying work so much. If you're
| going to do a rebrand, you're still going to hire an agency
| that will fuss with the tiniest details.
|
| At least initially, the impact will probably be more on the
| type of thing that even a junior designer can do in their
| sleep but is a lot harder for someone who isn't a designer to
| do.
| ghaff wrote:
| Even pretty routine graphics and illustrations for books and
| presentations are pretty hard for the average person to do by
| themselves and there's often no budget to have someone else,
| whether internal or external to a company, to do them. Tools
| have improved a lot in the past few decades but it still takes
| a degree of talent to produce even routine polished work.
| m3kw9 wrote:
| This will supercharge artists; it won't replace them, because
| details matter, and when you need to get it just right, you
| cannot just keep rolling the dice by "prompting better".
| ghaff wrote:
| I actually find the artwork/design generative AI a lot more
| interesting than the text. (Don't really have an opinion on the
| code generation.) While it's obviously early days, something
| like Stable Diffusion can, with a bit of work, generate artwork
| that neither I (nor I assume the vast bulk of the population)
| could. On the other hand, the text it generates might pass
| muster for content farms or a starting point for a human editor
| familiar with the topic but certainly isn't producing anything
| I could use out of the box.
| worrycue wrote:
| Not everyone requires "just right" though. There are no wrong
| answers in art.
|
| Although in this case it's an Adobe product, so the only
| people who will use it are artists.
| bulbosaur123 wrote:
| ControlNet has entered the chat.
| petilon wrote:
| Yes, but by "supercharging" artists, each artist will be able
| to do more, which means fewer artists will be needed.
| lmarcos wrote:
| That artists will be able to do more does not necessarily mean
| that fewer of them will be needed. I bet the opposite,
| actually. Companies will want to produce more (with the same
| resources), not the same (with fewer resources).
| int_19h wrote:
| But do the companies actually _need_ more art?
|
| I guess a better way to rephrase that: would it make them
| more profitable to produce more art? Or to keep producing
| the same amount but paying fewer people to do so?
| golergka wrote:
| When something becomes much cheaper than it was before,
| people tend to find many more uses for it. In game
| development, for example, the amount of money that can be
| spent on art almost always limits the amount and quality of
| content; if art becomes 10 times cheaper, a typical indie
| game will have 2 times more distinct pieces of content with
| 5 times more variants of each.
| ModernMech wrote:
| Maybe, maybe not.
|
| For instance, I'd like to make a game. But I don't have
| enough money to even hire an artist to help with concept art.
| So I don't get to the point where I can raise money off that
| art, and hire an artist to make game assets.
|
| Now I can generate all the concept art I want for free, and I
| can raise money off of that (wallets open faster with pretty
| pictures than with words). What am I going to do with it?
| Hire artists! They will probably be better at using AI art
| generators than I am, and they have the skills to actually
| work with the generated results, unlike me.
| [deleted]
| Riverheart wrote:
| Except now you're competing for the attention and
| disposable income of everybody else doing that. How are
| consumers going to tell your stuff apart from all the other
| AI placeholder games that will flood the market?
| ModernMech wrote:
| I guess what I'm trying to say is that in the course of
| developing anything, one goes through various stages of
| development. Depending on the expertise of the
| individual, they will be able to take a project further
| before bringing on more people. If an idea is well-
| trodden, then it's easy to get people on board without
| much convincing. If an idea is brand new and far out
| there, it will take a lot of work to convince people
| before they get on board.
|
| For someone like me, I can do a lot, but not everything.
| I've managed to get my own project to a stage where I had
| felt I would have to bring on more people to advance it
| much further. But I had lacked the funds to do so, and
| it's hard to get people to do things for free when they
| don't believe in it. It's also hard to get people to
| believe in something without seeing it. My project was
| very much in that position.
|
| So that's where tools like stable diffusion and chatgpt
| come in. I'm now suddenly unstuck; I have a cheap tool to
| do work I wasn't capable of before, so I can now take the
| project further than I could have otherwise. Whereas
| before I might have abandoned it, now I can take it
| further and maybe get it to the point where I _can_ hire
| people. The question now is: how many projects are now
| going to take off? Is there funding out there for them?
| Can they hire more artists than are displaced?
| PaulHoule wrote:
| There are interfaces where you can not just "prompt better" but
| change the image with a tool like Photoshop and then feed that
| back into the diffusion model.
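|
| Concretely, with something like the diffusers library, the
| loop is: generate, edit by hand, then re-diffuse the edited
| image at low strength. A rough, untested sketch (file names
| and settings are placeholders, not a recommendation):
|
|   from diffusers import StableDiffusionImg2ImgPipeline
|   from PIL import Image
|
|   pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
|       "runwayml/stable-diffusion-v1-5")
|   # The image you touched up by hand in Photoshop.
|   init = Image.open("edited_in_photoshop.png")
|   init = init.convert("RGB").resize((512, 512))
|   result = pipe(
|       prompt="oil painting of a lighthouse at dusk",
|       image=init,
|       strength=0.4,  # low strength preserves manual edits
|       guidance_scale=7.5).images[0]
|   result.save("refined.png")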
|
| Also there are people who really don't care about quality.
| There have always been different tiers of art: it's one thing
| to have clip art for a throwaway blog post (royalty free
| stock), it's another thing to base a marketing campaign around
| images (rights-managed stock) because you don't want all
| your competitors choosing the same image.
|
| https://cxl.com/blog/stock-photography-vs-real-photos-cant-u...
|
| The bottom feeders will be very happy to accept the problems of
| A.I. art and in fact you might not have the embarrassing
| dogpiling problem that the above blog post describes.
| deelowe wrote:
| I wonder what happens over time as more and more workflows use
| AI generated content as the starting point? Will all images
| slowly start looking the same?
| spoiler wrote:
| Agreed.
|
| I think it will decrease iteration time during an exploration
| phase for new artists or when you're not quite sure what you
| want and want to explore your idea space more quickly (and
| maybe even get new ideas as you iterate).
|
| It's similar with these coding AIs. A lot of the time they're
| great for the "blank page" phase of a project, where you have
| to get all the "boring" stuff out of the way. Another great
| example (I think it was a blog post here) was one where the AI
| recommended a method the author didn't know about that yielded
| a crazy performance boost.
|
| I tried to get some help from copilot on writing a shader the
| other day, and it was really an amazing experience. One still
| needs to have a pretty deep understanding of what needs to be
| done to use these tools.
|
| I imagine it's similar for writers. Maybe they want some help
| to reword a sentence to be more succinct, but the ideas will
| still come from their heads.
|
| I can't predict if this will one day be so sophisticated that
| you can have it do both low level and high level "thinking"
| when exploring ideas, but if that day comes I don't think it
| will mean an end to jobs, just that some jobs will become more
| accessible, or that it won't be so much about nitty-gritty work
| and more about higher-level, idea-level work (which to me
| sounds good, not bad).
|
| Obviously there are people who enjoy the nitty-gritty (as a
| developer, and amateur painter who does enjoy _the process_ ),
| and I don't think that will fully go away, just become more of
| a creative/artisan field maybe. Who knows, though? I may be way
| off, but I don't think it's as bleak as some people fear.
| ghaff wrote:
| As a writer, I find ChatGPT can provide some vaguely useful
| scaffolding. On the other hand, I'm not someone who finds a
| blank page especially difficult to start writing on--and, in
| fact, I generally want to start off with something non-
| formulaic to draw a reader in. Still, I can see it being
| helpful in the way that automated spellchecking and grammar
| checking are.
| curioussavage wrote:
| I'm a generalist who has been trying to kick an obsessive
| habit of perusing tech news and trying new/old tech for 8
| years. You made me realize that may be one reason I feel like
| this tool works for me: I have just enough depth in many
| areas to scrutinize the output, and it's making up for my
| lack of depth everywhere else.
| d0100 wrote:
| I hope this means that the next generation of Manga & Comics et
| al will be daily serializations
|
| No more waiting 30 days for the next episode, hurray
| illwrks wrote:
| Unless of course you were to work in that area. If I did, I
| would be terrified.
| O__________O wrote:
| Anyone able to comment on where, in their opinion, current
| copyright law stands when generative AI output is only a
| subset of an image?
| Spivak wrote:
| It's currently unknown, but copyright law is really political
| in nature, and companies are hopping on AI like crazy and
| delivering real value, so my expectation is that it will be
| granted fair use for no philosophical reason but because US
| legislators don't want to put their boot-heel on American
| business.
|
| Because if they stifle this, it will basically cement China as
| the world's AI powerhouse, which will give zero shits about
| copyright.
|
| This is gonna be interesting times for copyright because this
| is the first time copyrighted works are actually useful in
| building tools. I think it's a very neat real-world example of
| how universities are actually right to make engineers take
| humanities courses, because your code-writing AI is actually
| better for having read Vonnegut.
| nstj wrote:
| > Copyright Registration Guidance: Works Containing Material
| Generated by Artificial Intelligence
|
| > A Rule by the Copyright Office, Library of Congress on
| 03/16/2023
|
| https://www.federalregister.gov/documents/2023/03/16/2023-05...
| O__________O wrote:
| Aware of the ruling; reviewed it when it was released. It
| does not appear to cover aspects such as layout, color
| selection, etc., and to me it targets one-shot generative
| art; poorly so, at that.
|
| As is, landscape photographers, for example, control camera
| angle, timing of the photograph, camera type, lens type, etc.,
| but they rarely create the landscape itself or for that
| matter the equipment and related technologies.
|
| Even "found object" art is covered by copyright:
|
| https://wikipedia.org/wiki/Found_object
|
| At this point, to me, it's unclear whether the author of that
| ruling even understands the technology used to create the
| outputs that were its subject.
| superbatfish wrote:
| There are at least two potential issues pertaining to copyright
| law here, and it's not clear which one you're asking about.
| That's why the responses you're getting here seem to be
| answering different questions.
|
| 1. Are the AI systems violating the copyright protections of
| the images they were trained on? If so, are users of such AI
| systems also in violation of those copyrights when they create
| works derived from those training images?
|
| Answer: That's not yet settled.
|
| 2. If you make an image with an AI system, is your new image
| eligible for copyright protection, or is it ineligible due to
| the "human authorship requirement"?
|
| Answer: The US Copyright Office recently wrote[1] that your
| work is eligible for copyright if you altered it afterwards in
| a meaningful way. Here's a quote:
|
| >When an AI technology determines the expressive elements of
| its output, the generated material is not the product of human
| authorship. As a result, that material is not protected by
| copyright and must be disclaimed in a registration application.
|
| >In other cases, however, a work containing AI-generated
| material will also contain sufficient human authorship to
| support a copyright claim. For example, a human may select or
| arrange AI-generated material in a sufficiently creative way
| that "the resulting work as a whole constitutes an original
| work of authorship." Or an artist may modify material
| originally generated by AI technology to such a degree that the
| modifications meet the standard for copyright protection.
|
| [1]:
| https://www.federalregister.gov/documents/2023/03/16/2023-05...
| O__________O wrote:
| As I am sure you're aware, I already posted a response to the
| US Copyright Office's ruling on authorship here, so I will not
| be repeating myself:
|
| https://news.ycombinator.com/item?id=35247377
|
| Will say that post you linked to also states, "17 U.S.C. 101
| (definition of "compilation"). In the case of a compilation
| including AI-generated material, the computer-generated
| material will not be protected outside of the compilation."
| -- the problem is that, unlike say a compilation of recipes,
| where the individual recipes are not protected but the
| compilation is, there is no such clear delineation within a
| singular work of art. As such, injecting such delineations is
| counterproductive and shows no understanding of the nature and
| spirit of the rule of law. Further, while their opinion
| appears to be that a prompt is somehow a recipe and not a
| novel expression that merits copyright, the output of a recipe
| is commonly photographed, and those photographs are given
| copyright protection.
|
| Sure others have made far more compelling arguments against
| the ruling, but to me, the ruling lacks merit as is.
| danShumway wrote:
| > a prompt is somehow a recipe and not a novel expression
| that merits copyright
|
| People keep bringing up photographs, I think the better
| analogy is commissions. And in fact, the copyright office
| points towards commissions in its explanation of its
| policy.
|
| Under current copyright law, if I work with an artist to
| produce a commission by giving that artist repeated
| prompts, pointing out areas in the image I'd like changed,
| etc... I don't have any claim of copyright on the artist's
| final product unless they sign that copyright over to me.
| My artist "prompts" are not treated as creative input for
| the purpose of copyright.
|
| I would love to hear an argument for why prompting stable
| diffusion should grant copyright over the final image, but
| prompting a human being doesn't grant copyright over the
| final image. Directing an artist is just as much work as
| directing an AI, and in many ways will put you much closer
| to the creative process and will give you more control over
| the final product. You can direct an artist in much more
| specific detail than you can direct stable diffusion. You
| can be a lot more involved in the creative process with a
| human artist. And just like with an AI, if you take that
| artist's final drawing and do your own work on top of it,
| you can still end up with something that's covered by
| copyright.
|
| But despite that, we've never assumed you intrinsically get
| any copyright claim over the artist's final picture that
| they give you.
|
| So the "prompt as a recipe" analogy seems to hold up pretty
| well for both AI generators and human "generators". All of
| the same questions and tests seem to apply to both
| scenarios, which makes me feel like the copyright office's
| conclusion is pretty reasonable: prompt-generated art isn't
| copyrightable, but prompts may be protected in some way,
| and of course additional modifications can still be
| covered.
|
| Yes, there's grey area, but no more grey area than already
| exists in commissioning, and the creative industry has been
| just fine with those grey areas in commissioning for a long
| time; they haven't been that big of a deal.
| ClumsyPilot wrote:
| > a prompt is somehow a recipe and not a novel expression
| that merits copyright
|
| Is it not? Does typing in 'cat' in SD, as millions of
| people will, count as novel expression?
| ghaff wrote:
| Basically, no one knows. But the IP lawyers I know are
| generally of the opinion that, manufactured possible violations
| notwithstanding, it's probably OK for the most part.
| linuxftw wrote:
| What I love about this AI generative art is it will finally put
| the right price on computer generated art: $0.
|
| It will be interesting to watch the entire Hollywood and
| associated creative industries lose control to AI. Entire movies
| will be created in small basements. Same with AAA video games.
| Vespasian wrote:
| I hope we will see much bigger universes with intricate and
| detailed lore, where humans steer parts of the storylines and
| visuals to make them interesting and fit together, but AI
| fills in the blanks.
|
| Gamedev example: If NPC lines can be generated quickly, maybe
| it's possible to develop open world games that change their
| character throughout the game.
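|
| As a rough, untested sketch of what I mean (the model
| choice, prompts, and helper function are mine, not any
| shipped system), the 2023-era OpenAI chat API makes this
| close to trivial:
|
|   import openai
|
|   def npc_line(lore: str, event: str) -> str:
|       # One short reaction line, conditioned on the lore.
|       resp = openai.ChatCompletion.create(
|           model="gpt-3.5-turbo",
|           messages=[
|               {"role": "system", "content":
|                "You are a background villager. Lore: " + lore},
|               {"role": "user", "content":
|                "React in one short line to: " + event},
|           ],
|           max_tokens=40,
|       )
|       return resp.choices[0].message.content
|
|   print(npc_line("fishing village, wary of the empire",
|                  "soldiers were seen on the coast road"))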
|
| Way too often, addons and expansions are carefully cordoned off
| from the main game because no one wants to redo all that work.
| linuxftw wrote:
| Will any of these meaningfully enhance gameplay? Sure, there
| could be more features, but what is the marginal utility? I
| think people assume more immersive, more expansive is better
| for games, but I'm not sure this is the case.
| Vespasian wrote:
| Good point.
|
| I was thinking of enhancing existing capabilities: letting
| smaller teams develop open world games, or bulking out a
| main storyline with "world chatter".
|
| This would act as a multiplier for writers to better use
| the available budget/time. Sure it may not write a
| brilliant and engaging story (without human editing) but
| given the "lore" of a village and it's
| geographical/political position in the world I can
| definitely see it being useful to "set the tone" of
| otherwise generic background NPCs.
|
| PS: Maybe this is me trying to find use cases for current
| LLMs (with their known capabilities and weaknesses) that
| don't involve dismissing them out of hand or "the
| singularity".
| Nevermark wrote:
| Dungeons & Dragons demonstrated that intelligent open world
| is so attractive people will crank through it with all the
| friction of paper character sheets, rule books,
| encyclopedias of creature stats, dice, dungeon masters prep
| and problem solving...
|
| I would love to collect a group of human and NPC players
| and attempt a heist from an actually intelligent dragon in
| an environment where no action was guardrailed
|
| Encounter creatures and cultures with no documentation but
| what you learn by interacting with them
|
| And with beautiful scenery to boot
| Pxtl wrote:
| I personally find it infuriating that the jobs we're closest to
| automating right now are the ones that we dreamed of doing as
| youths.
|
| Who dreamed of cleaning bathrooms or flipping burgers? Too
| bad; that's still done manually, by humans.
|
| But who dreamed of being an artist or a writer? Great, we've
| figured out how to replace you with a generative algorithm!
| usrusr wrote:
| Not just dreamed of doing as youths, they are also exactly
| the jobs that AI positivists promised we would be doing
| instead when robots took away all the boring "make rent"
| crap.
| bufferoverflow wrote:
| And digital art in general. If you can't tell the difference
| between human made and AI made art, they will cost the same.
| Near $0.
| TylerE wrote:
| Creative people are paid to be creative, not enter keyboard
| shortcuts all day. It's the idea and vision that are the real
| differentiator.
| usrusr wrote:
| Kids at street corners have ideas and vision, no shortage of
| that. What matters is putting together idea and vision with
| execution. And for anything that is beyond basement scope,
| execution further subdivides into craft and access. Access to
| the means, and that is true for Hollywood as much as it is
| true for the smallest-time painter who might not be good at
| making friends with gallerists. The outliers are those that
| learn the craft, network into a position of access and still
| retain _some_ trace of idea and vision through all of that.
| xsmasher wrote:
| Kids at street corners have idea and vision, but may lack
| TASTE - the ability to discern the good from the bad. If
| the button-pushers have great AI but lack taste they will
| still produce a terrible end product.
| TylerE wrote:
| Kids on corners have ideas. Very few of them are GOOD
| ideas. Anyone can say 'Make a movie with lots of aliens and
| lasers'. It still takes Ridley Scott to make _Alien_.
| usrusr wrote:
| Yet at the same time, that "Pinback chased by the beach
| ball monster" scene that eventually evolved into _Alien_
| is hilariously deep in "kids on street corner ideas"
| territory.
| easyThrowaway wrote:
| They won't. They own the hardware, the distribution channels
| and the datacenters.
|
| For comparison, even though making professional music is
| easier than ever (a copy of Ableton Live Lite and a few hours
| of studio recording for the vocals comes to less than $300),
| every single music chart is still dominated by music made by
| corporations (Universal, Sony, Time Warner, mainly).
|
| On the other hand, music is less valuable than ever: from
| $20/unit (the price of a CD) to 0.0004c per stream. Or you
| get lucky and somehow someone buys your music during Bandcamp
| Friday.
|
| Just a subset of musicians are still around because they're
| famous enough to get an audience for their tours and/or DJ
| sets. Visual artists have nothing comparable to sustain
| themselves.
| irrational wrote:
| When is Getty Images going to release their own AI Art Generator?
| user3939382 wrote:
| Adobe has lost any good sentiment I had for them with their
| forced subscriptions and dark patterns. I use their products
| begrudgingly; it's sunk cost fallacy from decades of muscle
| memory.
|
| My reaction to seeing any announcement from them is, yeah
| whatever Adobe.
| aceazzameen wrote:
| Agreed. Adobe has completely lost my trust, and I've learned to
| avoid their tools. I honestly don't know what they can do to
| win me back. They're a damaged brand.
| roflyear wrote:
| Agreed. Greedy company, no longer associated with cool
| creators. I hope people start to move away from their products.
| pcurve wrote:
| Once you get a stranglehold on the marketplace, you can pretty
| much do whatever you want. Adobe... Microsoft... Apple...
| Google... all pulling the same lever.
| roflyear wrote:
| At least with all of those (except Adobe!!) you can cancel
| your subscription using the same interface you bought the
| subscription on!
| danShumway wrote:
| > Once Firefly is out of its beta stage, creators will have the
| option to use content generated in Firefly commercially.
|
| The reason this sentence exists is because Adobe wants to create
| the impression among readers that it owns the output and that
| it's _Adobe's_ choice how creators use those images. But under
| current copyright interpretation, Adobe doesn't own those images.
| So it's nice that it's giving permission, but that's not Adobe's
| permission to give -- so thanks but also heck off Adobe, nobody
| needed to ask you for permission in the first place. You can use
| any AI image commercially because AI images are not under
| copyright.
|
| Of course, Adobe would _love_ to have a world where most art is
| generated algorithmically and Adobe is in charge of deciding how
| that art gets used and what gets generated because it controls
| the tool. So it's in Adobe's best interest to pretend that it's
| granting artists a permissive right, rather than recognizing that
| it doesn't have any real legal argument to make that artwork
| generated through Firefly is owned by Adobe (or by anyone for
| that matter).
|
| And that's good! It's not anti-AI to say that, because what you
| have to realize is that what companies want from AI image
| generation is a model where every single artist goes through them
| in order to build or generate anything. They want a model where
| creative tools are a _service_, for the same reason why Adobe
| wants its tools to all be subscription based. No SaaS company is
| getting into generative AI with the goal of _increasing_
| accessibility of art. They are (Adobe especially) interested in
| closing down that accessibility. They are all drooling at the
| opportunity to turn your workflow into a SaaS business that can
| only be run on extremely expensive hardware clusters.
|
| So yes, the denial of copyright for AI-generated images does make
| it trickier to monetize those images, but denying that copyright
| has the much more important effect of making it harder for these
| companies to lock out competitors and build services where they
| control/monopolize an entire creative market. You can still use
| AI during a creative process and end up with a thing that can be
| copyrighted. But Adobe can't release a tool and later on start to
| argue that nobody else can train competing generators using that
| tool, or that the tool can only be used in a particular way, or
| that everything the tool can generate is owned by Adobe. That
| matters.
|
| It means that competitors can use Adobe Firefly output to train
| their own models (including locally run models like stable
| diffusion). It means that there's a limiting factor in place that
| keeps Adobe from making lazy grabs to assert ownership over large
| numbers of images. It means that you can pull images generated by
| Firefly into other pipelines without asking Adobe permission.
|
| You can see the same thing playing out with ChatGPT. OpenAI's TOS
| states that you're not allowed to use OpenAI to help build
| something that competes with OpenAI. That's going largely
| unquestioned, but my strong suspicion is that breaking that rule
| is only a TOS violation, because again, OpenAI does not own
| the copyright on anything that GPT generates. So if you're not
| signing that EULA, it's not clear to me that OpenAI has any legal
| right at all to restrict you from using output that you find
| online as training data. As far as I can tell, current copyright
| consensus in the US is that the text that comes out of ChatGPT is
| public domain. But that's not what OpenAI wants, because if
| anyone can build anything using ChatGPT's output, then how is
| OpenAI going to build a moat around their service to block
| competitors? How are they going to eventually turn profitable by
| closing off access and raising prices once people start to rely
| on their service? So just like Adobe, they stick the language in
| and hope nobody calls them out on it.
| aschearer wrote:
| While this is very exciting, it's worth pointing out that most of
| the page falls under "WHAT WE'RE EXPLORING." That is to say, these
| are from the marketing department, and it's impossible to know
| whether they are strictly aspirational or just around the corner.
|
| From the page:
|
| > Looking forward, Firefly has the potential to do much, much
| more. Explore the possibilities below ... We plan to build ...
| We're exploring the potential of ... the plan is to do this and
| more ... in the future, we hope to enable.
|
| All that being said, shut-up-and-take-my-money.jpg!
| sebzim4500 wrote:
| They do a highly rehearsed live demo here:
| https://www.youtube.com/watch?v=c3z9jYtPx-4
|
| It's mainly just text to image, but the results are extremely
| impressive IMO. Probably best in class for a lot of use cases.
| mesh wrote:
| Yeah, we are being a little more open, a little earlier, in
| part because this space is so new, and we really want feedback
| / direction from the community.
|
| Currently the beta includes text-to-image and text effects, and
| it will add vector recoloring in the next couple of weeks.
| aschearer wrote:
| Keep up the good work. I raise the point solely because people
| are comparing these trailers against Stable Diffusion and the
| like, when in reality the examples are artists' renditions.
| There's no point in comparing.
|
| I hope someday y'all get to the point where we can change a
| video's season, time of day, etc. and have things work
| seamlessly. That would be quite incredible!
| RcouF1uZ4gsC wrote:
| One thing that disturbs me is the push to censor the AI output.
|
| Photoshop is used to produce a ton of porn, but Adobe doesn't try
| to stop that.
|
| In terms of "safety", models Photoshopped to impossible standards
| have helped create impossible beauty standards causing depression
| and even life threatening eating disorders in teenage girls.
|
| Yet, Adobe and all the others will carefully censor the output of
| these generative models.
| zirgs wrote:
| Photoshop runs on the user's machine. This AI is running on
| Adobe's servers - that's the difference.
| postsantum wrote:
| That would be the job of AI ethicists: to ensure the models it
| generates come from all across the BMI spectrum.
| bestcoder69 wrote:
| Any relation to GigaGAN? Page doesn't seem to mention what kind
| of NNs are used.
| mk_stjames wrote:
| I found the little sketch-to-vector-variations part interesting
| and surprising -- unlike everything else shown, this is something
| that is solidly not done via a diffusion model. Although I note it
| says "We plan to build this into Firefly", implying that... this
| isn't something already finished.
| Bjorkbat wrote:
| I'm intrigued as well, but especially with regards to how it
| would perform in the real world given that I've also observed
| that diffusion models aren't great with vectors.
|
| I suspect the example they used might have been cherry-picked.
| alexwebb2 wrote:
| I imagine this would use "in the style of a line drawing"
| prompts under the hood to produce line-esque raster images
| suitable for vectorization, with the resulting vectorized
| images being what's shown to the user.
| mk_stjames wrote:
| That was also my first instinct, but is vectorization from a
| line sketch really that smooth and reliable now, though? It
| has been some time since I've used modern tools, but last I
| tried any raster-to-vector on line drawings that weren't
| super basic, the results left a lot to be desired. Jittery,
| under- or over-fit, etc.
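|
| For concreteness, the kind of pipeline being discussed --
| binarize a generated line drawing, then trace it to vectors --
| is only a few lines with the potrace CLI. A rough sketch under
| assumed filenames, not Adobe's actual approach:
|
|     # raster_to_vector.py -- threshold a raster line drawing,
|     # then trace it to SVG with potrace.
|     import subprocess
|     from PIL import Image
|
|     # Binarize the (hypothetical) generated drawing; potrace
|     # wants a 1-bit bitmap such as PBM.
|     img = Image.open("line_drawing.png").convert("L")
|     bw = img.point(lambda p: 255 if p > 128 else 0).convert("1")
|     bw.save("line_drawing.pbm")
|
|     # Trace to SVG. --turdsize drops tiny specks, one source of
|     # the "jitter" mentioned above.
|     subprocess.run(
|         ["potrace", "line_drawing.pbm", "--svg",
|          "--turdsize", "5", "-o", "line_drawing.svg"],
|         check=True,
|     )
|
| Whether the resulting curves come out smooth enough for
| production vector work is exactly the open question raised above.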
| oidar wrote:
| The high compute costs for training, plus the foothold of legacy
| software, seem likely to encourage rent-seeking by the big tech
| companies. I hope the huge costs and compute demands won't only
| enrich the big tech companies that can afford to run the models
| on heavy-duty hardware. If they do, they will potentially lead to
| the rich companies and individuals greatly outpacing their
| peers.
| brucethemoose2 wrote:
| These models _can_ be run locally, but I see far too many
| businesses and bloggers resign themselves to renting OpenAI's
| cloud instead of tinkering with LLaMA/Alpaca, the community
| Stable Diffusion finetunes and such.
|
| Adobe does have a legal edge here, which is interesting and
| perhaps actually worth a subscription if needed.
| int_19h wrote:
| The problem is that GPT-3.5 is so cheap, and the results that
| you get out of it are still quite a bit better than
| LLaMA/Alpaca. There just doesn't seem to be any solid
| economic reason to run it locally except to keep inputs
| and/or outputs private.
| brucethemoose2 wrote:
| That is not sustainable though. OpenAI is in the "burning
| money to gain market share" phase.
| LegitShady wrote:
| I wonder how companies are approaching helping their customers
| understand what is and isn't copyrightable in the output of
| these AI models.
|
| The US Copyright Office has made it clear that elements of
| designs that lack human authorship cannot be copyrighted. A
| prompt is not enough -- the same way providing a prompt to
| an artist while commissioning a work does not grant you copyright
| in that work. The results (including specific elements) of these
| AI models cannot be copyrighted, which has extreme implications
| for commercial art in general. If you have your AI model come up
| with a character, you cannot claim copyright of that character. If
| your AI model comes up with a composition, you cannot claim
| copyright on that composition.
|
| If you use the AI to generate an idea and then have a human
| develop the idea, the elements of the design that the AI came up
| with cannot be copyrighted, because they lack human authorship.
|
| Commercial operations that care about IP, beware.
| stravant wrote:
| I don't see how this will possibly be relevant.
|
| AI generated content is about to sprint so far ahead of the
| existing legal framework that something will just have to give.
| LegitShady wrote:
| on the contrary -- existing legal frameworks protect human
| works, and altering them to allow corporations to own any
| idea they can provide a prompt for is directly harmful to
| society and the future of intellectual property in every
| country.
|
| using ml is already powerful and fast. protecting ideas from
| mass corporate ownership, so that your grandchildren will be
| allowed to think freely, is more important than chat gpt or
| stable diffusion or whatever algorithm replaces them next
| week.
| s1k3s wrote:
| Wow, what a waste of time for Adobe to implement something like
| this. Who's even going to use this since all the artists said AI
| is bad and everyone who uses it should be put in prison?
|
| Sorry, couldn't resist.
| codetrotter wrote:
| Imagine paying for Adobe products.
|
| I did that once. Never again. What a shit company. Cancelling the
| subscription was a total drag. Fuck you Adobe, I hope you go
| bankrupt sooner rather than later.
| acomjean wrote:
| Lamentably, adding their training set will make Adobe's value
| proposition much stronger.
|
| Open source creative alternatives have an even harder time
| (Blender, Inkscape, Krita, GNU Image Manipulation Program)...
|
| Adobe's lack of Linux support holds it back significantly with
| creatives, but this makes things even more of a challenge for
| open source.
| CyanBird wrote:
| What they did to Allegorithmic, the Substance suite, and the
| onboarding process to even learn the system is a travesty.
|
| Substance Designer used to have a platform called Substance
| Share, where anyone with a Substance account could share complex
| parametric textures and open-source that knowledge for free.
| Obviously the first draft of how Adobe would recoup the money
| from their purchase was to monetize the entire learning process:
| shut down that website and add paywalls to nearly every single
| interaction needed to even learn the software. It is just so
| horribly shortsighted; these ideas can only be approved and
| implemented by sheer rentier managers.
| nvr219 wrote:
| https://sniep.net/adobe.png
| bitL wrote:
| Generative AI with ClipSeg/ControlNet is the main way to make
| Adobe products obsolete. No wonder they are pushing it into
| production ASAP, but the advanced tools they invested in for an
| edge over the competition, e.g. intelligent background filling,
| can easily be replicated or overtaken now. We might see a quick
| commoditization of Adobe's image processing tools.
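|
| For a sense of how replicable this already is, inpainting (the
| open-source cousin of "intelligent background filling") is a few
| lines with the diffusers library -- a sketch with placeholder
| filenames, not Adobe's implementation:
|
|     # inpaint.py -- repaint a masked region of a photo.
|     import torch
|     from PIL import Image
|     from diffusers import StableDiffusionInpaintPipeline
|
|     pipe = StableDiffusionInpaintPipeline.from_pretrained(
|         "runwayml/stable-diffusion-inpainting",
|         torch_dtype=torch.float16,
|     ).to("cuda")
|
|     image = Image.open("photo.png").convert("RGB")
|     mask = Image.open("mask.png").convert("RGB")  # white = fill
|
|     # Only the masked region is regenerated; the rest is kept.
|     result = pipe(
|         prompt="empty sandy beach, no people",
|         image=image,
|         mask_image=mask,
|     ).images[0]
|     result.save("filled.png")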
| sebzim4500 wrote:
| Yeah but Adobe will presumably outspend
| stability.ai/midjourney/the stable diffusion community 10 to 1,
| plus they have access to better datasets. I think if they play
| this right this could end up building them a moat rather than
| filling it in.
| saberience wrote:
| Yes, but Adobe tools are expensive, and their subscription
| system and website are full of horrible dark patterns.
| There's a reason Figma users were upset and pissed to hear
| Adobe was buying them. Adobe used to be a loved company, but
| that was many, many years ago at this point.
| open-paren wrote:
| Adobe employee, not in Creative Cloud.
|
| We got access to this in beta a week ago and it was an instant
| hit across the whole company.
|
| This is just the tip of the iceberg, and there are a lot of really
| cool in-house products around generative AI. The team is going
| to great lengths to make this ethical and fair (try and generate
| a photo of a copyrighted character like Hello Kitty or Darth
| Vader). I'm excited to see the final product of all the internal
| work that has been going on for so long.
| ftxbro wrote:
| > The team is going to great lengths to make this ethical and
| fair (try and generate a photo of a copyrighted character like
| Hello Kitty or Darth Vader).
|
| imagine doing something as unethical as drawing hello kitty
| worrycue wrote:
| What if you need to generate a picture of Hello Kitty for an
| article someone is writing about the art style of Hello Kitty
| or something? I.e., fair use cases.
| skybrian wrote:
| Copy it? Use a different image generator?
|
| This is just one tool. It doesn't need to be fully general.
| airstrike wrote:
| _> The team is going to great lengths to make this ethical and
| fair (try and generate a photo of a copyrighted character like
| Hello Kitty or Darth Vader)_
|
| are you saying it won't work? if that's the case, that seems
| really silly. actually, it goes against everything I believe in
| (as well as my understanding of even the kindest meaning of the
| word "hacker"). it drives me up the wall, it makes my blood
| boil
|
| who is going to stop me from drawing hello kitty myself?
|
| it's not the tool's job to regulate my creativity. the law
| exists to regulate the use of my art, not the act of creating
| the art. I can draw hello kitty all I want and leave it in my
| drawer, if it floats my boat
|
| limiting the tool just makes me never want to use it. you're
| like Sony fighting digital music in the 2000s. the future is
| right in front of you but you just can't see it.
| [braille dot-art drawing -- presumably Hello Kitty -- garbled in
| the text dump]
| emptybits wrote:
| > "it's not the tool's job to regulate my creativity. the law
| exists to regulate the use of my art, not the act of creating
| the art. I can draw hello kitty all I want and leave it in my
| drawer, if it floats my boat"
|
| This is very well said. Thank you!
| balls187 wrote:
| This runs into the core problem with technology -- we answer
| "What can we do" before "Should we do it" and "What are the
| impacts".
|
| Let's say you take your Hello Kitty dot art and make a
| poster promoting a commercial event. You then take it to
| FedEx Kinkos and use a self-service copy machine to make 1000
| copies. You could reasonably argue that you are committing
| copyright infringement, and the photocopier / FedEx Kinkos
| isn't.
|
| Now instead, you have AI generate a poster, and it generates
| a very similar image to Hello Kitty. It's arguably so similar
| that a reasonable person would say it's a copy. You take that
| poster and again make 1000 copies. Is there copyright
| infringement? If so, who, if anyone, is liable for damages?
| airstrike wrote:
| Whoever put the poster up for display and reaped some
| reward out of it is liable for damages. Everyone else is
| just doing their job in the supply chain. We want supply
| chains to work for the good of the economy, which is a
| proxy for increasing availability and reducing prices of
| "goods and services" to the average person.
| balls187 wrote:
| Imagine a paying Adobe CC customer.
|
| They use Firefly to generate a poster, and unbeknownst to
| them, the image it generated is a reasonable facsimile of
| a copyrighted/trademark character.
|
| The person has inadvertently committed copyright
| infringement.
|
| So does Firefly need to come with a warning?
|
| The safer solution, to the chagrin of another commenter,
| is for Adobe to neuter the tool by only training on data
| that Adobe has express permission to use.
| whatarethembits wrote:
| A simple warning that what's been generated looks similar
| to something that's copyrighted is not a bad idea. Then
| it's up to the AI user to do their due diligence if they
| intend to use the resulting work for commercial purpose.
| Neutering the tool from the get go is a step too far.
| codeyperson wrote:
| People accidentally recreate other companies logos in
| Adobe Illustrator all the time.
| airstrike wrote:
| Surely with all our contemporary AI prowess we can train
| a model that identifies "reasonable facsimiles of
| copyrighted/trademark characters" after generating them
| and alert the user that it could be argued as such.
| Still, let the user decide.
|
| We _do not_ need creative technology to regulate
| observance of copyright law.
|
| (By the way I think the chagrined other commenter was
| yours truly ;-))
| freedomben wrote:
| GP works for Adobe, and Adobe's bread and butter is the
| professional creators who would love a world where there is
| hardware DRM on your eyes and you can't even _see_ their
| creations or a likeness of them without paying a royalty (and
| one to "rent" the memories of the visualization, not to
| "own" the memories like we do now). While I largely agree
| with you, the GP post is exactly what I would expect from an
| Adobe person.
| vdfs wrote:
| I forgot for a second that this is Adobe; the top stories on
| HN about Adobe are almost all negative.
| unreal37 wrote:
| There will be open source tools replicating this within
| months. You can build your own model based on billions of
| images on the web or use someone else's or contribute to one.
| danShumway wrote:
| To expand on this, what we're seeing with LLaMa is that you
| can fine-tune your model _using other models_.
|
| It's not clear that the quality will be exactly the same
| (in fact it will very likely be worse), but working
| generators are essentially ways to quickly generate
| training data. And I can't think of a legal argument for
| why generated output from a model would be _less_ legal to
| use as training data than an unlicensed photo off of
| DeviantArt.
|
| Nobody has really called out OpenAI on this, but OpenAI has
| a clause in its TOS that you won't use output to build a
| competing model. But that's... just in its TOS. If you
| don't have an OpenAI account, it's not immediately clear to
| me (IANAL) why you can't use any of the leaked training
| sets that other people have generated with ChatGPT to help
| align a commercial model.
|
| Certainly if someone makes the argument that generators
| like Copilot/Midjourney aren't violating copyright by
| learning from their sources, it's very hard to make the
| argument that Midjourney/Copilot output is somehow
| different than that and their output can't be used to help
| generate training datasets.
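|
| A minimal sketch of that pattern -- harvesting one model's
| output as training data for another -- with the 2023-era OpenAI
| Python API (prompts and filenames are illustrative, and the TOS
| question above still applies):
|
|     # make_dataset.py -- collect teacher-model responses as
|     # JSONL examples for fine-tuning a student model.
|     import json
|     import openai  # reads OPENAI_API_KEY from the environment
|
|     # A couple of illustrative prompts; real pipelines use
|     # tens of thousands.
|     instructions = [
|         "Explain copyright in one paragraph.",
|         "Summarize what a diffusion model does.",
|     ]
|
|     with open("train.jsonl", "w") as f:
|         for inst in instructions:
|             resp = openai.ChatCompletion.create(
|                 model="gpt-3.5-turbo",
|                 messages=[{"role": "user", "content": inst}],
|             )
|             answer = resp["choices"][0]["message"]["content"]
|             # One supervised example per line.
|             f.write(json.dumps(
|                 {"prompt": inst, "completion": answer}) + "\n")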
| HeavyFeather wrote:
| I hate limitations as much as the next person, but these
| tools are viewed as generators _by company xyz._ You don't
| want Disney to sue Adobe because the tool can circumvent IP
| and abuse it.
|
| "Draw a Disney logo but for adults"
|
| That image now lives on Adobe.com
| airstrike wrote:
| What if I draw the logo with a regular 2B pencil?
|
| I want to see Disney sue Faber-Castell for making great
| pencils I used for my deviant art
|
| Also IANAL, but even then there are probably fair use rights
| in parodying their logo.
| dragonwriter wrote:
| > You don't want Disney to sue Adobe
|
| No, _you_ don't want Disney to sue Adobe.
| dragonwriter wrote:
| So, the above was somewhat flip and terse, but the kind
| of lawsuit being avoided is also the kind of thing that
| provides clarity on legal issues and removes spaces of
| doubt. This can be broadly beneficial.
|
| Giants battling it out can result in a clearer
| environment for everyone else that couldn't afford legal
| risk in an environment of doubt.
| roflyear wrote:
| I know why, but why do you guys make your subscription
| management such an awful experience for users? I used to like
| Adobe, now I hate the company and will go as far as suffering
| massive inconvenience to avoid Adobe products.
|
| Last time I canceled a subscription (can't do it through your
| website, only by talking to support) when I finally got in
| touch with someone it took several hours to actually convince
| them to cancel.
| Filligree wrote:
| One of the great aspects of open-source stable diffusion
| (civitai.com et al.) is there's a model for every purpose.
|
| Does your inpainting model work with _every_ style? Or is it
| going to have trouble matching the content for e.g. specific
| fanart?
| jdc0589 wrote:
| is civitai.com literally just 90% japanese porn?
| mesh wrote:
| It would have trouble matching on trademarked styles, or
| individual artists' / creators' styles.
|
| One of the primary goals for Firefly is to provide a model
| that can generate output that respects creators and is safe
| for commercial use.
|
| (I work for Adobe)
| Filligree wrote:
| So that means it would have trouble matching my style, too.
| mesh wrote:
| Yes. Although we are working on allowing you to train on
| your own content.
| saberience wrote:
| Sounds like a good way of making it useless or otherwise
| 100X less useful than Stable Diffusion.
| klabb3 wrote:
| I understand the intent but the result will clearly sway in
| the direction of protecting big brands, artists, and
| individual styles. There's simply no way that it couldn't.
| At some point in the pipeline, there's a finite-size blocklist
| of copyrighted works that's decided by employees, no?
| krsdcbl wrote:
| I don't really understand the negative comments on this.
| Though a hacker at heart, I'm a designer first and foremost.
|
| And I'm extremely eager to get my hands on AI tools that
| let me extend my capacities based on _my own_ styles and
| context, and that are focused enough in scope to evade
| future legal obstacles when used in production.
|
| Very excited to try this tool!
| iddan wrote:
| Finally a big player is talking about image-to-vector using
| generative AI. This will make the lives of graphic designers so
| much better. There's no reason humans should continue to trace
| images in this day and age.
| nuc1e0n wrote:
| Man, Adobe knows what they're doing. This is the right response
| to image generative AI, to integrate it into workflows.
| [deleted]
| turnsout wrote:
| How long do we think this "big company gates access to an
| impressive AI model" moment will last?
|
| I wonder if generative models will become such a commodity that
| they cease to be a revenue driver or differentiator.
| layer8 wrote:
| There will be moats around training data (Adobe probably has
| huge amounts of high-quality training data that isn't available
| to the public), and around fine-tuning for specific fields. The
| more specialized the application, the less of a commodity it
| will be.
|
| And it remains to be seen how long it will take until there is
| an open-source model on the level of GPT-4. It may be harder
| than many expect, and the commercial offerings may be on yet
| another level by then.
| Someone1234 wrote:
| Exactly, and the moats largely boil down to: "Why is there no
| Open Source replacement for Google Search?"
|
| Big AI's advantage is still being built. Every time a user
| hits "Feedback" they're fine-tuning the proprietary model.
| You can absolutely make an Open Source model with enough
| compute, but if it is only, e.g., "95%" as good relative to the
| paid one at "97%" or "98%", are you going to prioritize it?
| How many of you use Google because it is 2% better than Bing,
| for example?
| turnsout wrote:
| If it's 95% as good and it can be rolled into every product
| at basically zero cost, then yeah, the paid version will
| die.
|
| There are plenty of proprietary technologies that are
| better than say Postgres, or JPEG, or JSON. And some
| businesses need that marginal edge. But if the open source
| option is free, standardized, has great tooling, and is 95%
| as good, that's a real problem for OpenAI and their peers.
| epups wrote:
| It doesn't seem to be able to do anything that Stable
| Diffusion cannot do, and I bet they put a ton more restrictions
| on it too. Other than OpenAI, most of these closed-source AI
| developments are quite underwhelming.
| mesh wrote:
| Not specifically in comparison to Stable Diffusion, but in
| general our approach is to provide a model that is really easy
| to use, can be used commercially, and has deep integration with
| our creative tools.
|
| On this last point, we have really only shared about the model
| and "playground" which is a web based interface to play around
| with Firefly. We are working on initial integrations in the
| tools, and will have more info on that in the coming weeks /
| months (particularly for Adobe Express and Photoshop).
|
| While you can do a LOT with plugins in tools like Photoshop
| (which may be enough for some users), we can do much deeper
| integration into the tool, and integrate it in ways not
| otherwise possible.
|
| (I work for Adobe)
| EZ-Cheeze wrote:
| Did you see how it accurately put the snow on the surfaces
| where it would accumulate? I don't know how to do that with SD,
| but I'll try it later with ControlNet.
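|
| For anyone wanting to try the ControlNet route, here is a
| sketch with the diffusers library, assuming you already have a
| depth map of the scene (e.g. from a monocular depth estimator);
| whether it places snow as cleanly as the demo is an open
| question:
|
|     # snow_controlnet.py -- depth-conditioned generation.
|     import torch
|     from PIL import Image
|     from diffusers import (ControlNetModel,
|                            StableDiffusionControlNetPipeline)
|
|     # The depth map pins down the scene geometry, which helps
|     # generated snow land on plausible surfaces.
|     controlnet = ControlNetModel.from_pretrained(
|         "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
|     )
|     pipe = StableDiffusionControlNetPipeline.from_pretrained(
|         "runwayml/stable-diffusion-v1-5",
|         controlnet=controlnet,
|         torch_dtype=torch.float16,
|     ).to("cuda")
|
|     depth_map = Image.open("scene_depth.png")  # precomputed
|     result = pipe(
|         "the same landscape covered in fresh snow",
|         image=depth_map,
|     ).images[0]
|     result.save("snowy.png")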
| robg wrote:
| Is this all Adobe-developed, or are they relying on partners?
| rcarmo wrote:
| Obviously requires an Adobe account, which hints at it being
| folded into their subscription pricing. I wonder if this is a
| customer retention move given the number of third-party plugins
| to use Dall-E, SD, Midjourney, etc. with Photoshop.
|
| (I also wonder who is powering this or if it's an in-house
| solution -- that should be telling...)
|
| Edit: Why, they've trained this on Adobe stock images. OK, this
| may be very interesting for publications worried about copyright.
| olejorgenb wrote:
| https://news.ycombinator.com/item?id=35247630 points to
| https://news.ycombinator.com/item?id=35089661 from Adobe
| Research. Maybe it's based on that?
| joelfried wrote:
| Their blog about their methodology[1] implies to me it's an in-
| house solution. They also talk at length about maintaining
| provenance, ethics, and transparency. I found it a much more
| informative read than the product announcement.
|
| [1] https://blog.adobe.com/en/publish/2023/03/21/responsible-
| inn...
| sourcecodeplz wrote:
| Actually, when signing up for the waitlist, they don't ask for
| your Adobe account.
| biccboii wrote:
| it's only a matter of time i'm sure. I don't think you can
| use any adobe product today without Adobe Creative Cloud(tm)
| ghaff wrote:
| There are a variety of consumer products (and Adobe Reader)
| that are still non-subscription. There are also subsets of
| Creative Cloud. But, yes, this will presumably be part of
| their subscriptions at some point.
| KineticLensman wrote:
| > I don't think you can use any adobe product today without
| Adobe Creative Cloud
|
| The Elements versions of Photoshop and Premiere are paid
| for with a one-off purchase and are not part of a cloud
| subscription.
|
| I use Creative Cloud photoshop but a while ago purchased a
| separate Premiere Elements license for video editing - this
| was cheaper than extending my CC subscription to include
| Premiere Pro. But I switched to (the awesome) Davinci
| Resolve for video editing when my copy of Premiere Elements
| wasn't able to open video clips from just-released cameras
| and phones.
| UberFly wrote:
| Adobe will need to allow the importing of custom models, or else
| their product will be too limited. That then allows the
| "unofficial" use of copyrighted material for image generation.
| They're definitely starting from a place of advantage leveraging
| their already vast open-license stock portfolios.
| 29athrowaway wrote:
| If you use oh-my-zsh, enable the zsh plugin (after installing
| zsh).
| whywhywhywhy wrote:
| It's a trap. They already have you locked in for photo and video
| work, AI is the chance to escape them.
| inductive_magic wrote:
| With Photopea in the picture, is PS still as prevalent?
| [deleted]
| ben174 wrote:
| Closed beta. Requesting access requires filling out a four page
| form. Product page is a bunch of hand-selected images.
|
| Nothing to see here.
| mesh wrote:
| You can see it in action here:
|
| https://www.youtube.com/watch?v=BkTXyY9cnEs
|
| and there is a live stream later today here:
|
| https://www.youtube.com/watch?v=c3z9jYtPx-4
| vecinu wrote:
| Did you bother to browse the entire page? There are videos
| showing live editing and changing a backdrop. This is a teaser
| of a live product that is actually working remarkably well.
| germinalphrase wrote:
| Can anyone speak to company culture/QoL of working at Adobe?
| dharmab wrote:
| I worked for 5 years at Adobe on an infrastructure team. My
| team and management were amazing and it was a genuinely good
| workplace. But it is a very large company, and many of the usual
| tradeoffs of large companies still apply (e.g. small cog, big
| inscrutable machine). People's experiences varied across teams,
| and I seem to have been on one of the better ones at the company.
| endisneigh wrote:
| Looks pretty polished honestly. I wonder who they're partnering
| with for the GPUs. Does Adobe have their own data centers?
| dharmab wrote:
| Yes, Adobe has their own datacenters as well as very large
| infrastructure on both AWS and Azure.
|
| Source: I was on Adobe's container infrastructure team
| 2017-2021, most of that time as a lead.
| lancesells wrote:
| Adobe is an ad network, stock house, analytics, etc. They
| probably have the resources to do this.
| motoxpro wrote:
| Interesting that the large company AI products (this, Microsoft
| copilot, etc) are so much more compelling than any of the
| startups I've seen.
|
| Makes me think that we haven't seen what true innovation looks
| like in this space. Right now, AI is a feature, not a product.
|
| Edit: Not talking about the models themselves (stability, mid
| journey, GPT-x), talking about what is built on those models.
| ghaff wrote:
| In Adobe's case, it absolutely makes sense it would be a
| feature of Photoshop, Illustrator, and Premiere. They've been
| adding various "smart" masking features and the like over time
| already.
| akira2501 wrote:
| > Interesting that the large company AI products (this,
| Microsoft copilot, etc) are so much more compelling than any of
| the startups I've seen.
|
| This is an indicative signal. It's amazing how easily it is
| ignored.
|
| > Makes me think that we haven't seen what true innovation
| looks like in this space.
|
| It's interesting that a bunch of product add-ons could be
| considered "innovation" in the first place.
| MrScruff wrote:
| Isn't almost everything on this page concept stuff though?
| Seems like the only thing they're shipping is text to image and
| the results they show look pretty underwhelming.
|
| Seems like all these big companies are pushing amazing looking
| concept videos more than anything.
| time_to_smile wrote:
| > AI is a feature, not a product
|
| Part of the reason for this is that very often the best
| applications of AI are really invisible to the user, making
| everything a little bit better, quietly behind the scenes.
|
| The same is true for software in general. The best software
| products automate loads of stuff behind the scenes so from the
| user perspective they just click a button and the thing they
| want to happen, happens.
|
| Perhaps the most successful and important AI product is a great
| example of this: the spam filter. I bet most younger email
| users don't even realize how much value this old AI tool
| provides them unseen.
|
| But the trouble is making effective AI products is not "sexy"
| right now, and if your team ships one it won't get any credit
| in a big company. I've had my PM instantly shoot down plenty of
| interesting applications of AI that wouldn't be very visible to
| the customer or larger product-org.
| Dalewyn wrote:
| I don't fundamentally disagree, but if spam filters are "AI"
| then my bloody toaster is also artificially intelligent.
| time_to_smile wrote:
| You must have a smarter toaster than I do. My toaster is a
| completely deterministic mechanism that knows nothing about
| my past toasting experience, nor makes any inference about
| future toast.
|
| A spam filter:
|
| - Is provided historic information about the problem
|
| - Learns from this information to construct a basic model
| of language
|
| - Is then provided unseen information
|
| - Then makes a probabilistic decision based on a degree of
| uncertainty.
|
| The exact model behind the spam filtering can be extremely
| simple (naive Bayes) but could easily (and probably will
| without you even realizing it) expand to include things
| like GPT.
|
| A spam filter is making decisions under uncertainty with
| new information, based on patterns it learned from previous
| information. If this doesn't fit your definition of "AI",
| then I suspect that if you understood what was happening under
| the hood you wouldn't consider GPT to be AI either.
|
| If your toaster does learn your toast preferences over time
| from your toasting behavior (unlike mine), then I would
| consider that AI as well.
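|
| To make those four steps concrete, a toy naive Bayes spam
| filter really is this small -- a sketch with made-up training
| data, using scikit-learn:
|
|     # spam_filter.py -- the classic "simple AI" example.
|     from sklearn.feature_extraction.text import CountVectorizer
|     from sklearn.naive_bayes import MultinomialNB
|
|     # 1. Historic information about the problem.
|     emails = [
|         "win a free prize now", "cheap pills limited offer",
|         "meeting notes attached", "lunch tomorrow at noon",
|     ]
|     labels = ["spam", "spam", "ham", "ham"]
|
|     # 2. Learn a (very) basic model of the language: word counts.
|     vec = CountVectorizer()
|     clf = MultinomialNB().fit(vec.fit_transform(emails), labels)
|
|     # 3. Unseen information...
|     new = vec.transform(["claim your free prize"])
|
|     # 4. ...and a probabilistic decision under uncertainty.
|     print(clf.predict(new), clf.predict_proba(new))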
| tensor wrote:
| Microsoft is just licensing their technology from OpenAI, which
| is still a small company, if not still a startup.
| motoxpro wrote:
| ChatGPT is not nearly as good of a product as what was demoed
| by Microsoft if you want to make that comparison.
|
| Not talking about LLMs/underlying model. I don't think adobe
| makes their own either. Talking about the interface to it.
| l33tman wrote:
| I would consider both Stable Diffusion and MidJourney to be
| startups, and both are better by far than established
| companies like OpenAI, and there are dozens of LLMs soon
| catching up with GPT-3/4. As you say, it will be a very
| interesting next 6-12 months.
| saiya-jin wrote:
| I played around with MidJourney (till my free account expired),
| and what one can produce out of the blue is mind-boggling. Indie
| devs can, with a bit of effort, generate much of their art this
| way for peanuts (the only problem may be consistency; from what
| I've seen, basically every image looks like it's from a different
| artist, even in the same batch).
| GaggiX wrote:
| The biggest problem is that you can't do much beyond
| conditioning the generation with your prompt and maybe
| image embeddings; if you are an indie dev you would
| probably find Stable Diffusion much more useful.
| andybak wrote:
| Dall-E is still ahead of the competition for some tasks.
| Midjourney has a very polished look but it lacks the depth of
| understanding that Dall-E can manage. I regularly hit prompts
| that I need to jump into Dall-E for.
| sebzim4500 wrote:
| For playing around, sure. For serious work though you
| probably need control-net in some way, otherwise you end up
| with a bunch of images which are great on their own but
| make no sense together.
| Kye wrote:
| Stable Diffusion is a thing you can run on your computer for
| free and train models for. It's from a research university.
| Is this one of those more expansive definitions of startup?
| sebzim4500 wrote:
| Presumably he means stability.ai is a startup.
| motoxpro wrote:
| I wouldn't consider Stable Diffusion, Midjourney, and OpenAI
| products. A little bit too low level; they seem to be more
| platforms that the products are built on. Not to say they
| aren't amazing, just that the productized versions (one level
| up) are being executed really well by the big companies.
| aleksandrm wrote:
| A little bit too late to the market for a company of their size.
| Sirikon wrote:
| Yeah, Adobe, release a competitor of your own clients, they'll
| love it.
| sebzim4500 wrote:
| They'll have to buy it (or something like it), they won't have
| much of a choice. Artists that use this kind of tool will be so
| much more productive than ones that don't, that they simply
| won't be able to find employment otherwise.
| msoad wrote:
| Based on my experience with the podcast product they released
| recently I am excited to see what this would look like. I think
| they can execute on UI for working with generative AI much better
| than others.
| sourcecodeplz wrote:
| Wow is all I can say.
| O__________O wrote:
| Possibly this is not significant, but it appears that within the
| feature set is a text-to-vector image generator that produces
| editable vector art. There's no direct link I was able to find,
| but the feature is listed here:
|
| https://firefly.adobe.com/
|
| Is anyone aware of any similar open source or services that
| handle text-to-vector generative AI?
| olejorgenb wrote:
| Very short demo of the feature(?):
| https://youtu.be/c3z9jYtPx-4?t=180
| themodelplumber wrote:
| > services that handle text-to-vector generative AI
|
| I think I used one...maybe Kittl or Illustroke. Not FOSS
| though. In the FOSS world there are some really brilliant tools
| like potrace at the very least. That one is still built into
| Inkscape, I believe.
| elietoubi wrote:
| If anyone wants to do the same thing on Figma, I built a plugin
| just for that. https://www.magicbrushai.com
| wappieslurkz wrote:
| That's awesome! Thanks.
| DizzyDoo wrote:
| How is this 'Firefly Model' trained and sourced? Will it be on
| the contents of the stock.adobe.com library?
|
| Clicking through the available pages it seems like a lot of
| 'coming soon' talk, so there's not really any detail about any of
| the underlying process.
| m_ke wrote:
| I'm sure it's all of their content plus half of the web. The
| proprietary data they get from Behance, Lightroom, Photoshop
| and Illustrator (and soon figma) has to be a great advantage
| for them though.
| mesh wrote:
| Does not include Behance data or user data.
|
| https://helpx.adobe.com/manage-account/using/machine-
| learnin...
|
| "The insights obtained through content analysis will not be
| used to re-create your content or lead to identifying any
| personal information."
| olejorgenb wrote:
| For what kind of model it is, another poster pointed to
| https://news.ycombinator.com/item?id=35089661 (a GAN) as a
| possibility.
| rcarmo wrote:
| all you've asked is in the FAQ, right at the top.
| timdiggerm wrote:
| > The current Firefly generative AI model is trained on a
| dataset of Adobe Stock, along with openly licensed work and
| public domain content where copyright has expired.
|
| https://www.adobe.com/sensei/generative-ai/firefly.html#faqs
| jonplackett wrote:
| Anyone know what underlying tech this is using?
| dopeboy wrote:
| What this reinforces is that unlike with previous big innovations
| (cloud, iPhone, etc.), incumbents will not rest on their laurels
| with the AI wave. They are aggressively integrating it into their
| products which (1) provides a relatively cheap step function
| upgrade and (2) keeps the barrier high for startups to use AI as
| their wedge.
|
| I attribute the speed at which incumbents are integrating AI into
| their products to a couple things:
|
| * Whereas AI was a hand-wavey marketing term in the past, it's
| now the real deal and provides actual value to the end user.
|
| * The technology and DX of integrating with products from OpenAI,
| SD, etc. are good.
|
| * AI and LLMs are capturing a lot of attention right now (as seen
| easily by how often they pop up on HN these days). It's in the
| zeitgeist, so you get a marketing boost for free.
| adam_arthur wrote:
| Creating AI models has proven to simply be easier than other
| past innovations. Much lower barrier to entry, the knowledge
| seems to be spread pervasively within months of breakthroughs.
|
| People seem to take offense at this idea, but the proof is in
| the pudding. Every week there's a new company with a new model
| coming out. What good did Google's "AI Talent" do for them when
| OpenAI leapfrogged them with only a few hundred people?
|
| It's difficult to achieve high margins when the barrier to entry
| is low. These AI companies are going to be deflationary for
| society rather than high-margin cash cows, as the SaaS wave was.
| ftufek wrote:
| It's easier for large, rich companies with infrastructure and
| datasets. It's very hard for small startups to build useful
| real-world models from scratch, so you see most people
| building on top of SD and APIs, but that limits what you can
| build; for example, it's very hard to build realistic photo
| editing on top of Stable Diffusion.
| unreal37 wrote:
| Someone was able to replicate GPT 3.5 with $500. The
| training of models is getting very cheap.
|
| [1] https://newatlas.com/technology/stanford-alpaca-cheap-
| gpt/
| ftufek wrote:
| I've tried it, sure it's good, but not even close to the
| real thing. But yes it's getting cheaper through better
| hardware, better data and better architectures. Also it
| builds on Facebook's models that were trained for months
| on thousands of A100 GPUs.
| adam_arthur wrote:
| Most of the cutting edge models are coming from companies
| with a few dozen to a few hundred people. Stability AI is
| one example.
|
| Training an AI model, while expensive, is vastly cheaper
| than most large scale products.
|
| This wave will be nothing like the SaaS wave. Hyper
| competitive rather than weakly-competitive/margin
| preserving
| ftufek wrote:
| I wrote it from the perspective of a small startup (<10
| people, bootstrapped or small funding). I think it's far
| cheaper and easier to build a nice competitive mobile
| app/saas than to build a really useful model.
|
| But yes I agree, it will be very competitive with much
| smaller margins.
| sebzim4500 wrote:
| That's not really true though. 4 months on and no one else is
| close to matching the original ChatGPT.
|
| It's too early to say how hard this is, for all we know no
| one but OpenAI will match it before 2024.
| onlyrealcuzzo wrote:
| > * Whereas AI was a hand-wavey marketing term in the past,
| it's now the real deal and provides actual value to the end
| user.
|
| The skeptic in me thinks it's more:
|
| * The market is rewarding companies for doing X (integrating
| AI), so companies are doing X (integrating AI).
|
| Song as old as time.
| hn_throwaway_99 wrote:
| I think you're missing a fundamental reason: adding AI
| functionality into products is simply easier.
|
| That is, these companies are largely _not_ doing the hard part,
| which is creating and training these models in the first place.
| The examples you gave of cloud and iPhone both have huge
| capital barriers to entry, and in the iPhone's case other phone
| companies just didn't have the unique design talent
| combination of Jobs and Ive.
| mesh wrote:
| >That is, these companies are largely not doing the hard
| part, which is creating and training these models in the
| first place.
|
| fyi, for Adobe Firefly, we are training our own models. From the
| FAQ:
|
| "What was the training data for Firefly?
|
| Firefly was trained on Adobe Stock images, openly licensed
| content and public domain content, where copyright has
| expired."
|
| https://firefly.adobe.com/faq
|
| (I work for Adobe)
| egypturnash wrote:
| I am definitely glad to see attention being paid to ethical
| sourcing of training images but I am curious: did the
| people who made all those stock images get paid for their
| work being used for training? Did they check a box that
| explicitly said "Adobe can train an AI on my work"? Or is
| there a little clause lurking in the Adobe Stock agreement
| that says this can be done without even a single purchase
| happening?
| unreal37 wrote:
| Nobody owes creators who have been paid fully for their
| work "extra" compensation just because AI is involved.
| Assuming they have been paid, the work belongs to Adobe.
| ttjjtt wrote:
| Defining how "fully" paid a creator has been is the
| entire point of license agreements. It defines the extent
| of how much the rights have been purchased away from
| them.
|
| It merits investigation whether these creators have been
| "fully" paid, to the extent that they have no claim to any
| future royalties and can have no objection to their work
| being used as training data.
| webnrrd2k wrote:
| I'm not sure it's true that creators are owed nothing
| further... It seems analogous to a musician signing over
| rights for one thing, like recording rights on wax disks,
| records, or whatever. Then along comes radio, after the
| artists signed away a smaller set of rights. The radio
| companies claim that they owe the artists nothing. But is
| that true?
|
| And that's a different question from whether or not they
| _deserve_ extra compensation. Is it moral or ethical to
| use their work to directly undercut them via ai
| 'copying' their work?
| justinclift wrote:
| Heh Heh Heh
|
| Half the problems with music is because of record
| companies magically inventing new ways to try and extract
| more money from each other and their supply chain.
|
| "Oh, your band looked at some hookers they passed on the
| way to the recording studio? Well, they obviously owe
| those hookers a cut of the royalties now for
| inspiration..."
|
| Trying to use AI as an excuse to be paid a 2nd time (for
| previously fully paid works) seems like another attempt
| at rent seeking in a similar manner.
| jacobr1 wrote:
| The prior deal was based on royalties for use. Adobe pays
| you 33 percent of anything they make. It is consignment.
| So if someone licenses a specific photo for $20, you get paid
| $6.60; no money is paid upfront.
|
| So what should Adobe pay you for using the data in
| training? Some fraction of the overall revenue
| they generate from the new product? The license currently
| used for their stock program makes it seem like they don't
| have to pay anything at all, because this use case
| wasn't understood previously. Adobe reserved the rights to
| do it, so legally they can -- but if they want to continue
| getting contributions they will need to figure out some
| kind of updated royalty-sharing agreement.
| krisoft wrote:
| > Assuming they have been paid
|
| That is the question we are asking, yes. Based on the
| reading of the contributor agreement it sounds like Adobe
| doesn't have to pay a cent to the creators to train
| models on their work.
|
| Does that sound fair to you?
| stale2002 wrote:
| And I am sure that you use your computer for work, to
| make money, and yet, based on the reading of the
| purchase agreement, it sounds like [the computer
| buyer / you] doesn't have to pay a cent to the [computer
| maker] for all the money you make using that computer.
|
| Does that sound fair to you?
|
| See how stupid that sounds?
| TheOtherHobbes wrote:
| It sounds stupid because it's a completely different
| thing.
|
| A tool maker does not have a claim on the work made with
| a tool, except by (exceptionally rare) prior agreement.
|
| Creative copyright explicitly _does_ give creators a
| claim on derivative work made using their creative
| output.
|
| That includes patents. If you use a computer protected by
| patents to create new items which specifically ignore
| those patents, see how far that gets you.
|
| I expect you find this inconvenient, but it's how it
| works.
| stale2002 wrote:
| > Creative copyright explicitly does give creators a
| claim on derivative work made using their creative
| output.
|
| No actually, not for this situation. They don't if they
| sold the right to do that, which they did.
|
| > except by (exceptionally rare) prior agreement
|
| Oh ok. So then, if in situation 1 and situation 2 there
| is the same exact prior agreement on the specific topic
| of whether you are allowed to make derivative works, then the
| situations are exactly the same.
|
| Which is the situation.
|
| So yes, the situations are the same, because of the same
| prior agreement.
|
| That's why the situation is stupid. The creator sold away
| the rights to make derivative works. Just like if
| someone sold you a computer.
|
| And then people used the computer, and also used the sold
| rights to make derivative works of the art, because
| both the computer and the right to make derivative works
| were equally sold.
|
| > which specifically ignore those patents
|
| Ok, now imagine someone sells the rights to use the patent
| in any way that they want, and then you come along and
| say "Well, have you considered that if the person hadn't
| sold the patent, this would be illegal?"
|
| That wouldn't make any sense to say.
| unreal37 wrote:
| I don't know if there is a concept in copyright that
| prevents someone from viewing your work.
|
| Like, if you created a lovely piece of art, hung it on
| the outside of your house, and I was walking on the
| sidewalk and viewed it. I would not owe you money and you
| would have no claim of copyright against me.
|
| Copyright covers copying. Not viewing.
|
| So an AI views your art, classifies it, does whatever
| magic it does to turn your art into a matrix of numbers.
| The model doesn't contain a copy of your art.
|
| Of course, a court needs to decide this. But I can't see
| how allowing an AI model to view a picture constitutes
| making an illegal copy.
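|
| For the curious, "turning art into a matrix of numbers" looks
| roughly like this with the open CLIP model (a sketch; the
| internals of any given generator differ, and the filename is a
| placeholder):
|
|     # embed.py -- one artwork in, one vector out.
|     from PIL import Image
|     from transformers import CLIPModel, CLIPProcessor
|
|     model = CLIPModel.from_pretrained(
|         "openai/clip-vit-base-patch32")
|     processor = CLIPProcessor.from_pretrained(
|         "openai/clip-vit-base-patch32")
|
|     img = Image.open("artwork.png")
|     inputs = processor(images=img, return_tensors="pt")
|
|     # A 512-dimensional description of the image's content --
|     # not a copy of its pixels.
|     features = model.get_image_features(**inputs)
|     print(features.shape)  # torch.Size([1, 512])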
| dragonwriter wrote:
| > Of course, a court needs to decide this. But I can't
| see how allowing an AI model to view a picture
| constitutes making an illegal copy.
|
| Memory involves making a copy, and copies anywhere
| _except_ in the human brain are within the scope of
| copyright (but may fall into exceptions like Fair Use.)
| flangola7 wrote:
| ChatGPT trained on my GitHub code and I wasn't paid
| anything at all. Is that preferable?
| ClumsyPilot wrote:
| "Since someone screwed me, they should screw you"? Crab
| bucket mentality?
| jacobr1 wrote:
| It might also be wrong, yes. I have plenty of code
| licensed under very permissive licenses that still
| requires attribution. It is an open question of how much
| the AI system is a "derived" work in a specific,
| technical sense. And it probably will remain hard, since
| the answer is probably on a continuum.
| krisoft wrote:
| > Is that preferable?
|
| I don't see why you are asking this. Which part of my
| comment made you think it is preferable?
| TheOtherHobbes wrote:
| Some very strange responses in this sub-thread.
|
| When the agreement was signed no one was even able to
| imagine their work being used for AI. As far as they knew
| they were signing a standard distribution agreement with
| one particular rights outlet, while reserving all other
| rights for more general use. If anyone had asked about
| automated use in AI it's very likely the answer would
| have been a clear "No."
|
| It's predatory and very possibly unlawful to assume the
| original agreement wording grants that right
| automatically.
|
| The existence of contract wording does not automatically
| imply the validity of that wording. Contracts can always
| be ruled excessive, predatory, and unlawful no matter
| what they say or who signed them.
| krisoft wrote:
| > If anyone had asked about automated use in AI it's very
| likely the answer would have been a clear "No."
|
| Maybe. Maybe not. Very clearly there is a price point
| where it could be worth it for the artist. Like if Adobe
| paid more for the rights than they reckon they will ever
| earn in a lifetime, or something. But clearly everybody
| would have said "no" at the great price point of 0
| dollars.
| irrational wrote:
| If they signed the contract, then yes.
| yoden wrote:
| The creators of these images assigned the rights to
| adobe, including allowing Adobe to develop future
| products using the images. So yes, this is perfectly
| fair.
|
| It's completely different than many (most?) other
| companies, which are training on data they don't have the
| right to re-distribute.
| krisoft wrote:
| > So yes, this is perfectly fair.
|
| I think you are making a jump here. I'm not a lawyer, but
| your first sentence seems to be about why it is legal.
| And then you conclude that that is why it is also fair.
| I'm with you on the first one, but not sure on the
| second.
|
| The creators uploaded their images so Adobe can sell
| licences for them, and they get a share of the licence
| fees. Just a year ago, if you asked almost anyone what
| "using the images to develop new products and services"
| means, they would have told you something like these
| examples: Adobe can use the images in internal mockups if
| they are developing a new iPad app to sell the licences,
| or perhaps a new website where you can order a t-shirt
| print of them.
|
| The real test of fairness, I think, is to imagine what
| would have happened if Adobe had rung the doorbell of any of
| the creators and asked if they could use their images
| to copy their unique style and generate new images.
| Probably most creators would have agreed on a price.
| Maybe a few thousand dollars? Maybe a few million? Do you
| think many would have agreed to do it for zero dollars?
| If not, then how could that be fair?
| krisoft wrote:
| I'm not a lawyer and I don't work for Adobe. :)
|
| The contributor agreement linked from here[1] is this:
| [2]
|
| "You grant us a non-exclusive, worldwide, perpetual,
| fully-paid, and royalty-free license to use, reproduce,
| publicly display, publicly perform, distribute, index,
| translate, and modify the Work for the purposes of
| operating the Website; presenting, distributing,
| marketing, promoting, and licensing the Work to users;
| developing new features and services; archiving the Work;
| and protecting the Work. "
|
| I guess this would fall under the "developing new
| features and services".
|
| What is funny is that "we may compensate you at our
| discretion as described in section 5 (Payment) below". :)
| I like when I may be compensated :)
|
| And in section 5 they say: "We will pay you as described
| in the pricing and payment details at [...] for any sales
| of licenses to Work, less any cancellations, returns, and
| refunds."
|
| So yeah. Sucks for the artists who signed this. They can
| use your work to develop new features and services, and
| they do not have to pay you for that at all, since it is
| not a sale of a license.
|
| 1: https://helpx.adobe.com/stock/contributor/help/submission-gu...
|
| 2: https://wwwimages2.adobe.com/content/dam/cc/en/legal/service...
| pradn wrote:
| The terms seem like legalese for "you pay me money now
| and get to do anything with it". It doesn't seem far-
| fetched for training AI models to be a valid use case.
| This is way better than scraping the whole internet for
| art by artists who have had no commercial arrangement
| with Adobe.
| wwweston wrote:
| I'm starting to think that use of works in a training set
| is a category not covered well by existing copyright law,
| and it may be important to require separate explicit opt-
| in agreement by law (and receipt of some consideration in
| return) in order to be considered legitimate use.
|
| The vast majority of copyrighted works were conceived and
| negotiated under conditions where ML reproduction
| capabilities didn't exist and nobody knew what related
| value they were negotiating away or buying.
| musicale wrote:
| Derivative works or remixes usually require a license.
| Artists could very reasonably argue that AI-generated
| images are derivative works from their own images -
| especially if there is notable similarity or portions
| appear to be copied. They could also point out that their
| images were used for commercial purposes without
| permission and without compensation to generate works
| that compete with their own.
|
| For example, even a short sample used in a song usually
| has to be licensed. Cover versions of songs may qualify
| for a compulsory license with a set royalty payment
| scale.
|
| However some reuse (such as transformative use, parodies,
| or use of snippets for various purposes, especially non-
| commercial purposes) may be considered fair use. AI
| companies could very reasonably argue that use of images
| for training AI models is transformative and qualifies as
| fair use, that no components of the original images are
| reused in AI-generated images, and that AI-generated
| images are no more infringing than human-generated images
| which show influences from other artists.
|
| Absent additional law, I expect the legal system will
| have to sort out whether AI-generated images infringe the
| copyright of their training images, and if so what sort
| of licensing would be appropriate for AI-generated (or
| other software/machine-generated) images based on
| training data from images that are under copyright.
| AJ007 wrote:
| I propose that it is impossible to prove that any content
| created after 2022 did or did not utilize ML/AI during
| the process of its creation (art, code, music, audio,
| text). Thus, anything produced after 2022 should not be
| eligible for copyright protection. Everything pre-2022
| may retain the existing copyright protection but should
| be subject to extra taxes on royalties and fees given the
| exorbitant privilege.
|
| Though this sounds extreme, enforcing the alternative
| would break any last remnant of human privacy. It would
| kill the independent operation of computing as we know it
| and severely cripple AI/ML research when we need it most:
| human alignment.
|
| It is possible that a catastrophic event occurs and halts
| the supply chain of advanced semiconductors in the near
| future, in which case the debate can be postponed
| indefinitely.
| novok wrote:
| This kind of abstract copyright regime of 'I had the
| idea first, and anyone who uses a derivative of my idea
| must pay me money!' is a very slippery slope toward
| anyone who makes art or music of any genre needing to
| pay a royalty to Sony/Disney, because that is where
| these 'flavor copyrights' will end up going. The right
| kind of ambitious, amoral lawyer in a common-law regime
| will leverage an AI royalty law into a generic style
| copyright law, because that is what will be needed to
| write this law properly.
|
| And on top of that, it will become a Spotify, where each
| creator gets a sum total of $0.00000000001 per AI their
| media item was trained on and maybe a few dollars a
| month, while paying a greater tax to Apple-Sony-Disney
| whenever their AI style recognizers charge you a royalty
| bill for whatever bullshit styles they notice in your
| media items.
|
| Copyright should stay in its 'exact duplication' box,
| lest we release an even worse intellectual property
| monster on the world.
| roughly wrote:
| > They can use your work to develop new features and
| services, and they do not have to pay you for that at
| all, since it is not a sale of a license.
|
| And in this case, to develop new features and services
| that specifically undercut your existing business, viz.
| selling stock photos for money. Sucks to the artists,
| indeed.
| [deleted]
| anigbrowl wrote:
| I'm sure that's a no. When you license a stock image you
| license it for any use whatsoever. You don't get to
| complain if it becomes the background to a porn movie or
| an advert for a product or person you despise. Songs can
| be licensed on a case-by-case basis, but images are so
| plentiful as to be a commodity.
| wahnfrieden wrote:
| Simply untrue, legally and socially
| ghaff wrote:
| Not quite. For example, this is one thing Adobe says in
| their FAQ: Images showing models can't be used in a
| manner that the models could perceive as offensive. (For
| example, avoid using images with models on the cover of a
| steamy romance novel or a book about politics or
| religion, etc.)
|
| There are also a few other more Adobe-specific
| restrictions.
| mesh wrote:
| We are working on a compensation model for Stock
| contributors for the training content, and will have more
| details by the time we release.
|
| The training is based on the licensing agreement for
| Adobe contributors for Adobe Stock.
|
| (I work for Adobe)
| roughly wrote:
| I would be very, very interested to see a compensation
| system that took into account the outputs of the trained
| model - as in, weights derived from your work are
| attributable to X% of the output of this system, and
| therefore you are due Y% of the revenue generated by it.
| It sounds like Adobe is taking seriously the question of
| artist compensation, and I'd love to see someone tackle
| the "Hard Problem" of actual attribution in these types
| of systems.
| astrange wrote:
| That is impossible. You might be able to do it if you
| invented a completely different method of image
| generation, but the amount of original image content
| present in a diffusion model is 0% with reasonable
| training precautions, and attributing its weights to any
| particular input is nearly arbitrary.
|
| (Also, it's entirely possible that eg a model could
| generate images resembling your work without "seeing" any
| of your work and only reading a museum website describing
| some of it. Resemblance is in the eye of the beholder.)
| brookst wrote:
| I've looked a few times, but have not seen any research
| on assigning provenance to the weights used in a
| particular inference run. It's a super interesting space
| for a bunch of reasons.
|
| But the naive approach of having a table of how much each
| individual training item influenced every weight in the
| model seems impossibly big. For DALL-E 2's 6.5B
| parameters and 650M training items, that's about 4.2
| quintillion associations. And then you have to figure out
| which weights contributed the most to an output.
|
| I would love to see any research or even just thinking
| that anyone's done on this topic. It seems like it will
| be important in the future, but it also seems like a
| crazy difficult scale problem as models get bigger.
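|
| (Edit: a rough back-of-envelope sketch of why the naive
| table is infeasible, using the figures above; the numbers
| and names are purely illustrative, not anyone's real
| bookkeeping:
|
|     # Naive provenance table: one influence score per
|     # (weight, training item) pair, using the rough
|     # DALL-E 2 figures quoted above.
|     PARAMS = 6.5e9           # model weights
|     TRAIN_ITEMS = 6.5e8      # ~650M training images
|     BYTES_PER_ENTRY = 4      # one float32 score per pair
|
|     pairs = PARAMS * TRAIN_ITEMS
|     storage = pairs * BYTES_PER_ENTRY
|     print(f"{pairs:.1e} pairs")              # ~4.2e+18
|     print(f"{storage / 1e18:.0f} exabytes")  # ~17 exabytes
|
| So even before asking which weights mattered for a given
| output, merely storing the table is on the order of
| exabytes.)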
| bash-j wrote:
| Could you not use tags used to label the image? If your
| image contains more niche tags that match the user input,
| your revenue share will be higher. Depending on how much
| extra people earn for certain tags, it might incentivise
| people to upload more images of what is missing from the
| training data.
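|
| Something like this back-of-envelope sketch (hypothetical
| tag lists and a made-up revenue pool; nothing here
| reflects any real Adobe scheme):
|
|     # Split one generation's revenue across source images
|     # whose tags match the prompt, weighting niche tags
|     # higher via inverse tag frequency.
|     from collections import Counter
|
|     images = {
|         "img_a": {"floor", "sunset", "dramatic lighting"},
|         "img_b": {"floor"},
|         "img_c": {"theater interior", "dramatic lighting"},
|     }
|     prompt_tags = {"sunset", "dramatic lighting"}
|     revenue = 1.00  # dollars for this one generation
|
|     tag_freq = Counter(t for ts in images.values() for t in ts)
|     scores = {
|         img: sum(1 / tag_freq[t] for t in ts & prompt_tags)
|         for img, ts in images.items()
|     }
|     total = sum(scores.values()) or 1
|     print({img: revenue * s / total for img, s in scores.items()})
|     # {'img_a': 0.75, 'img_b': 0.0, 'img_c': 0.25}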
| brookst wrote:
| That's interesting, but I'm not sure it works. I think
| that works out to "for any given prompt, distribute
| credit to every source image that has a keyword that
| appears in the prompt, proportional to how many other
| source images had that same keyword".
|
| If I include the tag "floor", do I get some (tiny)
| percentage of every image that uses "floor" in the
| prompt, even if the bits from my image did not end up
| affecting model weights much at all in training?
|
| Worse, for tags like "dramatic lighting", it's likely
| that the important source images will depend on the other
| words in the prompt; "sunset, dramatic lighting" will
| probably not rely on the same weights or source
| images as "theater interior, dramatic lighting".
|
| And then you get the perverse incentives to tag every
| image with every possible tag :)
|
| I'd love to be convinced otherwise, but I'm not seeing
| prompt-to-tag association working.
| bash-j wrote:
| The tags could be added by a model rather than the user
| submitting the image. Maybe do both and verify the tags
| with a model? Users could get a rating based on how
| reliably they tag their pictures and are trusted to add
| more niche tags at higher ratings. You could even help
| tag other pictures to improve your rating.
| gradys wrote:
| https://arxiv.org/abs/1703.04730
|
| > How can we explain the predictions of a black-box
| model? In this paper, we use influence functions -- a
| classic technique from robust statistics -- to trace a
| model's prediction through the learning algorithm and
| back to its training data, thereby identifying training
| points most responsible for a given prediction.
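|
| A toy sketch of that idea for a model where it is exact
| (ridge regression, quadratic loss, so the Hessian has a
| closed form); the data here is synthetic and purely
| illustrative:
|
|     # Influence functions (Koh & Liang 2017) on ridge
|     # regression: which training point most influenced
|     # the prediction at x_test?
|     import numpy as np
|
|     rng = np.random.default_rng(0)
|     n, d, lam = 50, 3, 0.1
|     X = rng.normal(size=(n, d))
|     y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=n)
|
|     H = X.T @ X / n + lam * np.eye(d)        # exact Hessian
|     theta = np.linalg.solve(H, X.T @ y / n)  # fitted params
|
|     x_test = rng.normal(size=d)   # prediction is x_test @ theta
|     residuals = X @ theta - y     # per-point loss gradient
|                                   # for point i is residuals[i] * X[i]
|
|     # influence of up-weighting point i on the test
|     # prediction: -grad_i^T H^{-1} x_test, vectorized over i
|     infl = -(X * residuals[:, None]) @ np.linalg.solve(H, x_test)
|     print("most influential point:", int(np.argmax(np.abs(infl))))
|
| For large models the paper approximates the H^{-1}v term
| with Hessian-vector products instead of forming H, but the
| accounting idea is the same.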
| brookst wrote:
| Oh thank you! Will go read and digest.
| jamilton wrote:
| I think the naive approach of just dividing revenue
| equally across all contributors could be acceptable, and
| would have lower overhead costs.
| egypturnash wrote:
| Thanks! I am delighted to know that Adobe's got plans on
| that front.
| hn_throwaway_99 wrote:
| Thanks for correcting my bad assumption, appreciated.
| jokethrowaway wrote:
| Sure but are you standing on the shoulders of Stable
| Diffusion or not?
|
| Fine tuning Stable Diffusion with your own images is way
| easier than creating Stable Diffusion in the first place.
|
| If you're creating your own I stand corrected and that's
| some serious investment.
| irrational wrote:
| Where did you read that they were using Stable Diffusion?
| astrange wrote:
| Stable Diffusion 1.x isn't original work either; it uses
| OpenAI CLIP.
|
| But training your own is pretty doable if you have the
| budget and enough image/text pairs. Most people don't
| have the budget, but at least Midjourney and Google have
| their own models.
| leet wrote:
| This is not based on just fine tuning Stable Diffusion.
| samstave wrote:
| Ha, I didn't even need to read the article to assume this!
|
| I instantly thought of how bitchin' their library of images
| must be.
|
| Can you tell us how many images/size of set?
| Vt71fcAqt7 wrote:
| Adobe in particular, however, has been more toward the
| forefront of AI research. I'm pretty sure they aren't just
| using SD here. They might not even be using transformers at
| all. See https://news.ycombinator.com/item?id=35089661
| capableweb wrote:
| They also have the resources to build a huge training set,
| together with people who willingly upload their art and
| photos to them, which they can use to make the training set
| better than publicly available data.
| mesh wrote:
| Just to be really clear what we do and do not train on:
|
| Firefly was trained on Adobe Stock images, openly
| licensed content and public domain content, where
| copyright has expired.
|
| https://firefly.adobe.com/faq
|
| We do not train on user content. More info here:
|
| https://helpx.adobe.com/manage-account/using/machine-learnin...
| lelandfe wrote:
| One step further, they already have a huge training set.
| Stock libraries have the luxury of the hard part already
| being done: labeling. As of today, that's >313M labeled
| images they can use with no fear of legal woes:
| https://stock.adobe.com/search/images?filters%5Bcontent_type...
|
| Stable Diffusion was trained on _billions_ of images, of
| course. But having explored some of LAION-2B, it's clear
| that Adobe Stock has far better source images and labels.
| boplicity wrote:
| They also know most of their business customers already
| have GPUs, and often have high-end GPUs, so they're able to
| tailor solutions to the hardware their customers already
| have. For example, the speech-to-text feature in Adobe
| Premiere runs on local hardware, and is actually pretty
| good.
|
| Hopefully they'll continue to push the potential for
| locally run models.
| joe_the_user wrote:
| _That is, these companies are largely not doing the hard
| part, which is creating and training these models in the
| first place._
|
| No, there is no real "hard part" to current AI. Training
| is simply "the expensive part".
|
| It seems "the bitter lesson" has gone from reflection to
| paradigm[1], and with that as paradigm, the primary barrier
| to entering AI is cash for CPU cycles; other things matter,
| but the recipes are relatively simple and relatively
| available.
|
| [1] http://www.incompleteideas.net/IncIdeas/BitterLesson.html
| zwaps wrote:
| I get your point, but please do read the training logs for
| Meta's OPT, it's some Greek drama I tell ya
| exac wrote:
| I don't think they missed it, their second point is that the
| DX (developer experience) is good.
| usrusr wrote:
| > That is, these companies are largely not doing the hard
| part,
|
| Are they the hard parts though? The short time it took from
| the first waves of public excitement around DALL-E to
| Stable Diffusion being the well-established baseline looks
| more like the class of problems that can be reliably solved
| by shoving enough resources at them. What I consider hard
| problems are those full of non-obvious dead ends, where no
| matter how much you invest, you might just dig yourself in
| deeper.
| brookst wrote:
| The hard part is building a product customers want,
| delivering it at scale, and iterating on product value and
| revenue.
|
| The rest is just technology.
| usrusr wrote:
| And that had been true enough even before the "quantity
| matters!" of ML entered the stage.
| quitit wrote:
| Also missed: All these big tech companies were already
| invested in AI and using it in their products; it just
| happens that the latest batch of AI tools is far more
| impressive than their internal efforts.
| samstave wrote:
| I think you are _both correct_
|
| But this phrase from GP is pretty darn salient:
|
| >>" _They are aggressively integrating it into their products
| which (1) provides a relatively cheap step function upgrade
| and (2) keeps the barrier high for startups to use AI as
| their wedge._ "
| bredren wrote:
| I attribute it more to open source and free plugins into
| existing Adobe products like Photoshop.
|
| People are already using these plugins to do inpainting etc
| with Stable Diffusion. Adobe is trying to provide official
| support simply to keep up.
|
| To me, the most novel thing is the data source being free of
| licensing concerns.
|
| But that, too, will be eroded as more models appear based on
| datasets with straightforward licensing for derived works.
|
| Image stock collections (and prior deals around them) seem more
| valuable now than they did before all this.
| balls187 wrote:
| It's important to note that this is _generative_ AI.
|
| As pretty much everyone on HN is aware, AI is a broad term for
| a variety of technologies.
|
| AI has been in our everyday lives for quite some time, but not
| in a way that generated (no pun intended) such buzz.
|
| Having my iPhone scan emails and pre-populate my calendar
| with invite suggestions is far less newsworthy than the
| ability to generate a script for an Avengers film where all
| the members are as inarticulate as the Incredible Hulk.
|
| If anything, with generative AI being so buzzy, this latest
| round of AI integration is more marketing.
| cmorgan31 wrote:
| I can't think of a company more suited to take advantage of
| the generative AI hype. If Firefly is built into the Adobe
| stack, you'll have a rather elegant composition and refinement
| toolkit to modify anything you dislike about the generative
| output.
| DantesKite wrote:
| > It's in the zeigeist so you get a marketing boost for free.
|
| I never thought about it that way, but now that you mention it,
| it makes so much sense.
| luke_cq wrote:
| I think what we're going to see is that all the small startups
| going for big, broad ideas ("we do AI writing for anything",
| "your one-stop marketing content automation platform", etc) are
| going to flat out lose to the big companies. I predict that the
| startups we'll see survive are the ones that find tight niches
| and solve for prompt engineering and UX around a very specific
| problem.
| wouldbecouldbe wrote:
| Except for ChatGPT, I haven't yet seen super impressive
| implementations. DALL-E, code copilots, text-to-speech,
| etc. are still not good enough to use for more than
| playing around.
|
| However, this landing page looks amazing.
|
| Any other good tips?
| danielbln wrote:
| Midjourney v5 might be ready for prime time, in most cases it
| seems to have solved faces, hands, etc. The difference
| between version 1 from exactly one year ago and v5 now is
| rather striking:
| https://substackcdn.com/image/fetch/w_1456,c_limit,f_webp,q_...
| ShamelessC wrote:
| This is effectively DALL-E 2 (apparently retrained on
| different data) with a native frontend designed by
| experienced designers.
|
| You don't see the value in effectively not needing any
| art/design skills to make aesthetically pleasing
| logos/mockups/memes? Even just for "playing around" - I bet
| there's a large market of people who want to add "fun edits"
| to their existing personal photo library.
|
| Focus less on landing pages and more on the implications a
| technology brings. Then you'll see where things are headed.
| wouldbecouldbe wrote:
| I've tried to use DALL-E for practical purposes such as
| generating icons, art etc. for commercial products but most
| of it is unusable.
| criddell wrote:
| > I attribute the speed at which incumbents are integrating AI
| into their products to a couple things
|
| And also it's something many companies have been working on for
| a big part of the last decade. Kevin Kelly in particular has
| been talking about it for at least the past 7 years. In 2016 he
| released a book titled "The Inevitable: Understanding the 12
| Technological Forces That Will Shape Our Future" and the
| addition of AI to _everything_ is covered in that book.
| wongarsu wrote:
| Another point is that many of the incumbents have seen the
| trend for far longer than the general public, and had time to
| gather in-house talent. For example, this isn't Adobe's
| first foray into generative AI: back in 2016 they announced
| (and quietly dropped after the backlash) Adobe VoCo,
| "Photoshop for voice".
| Pxtl wrote:
| Right? Adobe was first to market with properly integrated AI-
| based photo editing features with stuff like Content Aware
| Fill back in 2015 iirc.
| dopeboy wrote:
| This is a great point. It appears the success of OpenAI has
| validated their approach, specifically around (1) using the
| web as a training set and (2) using transformers.
|
| I imagine a lot of conversations with in-house AI folks are
| around deploying these methods.
| MarcoZavala wrote:
| [dead]
| nvr219 wrote:
| > * Whereas AI was a hand-wavey marketing term in the past,
| it's now the real deal and provides actual value to the end
| user.
|
| Ehhh.... Sometimes. It's still a hand-wavey marketing term
| today. Almost every sales call I'm in either the prospect is
| asking about AI, or more likely the vendor is saying something
| like "We use our AI module to help you [do some function not at
| all related to AI]". Also, even when it's "real" AI (in the
| sense that we're talking about here), it's not always providing
| actual value to the end user. Sometimes it is, and sometimes it
| isn't.
| aardvarkr wrote:
| Like a toothbrush with "AI" that tells you when you've
| brushed enough
| jstummbillig wrote:
| Yes, not everything AI is working out - never has, and never
| will. The same is true in any field. And yes, there will be a
| display of incompetence, delusion and outright fraud. Again,
| in any field, always.
|
| However, with AI in general, we have very decidedly passed
| the point where it Works (image generation probably being
| the most obvious example).
|
| Even if, starting now, the underlying technology did not
| improve in the slightest, then as adoption rises, as it is
| going to with any new technology that provides value,
| anyone who does not adopt is going to be increasingly
| uncompetitive. It quite simply is already too good not to
| be used to challenge what a lot of average humans are paid
| to do in these fields.
| Hard_Space wrote:
| I wish these schemes would stop using Discord. It's just a cheap
| grab at building a community where one might not gather
| naturally, and the generative grunt that goes into public Discord
| prompts would be just the same in a logged-in API such as
| ChatGPT.
| jonifico wrote:
| IMHO, it was only a matter of time before the big names started
| to include AI generation in their software. I'm guessing most of
| the AI design tools being launched every day, which rely mostly
| on consuming a public API, could be easily absorbed?
| daveslash wrote:
| People have been talking about things like ChatGPT doing code,
| but I just realized... something like ChatGPT could be
| _incorporated_ right into your IDE.
|
| Think Clippy, for Code... "_I see you're trying to write a
| recursive function to compute Fibonacci. Would you like help?_"
|
| Or Code Reviews / Static-Code-Analysis: "Hey Visual Studio.
| I've written a RESTful API for my application. What do you
| think of the approach, architecture, and adherence to best
| practices? What can I do to improve it?"
|
| Or instead of scratching your head going _"why doesn't this
| work?"_, just typing that question directly into your IDE...
| rafram wrote:
| That's just Copilot.
| clpm4j wrote:
| This is kind of like Copilot inside of VSCode right now.
| rubyron wrote:
| Clippai(tm)
| Nevermark wrote:
| I was just going to downvote you, but really, this deserves
| some very public shaming! Ugh!!! /h
|
| But it does appear that we are all doomed to spend our
| lives with assigned clippai's
|
| Dystopia. Dystopai?
| johndhi wrote:
| this is very cool.
___________________________________________________________________
(page generated 2023-03-21 23:00 UTC)