[HN Gopher] Adobe will charge "credits" for generative AI
___________________________________________________________________
Adobe will charge "credits" for generative AI
Author : tambourine_man
Score : 53 points
Date : 2023-09-16 21:28 UTC (1 hour ago)
(HTM) web link (helpx.adobe.com)
(TXT) w3m dump (helpx.adobe.com)
| tomschwiha wrote:
| It doesn't read too badly: paid Creative Cloud and Adobe Stock
| users can keep taking generative AI actions, but the generative
| AI features may be slower.
| NikolaNovak wrote:
| Is Adobe's model trained on their own library of licensed
| images, as opposed to scraping the whole internet?
|
| If so, even as a private individual just fooling around, I'll
| start using it from both a legal and an ethical perspective, as
| long as it's reasonably equivalent to other models. And this
| from a person who's been fairly vocal against Adobe's cloud
| subscription model ;-<. I can only imagine that for anybody
| with a commercial need it would be an immediate no-brainer:
| they'll have an established relationship, account, and billing;
| they'll perceive it as integrating into their workflow; and
| it'll just become another part of the pipeline.
| greensoap wrote:
| It is trained on Adobe Stock and supplemented with openly
| licensed and public domain content.
|
| https://blog.adobe.com/en/publish/2023/03/21/responsible-inn...
| codetrotter wrote:
| > ethical perspective
|
| If a photographer licenses a photo to Adobe Stock, they get
| paid every time someone pays to use the photo, right?
|
| But if Adobe trained their AI on photos you had licensed to
| Adobe Stock, do you get compensated at all?
|
| If not, it's not really any different, ethically, from what
| everyone else has been doing.
| kristopolous wrote:
| Btw, there's already an open-source way to do this:
|
| https://github.com/AbdullahAlfaraj/Auto-Photoshop-StableDiff...
| xnx wrote:
| Nice. It looks like there are extensions for Photopea
| (https://github.com/yankooliveira/sd-webui-photopea-embed) and
| Gimp (https://github.com/blueturtleai/gimp-stable-diffusion)
| too.
| asimpleusecase wrote:
| I won't pay for it. I've tried the tool in beta and it's
| nothing to write home about. And if they decide to get cute
| with creative pricing of other existing features, I hope people
| will move on.
| samwillis wrote:
| I'm convinced this will be a short-lived business revenue
| structure: paying per use of generative AI in the cloud.
|
| I'm sure that in the not-too-distant future (a few years at
| most) we will be happily running these on consumer-level
| hardware.
|
| I do wonder whether companies working to develop these types of
| revenue models truly think they're a long-term structure.
| coder543 wrote:
| I would have said the same thing about CAD simulations, but
| then Autodesk decided to move backwards and remove an existing
| feature just so they could charge for per-simulation credits:
| https://hackaday.com/2022/08/12/local-simulation-feature-to-...
|
| Whether Adobe ever decides to let their model run locally or
| lock it forever into the cloud is a choice they will have to
| make. A lot of people trust Adobe products, so it's entirely
| conceivable that some people will always choose to pay for a
| pay-per-use generative solution from Adobe rather than try to
| run competing solutions locally. The question is probably
| whether it generates more revenue than negativity for Adobe. If
| _most_ Adobe users are running their own models locally and
| avoiding the feature, then I think Adobe will be more likely to
| follow suit and move away from the pay-per-use cloud approach.
| flangola7 wrote:
| Long term they may offer both, with the local model consuming
| fewer credits per request. The new GPUs all have secure
| enclave remote attestation architecture, so Adobe would be
| able to offer this without the risk of someone jailbreaking
| the model and running it for free.
| artursapek wrote:
| Isn't this the revenue model for most cloud computing, like EC2
| (compute), S3 (storage), etc.?
| heavyset_go wrote:
| > _I 'm sure that in the not too distant future (a few years at
| most) we will be happily running these on customer level
| hardware._
|
| The models themselves will be hoarded as IP. Doesn't matter if
| they're in the cloud or on devices, they'll be licensed like
| commercial proprietary software with the same restrictions
| commercial software has.
| whiddershins wrote:
| Yeah, but that isn't pay-per-operation.
|
| I mean, I guess my electricity provider gets paid per unit of
| compute.
| heavyset_go wrote:
| You can write software that is pay per use, and you can do
| the same with local models.
| api wrote:
| This will be true as long as they are crazy expensive to
| produce. Eventually the cost of training a base model could
| drop to the range where crowdfunding could do it.
|
| Or alternatively, someone could make a major advance in
| distributed training and we could all contribute cycles in a
| distributed effort like Folding@Home. As it stands, training
| requires far too much bandwidth for synchronization and for
| moving model data around. Some approach to sharding training
| would have to be discovered; it's an open problem area.
|
| Neural networks are very parallelizable and training is
| stochastic, so my intuition is that it should be possible. Even
| if it were less efficient than synchronous training, you could
| make up for that by harnessing 100x the compute from a huge
| crowd.
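|
| As a toy sketch of the idea (illustrative only, not any
| existing system): in a "local SGD" scheme, each worker trains
| on its own shard for many steps and only averaged weights are
| exchanged once per round, which needs far less bandwidth than
| synchronizing every gradient:
|
|     import numpy as np
|
|     rng = np.random.default_rng(0)
|
|     # Toy problem: recover w_true from (X, y). Noise-free, so
|     # SGD can converge to the exact solution.
|     w_true = rng.normal(size=5)
|     X = rng.normal(size=(1000, 5))
|     y = X @ w_true
|
|     def local_sgd(w, Xs, ys, lr=0.05, steps=20):
|         # Each worker refines its own copy of the weights.
|         for _ in range(steps):
|             i = rng.integers(len(Xs))
|             grad = 2 * Xs[i] * (Xs[i] @ w - ys[i])
|             w = w - lr * grad
|         return w
|
|     w = np.zeros(5)
|     shards = np.array_split(np.arange(1000), 4)  # 4 "workers"
|     for _ in range(50):
|         # One sync per round: average the weights, not every
|         # per-step gradient.
|         w = np.mean([local_sgd(w, X[s], y[s]) for s in shards],
|                     axis=0)
|
|     print(np.allclose(w, w_true, atol=1e-2))  # expect: True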
| heavyset_go wrote:
| Platforms and rightsholders know what they're sitting on
| now, and I worry that the datasets required to train
| sufficient models in the future will also be hoarded as IP.
|
| It's one thing to train on Common Crawl in 2023, but what
| about when you have to shell out millions of dollars just
| for access to data sets to train on in the future? Same
| thing with human reinforcement. The customers for both are
| willing to pay much more than a crowdfunding campaign
| would.
|
| Training is expensive now, but data sets could be the expensive
| part in the future.
| krasin wrote:
| Collecting training data is actually a perfect fit for
| crowdsourcing. Images/videos are easier than text, and text is
| easier than high-quality text, but all are doable.
| yieldcrv wrote:
| I agree. These models are going to get smaller and more
| performant, the software that leverages your hardware is going
| to see major improvements, hardware is going to change to
| prioritize processing these models, with more memory attached
| to specific coprocessors, and OSes are going to start shipping
| with these models baked in, with improved default models being
| a core feature of each OS upgrade.
|
| People are taking an extremely limited view with "bigger models
| on better hardware will always be in the cloud" when that
| reality simply won't matter for most use cases.
| YurgenJurgensen wrote:
| Wouldn't this require a complete reversal of course of, like,
| the last fifteen years of computing? Basically everyone wants
| to get out of selling products and into selling services.
| api wrote:
| I run Stable Diffusion XL with ControlNets on a laptop, albeit
| a high-end one. It's pretty decent.
|
| I also run LLMs such as fine-tuned variants of Llama 2, though
| LLMs on commodity hardware are not as "there" yet as image
| generators. Mine is a decent question-and-answer bot and
| summarizer, but it isn't GPT-4 level. I could see another
| iteration approaching that, but I'd probably need more RAM.
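|
| For concreteness, here's a minimal sketch of local SDXL
| generation with Hugging Face's diffusers library (model id,
| prompt, and step count are just examples; fp16 keeps VRAM
| within reach of a high-end laptop GPU):
|
|     import torch
|     from diffusers import StableDiffusionXLPipeline
|
|     # Load the public SDXL base weights in half precision.
|     pipe = StableDiffusionXLPipeline.from_pretrained(
|         "stabilityai/stable-diffusion-xl-base-1.0",
|         torch_dtype=torch.float16,
|     )
|     pipe = pipe.to("cuda")  # or "mps" on Apple Silicon
|
|     image = pipe(
|         "a watercolor painting of a lighthouse at dusk",
|         num_inference_steps=30,
|     ).images[0]
|     image.save("out.png")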
| thorum wrote:
| When we reach the point where average people can easily run
| today's models on their own devices, today's models will no
| longer be SOTA, and there will still be demand to run better
| models in the cloud.
| brucethemoose2 wrote:
| Image generation parameter count is hitting diminishing
| returns, going by what we've seen with SD/SDXL. The tooling
| around them is far more important.
|
| However, Stable Diffusion already _can_ run on mobile devices.
| There is already a good iOS app for it (and the dev is here on
| HN), but the problem seems to be that no one cares. There are
| 700,000 cloud imagegen apps crowding it out, because that's
| what's easier and more profitable to spam across the store and
| the web.
| thorum wrote:
| > Image generation parameter count is hitting diminishing
| returns, going by what we've seen with SD/SDXL.
|
| For image quality, sure - language understanding is still
| an issue. SDXL can generate a beautiful image, but if it
| doesn't show _exactly_ what you asked for in the prompt, on
| the first try, there is still room for improvement. The gap
| between LLMs and image generators in this regard is huge.
| Closi wrote:
| It depends on whether the limit on parameter count is a 'real'
| limit, or just a limit based on what current-technology models
| can effectively use.
|
| Back in the 'Google Daydream' days, Google might have found
| that they didn't get any more image-generation performance
| by raising the parameter count - but that's just because
| the technology at the time couldn't effectively utilise
| more parameters. It's impossible to know what next-gen
| models might be able to use, but I suspect we will find
| ways to allow the models to take advantage of even higher
| parameter counts.
|
| Stable Diffusion can run on mobile devices, but it's painful,
| and image generation takes a fraction of the time via cloud
| services.
| GaggiX wrote:
| We already run these on consumer-level hardware. You need a
| good PC, but still, in the future the pool of people who can do
| so is only going to get bigger.
| Closi wrote:
| I'm unconvinced that local deployment will be the preferred way
| to run generative models anytime soon.
|
| They seem like pretty much the perfect fit for the cloud:
| bursty compute that would result in very low hardware
| utilisation if run locally.
|
| Why would it be better to have a $1,500 GPU that is weak and
| used infrequently, when you could share a big cluster of better
| GPUs between a big group of people and have it more heavily
| utilised?
|
| There is a philosophical argument about owning your own
| hardware, etc. However, I think the economics and performance
| will eventually push this to the cloud for most use cases (most
| people will just get better bang for buck in the cloud).
| baz00 wrote:
| You can run this on an M2 Mac Studio. That'll be consumer-level
| hardware in a few years.
| wayfinder wrote:
| I have an M1, and I remember at first thinking it was very
| impressive that my Mac could run these AI models...
|
| ...until I tried the same on my RTX 4070 and it made my Mac
| look like a joke.
|
| In the 30 seconds my Mac would take for one result, which will
| probably need revising anyway, the RTX would give me 30
| results.
|
| However, the RTX was half the cost of my Mac, so the Mac is not
| a good investment if I just want to generate some images. I'd
| rather pay for the cloud if I didn't already have the RTX.
| Closi wrote:
| Run what? Adobe's offering is cloud-only, and I don't think
| the hardware requirements are disclosed.
| bhouston wrote:
| > I'm sure that in the not-too-distant future (a few years at
| most) we will be happily running these on consumer-level
| hardware.
|
| I doubt that is what Adobe will do. This is a new revenue
| stream for them; why would they remove it?
|
| Gimp will use local generation, but Adobe's model is built on a
| proprietary dataset that they can keep secure in the cloud.
|
| So yeah, this is going to stick around.
| financypants wrote:
| Sure, we will be happily running these on consumer-level
| hardware, but there will always be a stronger version in the
| cloud "worth paying for," won't there?
| giancarlostoro wrote:
| > we will be happily running these on consumer-level hardware.
|
| This is the stage of AI that will impress me most. If I can use
| your AI completely offline on my device on a spaceship orbiting
| Pluto, then I will say we have achieved an AI capacity that is
| impressive, even if it's got the quirks of ChatGPT today.
| brianwawok wrote:
| We can already do that. But why give away the keys to a money
| machine?
| gochi wrote:
| They can upgrade hardware to the bleeding edge faster than
| consumers can get their hands on remotely comparable products.
| Most consumers also don't upgrade every year to stay on the
| bleeding edge; they just tolerate what they can currently do.
| While this will be fine for most, those actually using
| generative AI for work aren't likely to tolerate stagnation as
| easily.
| janehdoch wrote:
| I think that we still need a few years (3-5) just for better
| models, different architectures, and more fine-tuning.
|
| Those models will affect us even more than today's already do
| and change how we perceive AI.
|
| Then we will start to see AI-optimized hardware (much more
| optimized).
|
| And then, perhaps in 10 years, we will all run a lot more
| models locally.
|
| Nonetheless, the normal consumer doesn't run open models and
| will probably not do so for a very long time. Finding models,
| keeping up to date, and running them is still effort, and the
| usage model makes a ton of sense, especially in the time of
| SaaS.
|
| I'm not running Wikipedia locally. And nobody in my social
| circle operates their own infrastructure or servers.
|
| People just want to use it.
|
| Besides that, whatever local or open models will be able to do,
| AIaaS will have faster models, better models, and more
| convenient models.
|
| I'm just waiting to pay for Google Assistant if it becomes
| smart and can manage my email, my calendar, and everything
| else. After all, my Gmail account already has access (through
| email and password reset) to most services I use.
|
| I'm more curious about when we will see AI service integration
| through much more system-to-system communication: machine-
| friendly APIs (which partially already exist anyway).
|
| PS: Look at the current pace of hardware development: not much
| change in memory, etc. Models will not just become 100x smaller
| in a few years. We are right now at the phase of optimizing
| these models to be cost-efficient; this phase alone will take a
| few years.
| amelius wrote:
| > on customer level hardware
|
| Unless nVidia changes their monetization model and, for
| example, introduces an App Store for AI, with subscriptions, on
| locked-down hardware of course.
| code51 wrote:
| Adobe is planning for a post-regulation, post-biggest-lawsuit
| world, for sure. All their steps show that they'll base their
| offering on their own commercial data, paying training license
| fees to stock-image contributors and artists.
|
| Whether the amounts they pay would make licensing your work
| sensible or not, Adobe is surely assuming this will ultimately
| end up as a Napster-to-Spotify transition.
|
| If we end up "happily" (meaning legally as well) running these
| on consumer-level hardware, then the question won't be about
| credits for computation. It'll be about credits to use licensed
| work.
| sebmellen wrote:
| This is the most plausible explanation, from what I can see.
| whiddershins wrote:
| That's a big bet. My instinct is it doesn't go that way. We
| will see.
| GaggiX wrote:
| > It'll be about credits to use licensed work.
|
| If this is true (which I kinda doubt), is it going to matter to
| most people? You can't really tell which images were used to
| train a model from the images it generates (if it's trained
| correctly), so I doubt the majority of people would care; think
| of those who already use MJ, for example. Training models on
| copyrighted data for academic research will be allowed, the
| models will be published, and good luck enforcing the licence.
| And here I'm talking about the worst-case scenario, in which a
| court would find an image generated by AI to be derivative of
| another image in a pool of billions in a dataset (this goes way
| beyond any current definition of derivative work).
| tikkun wrote:
| I suspect many companies will do something like this: prepaid
| credits or tokens for AI features that have high inference
| costs. Inference costs per user are high, at least much higher
| than most traditional software costs. This way, the costs are
| aligned with the usage.
|
| We'll also see occasional subscription products, but only when
| they can be offered in a way that is comfortably gross-margin
| profitable across most of a company's users (e.g. ChatGPT Plus,
| Claude Pro, Midjourney, 365 Copilot).
|
| This will only change when the cost of inference goes down by a
| lot.
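|
| As a sketch of the mechanism (names and numbers are
| hypothetical, not any vendor's actual implementation), a
| prepaid ledger might debit each generative action and reset
| with the calendar month, falling back to a slower tier once
| exhausted:
|
|     from dataclasses import dataclass
|     from datetime import date
|
|     @dataclass
|     class CreditLedger:
|         monthly_allowance: int
|         balance: int
|         period: date = date.today().replace(day=1)
|
|         def _roll_period(self) -> None:
|             # Credits reset at the start of each month.
|             current = date.today().replace(day=1)
|             if current != self.period:
|                 self.period = current
|                 self.balance = self.monthly_allowance
|
|         def charge(self, credits: int) -> bool:
|             self._roll_period()
|             if self.balance < credits:
|                 return False  # caller queues at low priority
|             self.balance -= credits
|             return True
|
|     ledger = CreditLedger(monthly_allowance=500, balance=500)
|     if ledger.charge(1):  # e.g. one generative-fill action
|         print("run inference at priority speed")
|     else:
|         print("out of credits: queue at lower priority")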
| xwdv wrote:
| Hmm, what if they expand this to charge small credit amounts
| for various other tools? Filters, brushes, etc.
| Etheryte wrote:
| Please stop giving Adobe ideas.
| pphysch wrote:
| ERROR: 0.25oz of liquid remains in your Adobe(tm) x Mountain
| Dew(tm) Verification Can(tm). Please finish drinking to
| proceed using the software product. Thank you for your
| compliance.
| Traubenfuchs wrote:
| At this point you could just call them microtransactions.
| Imagine a free but extremely bare-bones version of Photoshop
| where everything but one brush and an eraser is hidden behind
| individual or package deals.
| xu_ituairo wrote:
| "Generative credits provide priority processing of generative AI
| content across features powered by Firefly in the applications
| that you are entitled to. Generative credit counts reset each
| month."
| slowhadoken wrote:
| Create a problem, charge for the solution.
___________________________________________________________________
(page generated 2023-09-16 23:00 UTC)