https://old.reddit.com/r/StableDiffusion/comments/wyduk1/show_rstablediffusion_integrating_sd_in_photoshop/

Show r/StableDiffusion: Integrating SD in Photoshop for human/AI collaboration (v.redd.it)
submitted 26 Aug 2022 (1 day ago) by alpacaAI to r/StableDiffusion (12,831 readers, 1,005 users here now; a community for 1 month)
1,085 points (99% upvoted), 155 comments

[-]Ok_Entrepreneur_5833 94 points 1 day ago
Now that's some next level creative thinking. I'd use this incessantly.
I have a couple of questions though: is this using the GPU of the PC with the Photoshop install, or some kind of connected service to run the SD output? I ask because if it's using the local GPU it would limit images to 512x512 for most people; having Photoshop open while running SD locally is close to 100% of an 8 GB card's memory. I know that even using the half-precision optimized branch, if I open PS I get an out-of-memory error in conda when generating above 512x512 on an 8 GB 2070 Super.

[-]alpacaAI[S] 56 points 1 day ago
> is this using the GPU of the PC with the Photoshop install, or some kind of connected service to run the SD output?
The plugin is talking to a hosted backend running on powerful GPUs that do support large output sizes. Most people don't have a GPU, or don't have one powerful enough to give a good experience of bringing AI into their workflow (you don't want to wait 3 minutes for the output), so a hosted service is definitely needed. For the longer term, however, I would also like to offer using your own GPU if you already have one. I don't want people to pay for a hosted service they might not actually need.

[-]bitemynipple 18 points 1 day ago
> you don't want to wait 3 minutes
That's why I'm waiting 4-5 min for a single image instead

[-]Megneous 2 points 22 hours ago
How is that possible? I'm running a GTX 1060 and it only takes about 1-1.5 minutes to generate a 512x512 image.

[-]Peemore 4 points 20 hours ago
They could be pumping up the number of steps, or maybe using a higher resolution.

[-]Particular-Way-3945 1 point 2 hours ago
I wait 4 seconds, what hardware are you on? LOL

[-]bitemynipple 2 points 1 hour ago
Good for you, Mr. Moneybags

[-]twat--waffle 20 points 1 day ago
I just applied for the beta and threw my credit cards at my monitor. I hope it works. I thought the chain and UI I put together for myself was impressive; this is insane.

[-]MustacheEmperor 8 points 20 hours ago
This could be an incredibly lucrative product in no time. Your total addressable market is almost everyone with a Photoshop license, and they are all used to paying a subscription fee already. The only question is how many of them will be subscribed when Adobe offers to buy you.

[-]Laladelic 5 points 1 day ago
What's your pricing model gonna look like?

[-]alpacaAI[S] 9 points 1 day ago
Not sure yet. I have no interest in trying to make a crazy margin, but GPUs are still pretty expensive resources no matter what.
Probably a similar range of prices to what you would get on Midjourney.

[-]Additional-Cap-7110 1 point 29 minutes ago
I heard the Midjourney beta was using an SD backend. How did that happen?

[-]Ok_Entrepreneur_5833 3 points 1 day ago
Cool, thanks for the answer. I'd subscribe to this if the price made sense for my budget, even though SD is running locally (for free) on my machine, since like I said I'd use it incessantly for iteration. It makes a lot of sense for my own workflow.

[-]override367 3 points 1 day ago
On a 3070, a 15-pass 512x512 only takes about two and a half seconds, and even at 15 passes it would blow content-aware fill out of the water. I just wish there was a way to host this yourself and get the same functionality.

[-]2C104 2 points 1 day ago
Would this work with Photoshop CS 6.5?

[-]animemosquito 1 point 17 hours ago
SD is a lot less intense than other models. I can generate a 512x512 at 50 iterations in only 9 seconds on my RTX 2070 Super from 3 years ago.

[-]iamRCB 1 point 15 hours ago
How do I get this please let me havveee thiiis

[-]i_have_chosen_a_name 1 point 11 hours ago
Could you make a version that can work with Colab Pro+? I only have a crappy 2012 laptop with Win 7, and Colab Pro+ lets me still create, just not in a very user-friendly way. Could I become one of your beta testers?

[-]halr9000 1 point 10 hours ago
50 iterations of SD takes about 13-15 seconds on my 3090 running locally.

[-]Particular-Way-3945 1 point 2 hours ago
I would definitely prefer to use my own GPU. A lot of us who do photo manipulation/design use high-end hardware like 3090s for a multitude of reasons, and this would be another useful application of it. Also, any chance of releasing it for Clip Studio Paint? Lots of graphic designers prefer CSP over PS and that'd be such a useful tool ^^

[-]happysmash27 1 point 4 hours ago
I wonder how hard/slow it would be to run Stable Diffusion on CPU instead? It would take longer for sure, but given how much easier it is to upgrade system memory than VRAM, it could remove the memory bottleneck.

[-]KingdomCrown 86 points 1 day ago
I'm stunned by all the amazing projects coming out and it hasn't even been a week since release. The world in 6 months is going to be a totally different place.
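[The memory and speed discussion above (half-precision branches, out-of-memory errors above 512x512 on 8 GB cards, running on CPU instead) maps onto a few standard knobs. A minimal sketch using the Hugging Face diffusers library purely as an illustration; the checkpoint name, prompt, and settings here are assumptions, not what the plugin's hosted backend actually runs.]

    # Minimal sketch: memory-conscious local Stable Diffusion generation with diffusers.
    # Assumes diffusers and torch are installed; the checkpoint below is an example.
    import torch
    from diffusers import StableDiffusionPipeline

    device = "cuda" if torch.cuda.is_available() else "cpu"  # CPU works too, just much slower

    pipe = StableDiffusionPipeline.from_pretrained(
        "CompVis/stable-diffusion-v1-4",  # example checkpoint (assumption)
        # half precision roughly halves VRAM use on GPU
        torch_dtype=torch.float16 if device == "cuda" else torch.float32,
    ).to(device)
    pipe.enable_attention_slicing()  # trades a little speed for a lower peak-memory footprint

    image = pipe(
        "a farmhouse on a hill, studio ghibli style",
        height=512, width=512,        # larger sizes are where 8 GB cards tend to run out of memory
        num_inference_steps=50,
    ).images[0]
    image.save("out.png")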
[-]blueSGL 29 points 1 day ago
I'm waiting for people to start sharing 'tuned' versions of the weights or individually trained 'tokens'; that's when the real shit starts. As in: [x] was never in the initial training set? No worries, get tuned weights [y] or add-on token [z] and it will now be able to generate [x].

[-]axloc 15 points 1 day ago
> as in, [x] was never in the initial training set. No worries, get tuned weights [y] or add-on token [z] and it will now be able to generate [x]
That is already here with personalized textual inversion. You can train your own "mini model". This popular repo already has it integrated.

[-]blueSGL 7 points 1 day ago
Yep, but for those without a powerful enough GPU to train the mini model, having access to the ones others decide to train would be the goal: an online database of snap-ins for characters/shows/etc. that were never in the initial set.

[-]axloc 2 points 1 day ago
Very true!

[-]Ok_Entrepreneur_5833 18 points 1 day ago
Truly. Since just Monday when this was officially released, something groundbreaking has come through literally every day: img2img, ESRGAN and GFPGAN integration, prompt weighting, this plugin. Makes you wonder what a year out will look like.

[-]camdoodlebop (Prompt sharing samaritan) 8 points 22 hours ago
DreamBooth by Google AI just happened today. It's not a public release, just an as-yet-unreleased repo, but it lets you take multiple photos of a subject and create new contexts with the same subject.

[-]camdoodlebop (Prompt sharing samaritan) 13 points 22 hours ago
2022 feels a lot like 2006 in terms of major technological change.

[-]RedditorAccountName 2 points 8 hours ago
Excuse my ignorance and bad memory, but what happened in 2006? The iPhone?

[-]wrong_assumption 1 point 55 minutes ago
There was no single big change, it was just several technologies coalescing.

[-]Megneous 4 points 21 hours ago
I. Love. Open. Source. The community and the innovation are astounding.

[-]rservello 1 point 23 hours ago
Imagine a year.

[-]enn_nafnlaus 30 points 1 day ago
Would love something like this for GIMP. Quick question: how are you doing the modifier weights, like "Studio Ghibli:3"? I assume the modifiers are just appended after a period, like "A farmhouse on a hill. Studio Ghibli". But how do you do the "3"?
[-]blueSGL 19 points 1 day ago
There was a fork that added that recently; it's been merged into the main script on 4chan's /g/. Anything before the ":" is taken as the prompt and the number immediately after it is the weight. You can stack as many as you like, then the code normalizes all the weights to add up to 1 and it gets processed.

[-]terrible_idea_dude 8 points 1 day ago
I'm always surprised how much of the open source AI community hangs around the chans. First it was EleutherAI and NovelAI, and now I keep seeing Stable Diffusion stuff that eventually leads back to some guys on /g/ or /vg/ trying to get it to generate furry porn.

[-]newoneyaya1555 15 points 1 day ago
"The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man."

[-]zr503 2 points 12 hours ago
1% of any community is on 4chan. For the open source AI community that would be over a million people in a broad sense, and over 100k people in the narrow sense of having published research. There are only maybe ten people there who post the guides or comments with in-depth information.

[-]enn_nafnlaus 3 points 1 day ago
Man, can't wait until my CUDA processor arrives and I can start running fresh releases locally with full access to all the flags! (Assuming it actually works... my motherboard is weird, the CUDA processor needs improvised cooling, shipping to Iceland is always sketchy, etc. etc.)

[-]twat--waffle 2 points 1 day ago
Which card did you order?

[-]enn_nafnlaus 21 points 1 day ago
Nvidia Tesla M40, 24 GB VRAM. As much VRAM as an RTX 3090, and only ~$370 on Amazon right now (though after shipping and customs it'll cost me at least $600... yay Iceland!). They're cheap because they were designed for servers with powerful case fans and have no fan of their own, relying on unidirectional airflow through the server for passive cooling. Since servers are now switching to more modern CUDA processors like the A100, older ones like the M40 are a steal.

My computer actually uses a rackmount server case with six large fans and two small ones, though they're underpowered (it's really just a faint breeze out the back), so I'm upgrading three of the large fans (to start) to much more powerful ones, blocking off unneeded holes with tape, and hoping that will handle the cooling. Fingers crossed!

There's far too little room for the card in the PCI-E x16 slot built into my weird motherboard, so I also bought a riser card with two PCI-E x16 slots on it. But this will mount the card horizontally, so how it will interact with the back of the case (or whether it'll run into something else) is unclear. Hoping I don't have to "modify" the case (or the card!) to make it all fit...
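[A minimal sketch of the weighting scheme blueSGL describes a few comments up: everything before a ":" is a sub-prompt, the number after it is its weight, and all weights are normalized to sum to 1. This illustrates the parsing logic only; the function name and exact syntax handling are assumptions, not the fork's actual code.]

    # Hypothetical illustration of "sub-prompt:weight" parsing with normalization.
    import re

    def parse_weighted_prompt(prompt: str):
        """Parse 'sub-prompt:weight' pairs and normalize weights to sum to 1.

        Example: 'a farmhouse on a hill:1 studio ghibli:3'
              -> [('a farmhouse on a hill', 0.25), ('studio ghibli', 0.75)]
        """
        # Each pair is some text, then ":", then a number.
        pairs = re.findall(r"(.+?):\s*([\d.]+)", prompt)
        if not pairs:  # no ':' at all -> a single prompt with full weight
            return [(prompt.strip(), 1.0)]
        total = sum(float(w) for _, w in pairs)
        return [(text.strip(), float(w) / total) for text, w in pairs]

[In the forks being discussed, the normalized weights are typically used to blend the conditioning for each sub-prompt; the parsing above only covers the syntax side.]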
[-]namrog84 2 points 1 day ago
Do you know which forks?

[-]Sirisian 1 point 18 hours ago
I believe it's this one: https://github.com/lstein/stable-diffusion#weighted-prompts, but for all I know there are multiple now.

[-]No-Intern2507 -2 points 1 day ago
> 4ch /g/
You throw that in casually without any link, haha. Where can I find it? Do you remember? Ah, you meant a fork of SD, not a fork of GIMP...

[-]blueSGL 1 point 1 day ago
Use the catalog. /sdg/ is always linked in the first post.

[-]No-Intern2507 1 point 1 day ago
What catalog, for what? What's linked? I have SD running in a Stable Diffusion GUI already and I'm training my own images. I thought you were saying GIMP already had a working Stable Diffusion plugin, but that's not the case, I can't find it anywhere. Ah, you guys are just chatting about the duck:0.4 elephant:0.6 thing, okay...

[-]MostlyRocketScience 4 points 1 day ago
Afaik GIMP plugins are programmed in Python, so this might be fairly easy to do.

[-]enn_nafnlaus 7 points 1 day ago
I think it would ideally be a plugin that creates a tool, since there are so many parameters you could set and you'd want it docked in your toolbar for easy access to them. The toolbar should have a "Select" convenience button that creates a 512x512 movable selection for you to position. When you click "Generate to New Layer" or "Generate to Current Layer", it would need to flatten everything within the selection into the clipboard, save that to a temp directory for the img2img call, and then load the output of img2img into a new layer. And I THINK that would do the trick: the user should be able to take care of everything else, like how to blend layers together and whatnot.

The layer name or metadata should ideally include all of the parameters (especially the seed) so the plugin could re-run the layer at any point with slightly different parameters (so in addition to the two Generate buttons, you'd need one more, "Load from Current Layer", so you could tweak parameters before clicking "Generate to Current Layer").

As for calling img2img, we could just presume that it's on the path and the temp dir is local. But it'd be much more powerful if command lines could be specified and temp directories could be sftp-format (servername:path), so that you could run SD on a remote server.

One question is what happens if the person resizes the selection from 512x512, or even makes some weird-shaped selection. The lazy and easy answer would be "fail the operation". A more advanced version would make multiple overlapping calls to img2img and give each one its own layer, with everything outside the selection deleted, leaving it up to the user how to blend them together, as always. (I say "512x512", but the user should be able to choose whatever img2img resolution they want to run, with the knowledge that if they make it too large, the operation may fail.)
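[A minimal sketch of the round trip enn_nafnlaus outlines above: flatten the current selection to a temporary PNG, call out to an img2img backend, and bring the result back as a new layer with its parameters stored for later re-runs. The editor-specific calls are passed in as stubs and the "img2img" command line is a hypothetical placeholder; this is the shape of the workflow under those assumptions, not the plugin's actual code.]

    # Sketch of the selection -> img2img -> new-layer workflow described above.
    # export_selection_png / import_png_as_layer stand in for editor-specific API
    # calls (GIMP Python-Fu, a Photoshop bridge, etc.); the "img2img" CLI and its
    # flags are hypothetical placeholders.
    import json
    import subprocess
    import tempfile
    from pathlib import Path

    def run_img2img_on_selection(export_selection_png, import_png_as_layer,
                                 prompt, strength=0.6, seed=42):
        workdir = Path(tempfile.mkdtemp(prefix="sd_plugin_"))
        src = workdir / "selection.png"
        dst = workdir / "result.png"

        # 1. flatten everything inside the selection to a temp file
        export_selection_png(src)

        # 2. call the (hypothetical) img2img backend; this could just as well be
        #    an HTTP request to a hosted service or a hop to a remote GPU box.
        subprocess.run(
            ["img2img", "--init-img", str(src), "--prompt", prompt,
             "--strength", str(strength), "--seed", str(seed), "--out", str(dst)],
            check=True,
        )

        # 3. load the output as a new layer and stash the parameters so the
        #    generation can be re-run later with tweaks ("Load from Current Layer").
        params = {"prompt": prompt, "strength": strength, "seed": seed}
        import_png_as_layer(dst, metadata=json.dumps(params))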
[-]74qwewq5rew3 5 points 1 day ago
Krita would be better.

[-]enn_nafnlaus 1 point 1 day ago
It would not be, because it's not the software I use. You might as well say "Photoshop would be better".

[-]jaywv1981 3 points 22 hours ago
Yeah, if this existed for GIMP I might cancel my Photoshop subscription lol.

[-]zr503 2 points 12 hours ago
There's this: https://kritiksoman.github.io/GIMP-ML-Docs/index.html. I haven't tested it, but it seems to be a general framework that can be extended easily.

[-]namrog84 1 point 1 day ago
I think some of them support limited emphasis on words using "!", where each "!" equates to a +1 value. So you might do something like "A green!! farmhouse on a hill" and it would increase the weight of "green". Though I think I am talking about something subtly different.

[-]hungrydonke 21 points 1 day ago
Wow! Amazing job! I'd love it for Affinity too!

[-]daikatana 20 points 1 day ago
Commercial art is changed forever. If it works this smoothly this early, then think about what this will be in 1 or even 10 years.

[-]SpeakingPegasus 36 points 1 day ago
For anyone interested: https://www.getalpaca.io/ is where you can register for an invite to the beta of this Photoshop plugin.

[-]PUBGM_MightyFine 15 points 1 day ago
[reaction image] Every artist watching the video

[-]Kaarssteun 12 points 1 day ago
This is the magic of open source software. Just five days after the release, we already see amazing implementations like this.

[-]LETS_RETRO_TIME 9 points 1 day ago
I need to use this on GIMP.

[-]Magnesus 1 point 1 day ago
Maybe G'MIC will have something like that one day...

[-]Dachannien 10 points 16 hours ago
You should document this extremely well and extremely publicly, because this is the kind of thing Adobe will make a button for in Photoshop and then try to get all sorts of patents on.

[-]shitboots 7 points 1 day ago
Tempted to post this thread to HN, but I'm sure you'll be making your own post when ready. It's amazing how quickly this is all moving.
Hopefully the Cambrian explosion in this ecosystem within a week of the public weights is a proof of concept to the ML community writ large that this is how foundation models should be released.

[-]axloc 6 points 1 day ago
This is fucking insane. 20 years ago I could never have imagined anything like this in Photoshop. I thought content-aware fill was magic, but this is just next level stuff.

[-]_swnt_ 6 points 1 day ago
This is the future!

[-]Jrowe47 11 points 1 day ago
I think artists are going to be producing some massively creative and wonderful things in the near future. This is awesome!

[-]CyborgJiro 3 points 17 hours ago
They already do, without AI.

[-]DeviMon1 3 points 12 hours ago
Yeah, but this cuts down the time to do anything by multiple orders of magnitude if you use it right.

[-]gerberly 1 point 10 hours ago
Piggybacking on CyborgJiro's comment: people seem to forget that a vast amount of enjoyment for artists comes from applying those brush strokes and being the one in the driver's seat (and this enjoyment can directly transfer to the art; if you browse a concept artist's portfolio and try spotting the best-quality pieces, they usually correlate with how much the artist was enjoying the process at the time). I don't doubt the incredible nature of this tech, but the artistic process seems akin to using the content-aware tool on an entire artwork, i.e. dull as dishwater.

[-]DeviMon1 2 points 10 hours ago
True, but this has the potential to make digital drawing more accessible than ever. Imagine an AI brush that you can tell to draw "trees" in any style you'd like; you could fill in landscape drawings so easily. And they wouldn't just be copy-pasted, every tree would be unique and as detailed as you want. And the thing is, instead of trees it could be anything, in any style; you'll be able to show the AI any piece of artwork and ask it for something similar. Instead of a color picker it's going to be a style picker, or whatever you want to call it. The potential for AI plus digital drawing is massive. I do agree that completely 100% AI-drawn art loses some of that magic, but AI tools that someone with an artistic vision can use on top of their own drawing have so much potential it's crazy.

[-]FrezNelson 6 points 1 day ago
This might sound stupid, but I'm curious how you manage to keep the generated images at the same level of perspective?

[-]alpacaAI[S] 12 points 1 day ago
Do you mean how to keep the perspective coherent from back to front? Actually I thought the perspective here was pretty bad, so I'm happy you think otherwise :D. I had a general idea that I wanted a hill, and a path going around and up that hill, with the dog on the path, etc.
So my prompts followed that: the hill was the first thing I generated, and then I situated the other prompts in relation to the hill (a farm next to a hill, a path leading to a hill, etc.). Then, when generating new images, I cut out the parts that clearly don't fit the perspective I want (in the video I'm only keeping the bottom half of the path, as the top half doesn't fit the perspective). Once you kind of have the contours of the image, you can "link" them with inpainting, e.g. the bottom of the hill and the middle of the path with a blank in the middle, and that suggests to the model that it should come up with something that fits the perspective. I say "suggest" because sometimes you get really bad results; in the video around the 1:49 mark and after, you can see the model struggling to generate a coherent center piece, so you have to retry, erase some things that might mislead the model, or add other things. Better inpainting and figuring out a way to "force" perspective are actually two things I want to improve.

[-]SpaceShipRat 2 points 10 hours ago
I think just making a smaller image and then zooming in to paint details could have helped with the perspective, but I do also enjoy the slightly surreal Escher nature of the finished picture.

[-]babblefish111 3 points 1 day ago
Witchcraft!

[-]vrrtvrrt 4 points 1 day ago
That is off-the-wall good. Do you have plans for other applications the plugin can work within, or just PS?

[-]alpacaAI[S] 15 points 1 day ago
Hopefully more than just PS :) The main bottleneck is time, not anything technical. I am trying to abstract away all the logic related to PS itself, so it should be fairly easy to port this to GIMP/Figma/whatever.

[-]bozezone 6 points 1 day ago
This is WILD. I would love to use this for film and TV concept art and for spicing up my screenplay pitches. Just signed up for the beta.

[-]strppngynglad 4 points 1 day ago
Oh man, when will this be available?

[-]KingdomCrown 3 points 1 day ago
OP, you should post this on an art subreddit like r/digitalart or r/photoshop too!

[-]Trakeen 8 points 1 day ago
r/DigitalArt seems to be against AI art generation (which makes no sense, since integration with Photoshop was an obvious thing that was going to happen, and Photoshop already has the neural filters, which are pretty handy).

[-]camdoodlebop (Prompt sharing samaritan) 7 points 22 hours ago
Give them a year and I'm sure they'll be letting AI art in.

[-]agorathird 4 points 1 day ago
It's not personal to AI-prompted art. Even though it's not the same thing, a lot of other art subs don't allow photobashing either.
Communities are usually bound by what kind of method is used for the final result. Most strictly allow draftsmanship and painting.

[-]sneakpeekbot 1 point 1 day ago
Here's a sneak peek of /r/DigitalArt using the top posts of the year!
#1: My first Blender outcome! What do you think? | 61 comments
#2: @Blackholed | 62 comments
#3: Been perma-banned from r/art for doing "Fan Art". Hope r/DigitalArt will enjoy this "Do Androids Dream of Electric Sheep?", a non-fanart piece by me. | 146 comments

[-]Trakeen 3 points 1 day ago
Is this using the new plugin marketplace thing Adobe released? Is the Adobe API open to everyone?

[-]According_Abalone_68 3 points 1 day ago
Amazing project, congrats.

[-]thotslayr47 3 points 1 day ago
Oh wow, that's amazing.

[-]Consistent-Loquat936 2 points 1 day ago
How???

[-]malcolmrey 2 points 1 day ago
This is simply amazing.

[-]magicaleb 2 points 1 day ago
Where's the final product??

[-]oaoao 2 points 23 hours ago
Bravo. It will be exciting to see the UX improve on these kinds of systems, especially as SD inpainting is released.

[-]rservello 2 points 23 hours ago
Are you retaining the same seed? I'm guessing that's why all the pieces look the same.

[-]alpacaAI[S] 6 points 23 hours ago
Yes, always the same seed to get a coherent vibe. That's a global setting you choose, but I will also add a way to easily change it for a specific generation. Working with the same seed generally makes things much easier, as you said, but sometimes, especially for inpainting, you might get a result that really doesn't fit, and trying to change that with just the prompt while keeping the seed the same is not very effective. It's easier to just change the seed so that the "structure" in the noise leading the model in the wrong direction goes away.

[-]camdoodlebop (Prompt sharing samaritan) 2 points 22 hours ago
Can you post the final image?

[-]rservello 1 point 23 hours ago
Can't wait to try it!!!

[-]DecentFlight2544 2 points 22 hours ago
It's great for adding elements to an image; how does it do at taking elements out?

[-]progfu 2 points 22 hours ago
What inpainting variant do you use?
It seems much better than the inpaint.py available in the SD repo. Also, it'd be very nice if this allowed running locally.

[-]alpacaAI[S] 3 points 18 hours ago
It's my own implementation; inpainting from SD or Hugging Face wasn't available when I made this video, I heard they came out today. I haven't had time to check their implementations, but I suspect we all do the same things based on the RePaint paper. One thing that makes inpainting work well here is that I use a "soft" brush to erase the parts I want to inpaint, which means there is a soft transition between the masked and unmasked parts. If you have a straight line or other hard edges at the boundary, the results will almost always be terrible, because the model will consider that edge a feature of the image and try to make something out of it, like a wall. It should be fairly easy to pre-process the image to remove any hard edges before inpainting; if I have time to do it before someone else does, I'd be happy to contribute that to SD/Diffusers.

[-]jerkosaur 2 points 21 hours ago
Awesome work! I was thinking about making something like this, but your implementation looks fantastic! I was going to pair colour masks with prompts before running updates to reduce iterations as much as possible. Great looking app.

[-]Acrobatic-Animal2432 2 points 16 hours ago
Celery Man won't be too far off now.

[-]Space_art_Rogue 2 points 13 hours ago
That's insane! It reminds me of how people thought digital art was made 15 years ago; so much for trying to educate them. Btw, would this be available in other apps like Clip Studio, Krita, or Affinity Photo?

[-]karlwikman 2 points 8 hours ago
I have never been so excited for a Photoshop plugin. Please, please, please make this available as a straightforward and easy install that doesn't require any Python commands to be run by the user, just an exe to execute.

[-]hauntedhivezzz 2 points 1 day ago
This is exactly where I saw this going in 3-6 months' time; can't believe you've already got something like this working. I just hope the Adobe cease and desist doesn't come after you (I'm sure they are working on this and want to control/monetize it themselves... ya know, for the shareholders /s).

[-]GoodToKnowYouAll 3 points 23 hours ago
Why would they send a cease and desist for this? Patents on content-aware fill?

[-]Felixo22 0 points 20 hours ago
C'mon Adobe, buy this guy!
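[A minimal sketch of the two tricks alpacaAI describes above: feathering a hard-edged inpainting mask so there is a soft transition between masked and unmasked areas, and fixing the seed so repeated generations keep a coherent look. It uses Pillow for the mask and the Hugging Face diffusers inpainting pipeline purely as an illustration; the checkpoint, blur radius, and prompt are assumptions and this is not the plugin's actual implementation.]

    # Sketch: soften a hard-edged mask, then run a fixed-seed inpainting call.
    # Assumes torch, diffusers, and Pillow are installed; checkpoint is an example.
    import torch
    from PIL import Image, ImageFilter
    from diffusers import StableDiffusionInpaintPipeline

    def feather_mask(mask, radius=16):
        """Blur a black/white mask so the inpainted region blends into its surroundings."""
        return mask.convert("L").filter(ImageFilter.GaussianBlur(radius))

    init_image = Image.open("scene.png").convert("RGB").resize((512, 512))
    hard_mask = Image.open("mask.png")            # white = area to repaint
    soft_mask = feather_mask(hard_mask, radius=16)

    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting",    # example inpainting checkpoint (assumption)
        torch_dtype=torch.float16,
    ).to("cuda")

    generator = torch.Generator("cuda").manual_seed(42)  # same seed -> coherent "vibe" across edits
    result = pipe(
        prompt="a dirt path winding up a green hill, studio ghibli style",
        image=init_image,
        mask_image=soft_mask,
        generator=generator,
    ).images[0]
    result.save("inpainted.png")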
[+]newandgood -11 points 1 day ago (comment score below threshold)
Can you explain to me why you would do this? I would assume most artists like to make art and try to be original, not create derivative works from Instagram, anime, DeviantArt, etc. just because it's easy for the algorithm to generate it for you.

[-]alpacaAI[S] 14 points 23 hours ago
Hey, I didn't build this tool thinking artists will stop doing what they do and just generate things instead. I certainly hope that's not the case, and I don't think it will be. I also don't have any expectations about why you would use it or not. I guess if some people find this cool they will use it for their own reasons: maybe they can't draw but still like to create, maybe they are artists who are very good at drawing but want to be able to create a much larger universe than they could realistically manage alone. Or a thousand other reasons. Or maybe no one will want to use it, and that's okay too. One thing to keep in mind: in the video I am using a predefined style from someone else (Studio Ghibli) and the AI is doing 90% of the work. That's not because I think it's the "right" way of using the tool; it's because I personally, sadly, have zero artistic skills.

[-]camdoodlebop (Prompt sharing samaritan) 5 points 22 hours ago
Because it's fun.

[-]zr503 4 points 12 hours ago
Pretty unfair of photographers to just take pictures in a few seconds instead of drawing portraits like we've done for centuries.

[-]newandgood -2 points 12 hours ago
OMG. I hope you are not as dumb in reality as you were in your comment.

[-]zr503 3 points 12 hours ago
no1curr. I'm hot and get paid seven figures.

[-]jjjuniorrr 1 point 8 hours ago
You are ignoring all the work a photographer does before and after a photo gets taken.

[-]zr503 1 point 8 hours ago
I just tap my phone.

[-]jjjuniorrr 1 point 8 hours ago
I'm sure every photo you take like that is comparable to that of a professional photographer.

[-]thewaywrdsun 1 point 1 day ago
Incredible, man. Can't believe how fast you put this together.

[-]gofilterfish 1 point 1 day ago
Absolutely amazing.

[-]Microwave_Ramen 1 point 1 day ago
u/savevideo

[-]SaveVideo (bot) 1 point 1 day ago
View link

[-]adamraudonis 1 point 1 day ago
This is mind-blowing!!!

[-]Zestyclose-Raisin-66 1 point 23 hours ago
Is there a waiting list?
[-]camdoodlebop (Prompt sharing samaritan) 1 point 23 hours ago
This is so cool.

[-]JimMorrisonWeekend 1 point 19 hours ago
The world after the last art teacher has been fired:

[-]AffectionateAd785 1 point 19 hours ago
Sick as shit! You go!

[-]AffectionateAd785 1 point 18 hours ago
That is some serious shit. Move over, graphic designers, because AI just busted through the door.

[-]HeadClot 1 point 16 hours ago
Hey u/alpacaAI, are there any plans for an Affinity Photo plugin?

[-]iamRCB 1 point 15 hours ago
I need this!!!! Omg so cool!!!

[-]FrikkudelSpusjaal 1 point 13 hours ago
Yeah, this is awesome. Signed up for the beta instantly.

[-]junicks 1 point 7 hours ago
May I ask for a link? Such a plugin could be a major game changer for my job.

[-]Business_Formal_7113 1 point 10 hours ago
What style of art is this?

[-]mikiex 1 point 10 hours ago
Looks like an interesting way to edit, but end result = ungodly mess?

[-]Losspost 1 point 9 hours ago
How long did this take you to make? And how long would you have needed to do something like this by hand, in comparison?

[-]zipzapbloop 1 point 9 hours ago
Man, I've been doing this in Photoshop by hand for a while now, and it's a huge pain in the ass. This would be absolutely incredible. Take my money. Would it be possible to use Colab GPUs?

[-]Garmenth 1 point 8 hours ago
That is some straight-up magic.

[-]PhillyGuyLooking 1 point 7 hours ago
Man, I could use this for making the environments in a movie.

[-]tommy46136 1 point 5 hours ago
Nice