[HN Gopher] Stable Diffusion Public Release
___________________________________________________________________
Stable Diffusion Public Release
Author : flimsythoughts
Score : 481 points
Date : 2022-08-22 18:08 UTC (4 hours ago)
(HTM) web link (stability.ai)
(TXT) w3m dump (stability.ai)
| UncleOxidant wrote:
| > You can join our dedicated community for Stable Diffusion here:
| [https://discord.gg/stablediffusion]
|
| Oh, discord... I've had so many problems trying to log into
| discord in the last couple of years that I've given up on it.
| Kiro wrote:
| That's peculiar. I run a couple of big Discord servers and
| never heard anyone saying they have problems logging in.
| UncleOxidant wrote:
| First about 18 months ago they said I had an IP phone and
| could no longer log in (my carrier is Republic Wireless).
| When I contacted support and told them that I had been able
| to login with that number in the past they basically said
| "we're tightening up security, too bad so sad. get another
| phone number" Then recently I found that I was able to log in
| when I wanted to try midjourney (Republic changed something
    | back in April, apparently so that it no longer looks like an
    | IP phone number). Then I wanted to log in to Discord on my
| desktop and it gave me some weird errors which basically
| amounted to "this number is already claimed" (it was, by me)
| so now I'm back to ignoring discord again.
| mabbo wrote:
| The most interesting part, to me, of a release like this is the
| amount of "please don't abuse this technology" pleading. No
| licence will ever stop people from doing things that the licence
| says they can't. There will always be someone who digs into the
| internals and makes a version that does not respect your hopes
| and dreams. It's going to be bad.
|
| As I see it, within a couple years this tech will be so
| widespread and ubiquitous that you can fully expect your asshole
| friends to grab a dozen photos of you from Facebook and then make
| a hyperrealistic pornographic image of you with a gorilla[0].
| Pandora's box is open, and you cannot put the technology back
| once it's out there.
|
| You can't simply pass laws against it because every country has
| its own laws and people in other places will just do whatever it
| is you can't do here.
|
| And it's only going to get better/worse. Video will follow soon
| enough, as the tech improves. Politics will be influenced. You
| can't trust anything you see anymore (if you even could before,
| since Photoshop became readily available).
|
| Why bother asking people not to? I guess if it helps you sleep at
| night that you tried, I guess?
|
| [0]A gorilla if you're lucky, to be honest.
| OmarIsmail wrote:
| We do a decent job of banning child pornography.
|
| And bringing the two ideas together, is child pornography that
| is provably created by an AI still illegal?
| seydor wrote:
| Actually ... we just did a good job of making generated child
| pornography ubiquitous.
| causi wrote:
| As far as I'm aware countries fall broadly into two camps.
| Camp 1, USA for example, is concerned purely with the abuse
| of children, i.e., anything that depicts or is constructed of
| pieces of real children is illegal but other things such as
    | drawings, stories, adults role-playing, etc. are not. Camp 2
| outlaws any representation of it whether or not a child was
| involved.
|
| Nowhere will a training set featuring pictures of naked
| children be legal.
| 16890c wrote:
| >Nowhere will a training set featuring pictures of naked
| children be legal.
|
| True, but generalizing beyond the training set is precisely
| the point of machine learning. A good generative model will
| be able to produce such images, no matter how heinous the
| content is.
| OmarIsmail wrote:
| > Nowhere will a training set featuring pictures of naked
| children be legal.
|
      | Apropos of the recent news stories: it's easy to
      | imagine at least portions of such pictures being
| available for medical diagnostic purposes. I've sent
| pictures of my children to my doctor, so presumably in the
| future it's easy to imagine sending pictures to an AI to
| diagnose which would require a suitably fleshed out (pardon
| the pun) training set.
| mdcds wrote:
| > you can fully expect your asshole friends to grab a dozen
| photos of you from Facebook and then make a hyperrealistic
| pornographic image of you with a gorilla
|
| my prediction is that, as a result, people will start assuming
| pics online are fake until proven otherwise.
| lern_too_spel wrote:
| > my prediction is that, as a result, people will start
| assuming pics online are fake until proven otherwise.
|
| "That worked well for quotations." -- Abraham Lincoln
| fpgaminer wrote:
| If I recall from the interview with Stability.ai's founder, he
| has more or less the same opinion, and that humans will adapt
| to the new technology as we always have. I figure "please don't
| abuse this technology" warning stickers are more CYA. It'll
| make the vast majority of judges look at a motion to dismiss
| and not blink an eye.
| gitfan86 wrote:
| Historically, he is correct. It is easy to find people that
| were against TVs, Cars, Trains, Electric Cars. Those people
| were not entirely wrong with their logic. Trains and cars did
| make it much easier for scammers to come into a town and then
| leave quickly.
| theptip wrote:
| In terms of the equilibrium, this is certainly a true
| observation. However, historically speaking new technology
| can be extremely disruptive in the short-term as society
| figures out the new norms, and the power-structures are
| disrupted and then re-equilibrate.
|
| Concretely, it's probably true that children born with this
| technology will have adapted to many of the negative (and
| positive) aspects of it. But the current generation of
| elites, politicians, and voters might have a harder time
| adapting.
| patientplatypus wrote:
| oifjsidjf wrote:
| This.
|
| They just have to cover their asses, any sane dev would make
| the same license due to the power of this tech.
|
| On some level I can't stop laughing since OpenAI really got
| smoked. "OpenAI" my ass, this is what open TRULY means!
|
| Cheering for these devs.
| aortega wrote:
| >make a hyperrealistic pornographic image of you with a
| gorilla[0]
|
| I don't understand this irrational fear. This can be done
  | today; it just takes minutes instead of seconds to create
  | a good Photoshop.
|
  | Also, is this seriously the thing you fear? Fake porn? There
  | are much worse things you can do with this tech, like
  | phishing, falsification, etc. Not to mention leaving millions
  | of graphic designers out of a job.
| peoplefromibiza wrote:
| Unfortunately _Anything that can go wrong will go wrong_
|
    | Photoshop is a skill, and not as widespread as we assume.
|
    | Typing something is _literally_ at everyone's fingertips.
| cerol wrote:
| That's what I was thinking. Whether a pornographic picture of
| me and a gorilla was made by photoshop or AI is irrelevant.
| People's reactions will be the same, and repercussions will
    | be mostly the same (which doesn't mean there will be
| consequences).
|
| If someone really wants to hurt you, not having AI isn't
| going to stop them.
| aortega wrote:
| The real effect will be that you can publish a real picture
| of you fucking a gorilla and nobody would believe it
| because it's trivial to generate it with an AI.
| naillo wrote:
| I much prefer a company that asks me to not abuse it but lets
  | me, than a company that treats me like a child and force-filters
| it out for me.
| skybrian wrote:
  | It seems inevitable in the way it's "inevitable" that someone
  | Photoshops your face onto porn. Yes, of course it will happen,
  | but maybe not to most people? I'd guess it's inevitable for
  | many celebrities.
| croes wrote:
| It will be more now. Photoshop needs some skills. With AI it
| gets easier and easier.
| eastbound wrote:
| But even today, we deal correctly with it. Fakes and real
| photos are mingled together in 9Gag/LatestNews reports
| about Ukraine. Under the fakes (and the real), people ask
| for confirmation. Someone says it's true, no-one believes
| him, until a link to a newspaper is dropped. And 9Gag isn't
| the highest IQ community around, so yes, general population
      | does distrust photos by default until proven otherwise.
|
| They are laughed at anyway if they tell a story coming from
| a forged photo.
|
| Sure, newspapers could forge stories, display pictures
| with, I don't know, Biden's son with a crackpipe, and make
| the populace believe untrue stories. But guess what, they
| already do it anyway, newspapers already "spin" (as they
| say, i.e. forge, suggest without literally saying) stories
| all the time.
|
| The world deals correctly with fakes.
| croes wrote:
| I have a quite different perception of 9gag. Yes, some
| ask for confirmation but it depends very much on the
| topic.
|
| Wrong topic and facts get downvoted and the fake news
| prevail.
|
| And not all links to newspaper are considered valid,
| especially if it's about "woke culture". Then you have to
| search the reasonable needle in the haystack of
| transphobia, homophobia and misogyny.
| gabereiser wrote:
| The problem comes from the early adult newsroom interns
| responsible for sourcing content. They don't know it's
| fake, it sounds like a good click-baity article to them,
| so they run it. It happens.
| eastbound wrote:
| I wouldn't shift responsibility on the shoulders of the
| last newcomer. The top of the management has had ample
| time to diagnose this. If it remains like this, it's by
| design.
| wwwtyro wrote:
| > Politics will be influenced. You can't trust anything you see
| anymore
|
| I've been wondering for a while now if this will lead to an
| unexpected boon: perhaps people will be forced to pay attention
| to a speaker's content instead of simply who is speaking.
| dbingham wrote:
| The problem with this is that you will never know who is
| actually speaking. Deep fakes are already a thing, but as
| they get better and more accessible we will approach a world
| where anyone can make anyone say anything and make it hyper
| believable. In that world, it will be very difficult to tell
| what is real.
| bluejellybean wrote:
| My personal hunch is that this will end up leading to a
| situation in which presenters do a cryptographic handshake
| that works to verify and prove authenticity. This isn't a
| new idea, and it has some very obvious drawbacks, but I
| don't see much of a way around the issue. The handshake
| could work great for something like official news releases,
| but for other instances that might come up in court, say,
| dash cam footage of an accident, it seems to me that the
| legal system is going to face some serious issues as these
| programs progress.
| nelsondev wrote:
| Looking forward to a future where all a politician's
| quotes are on a blockchain, signed by their private key,
| and they chose to do so voluntarily out of fear of deep
| fakes.
|
| Will remove all the useless "I didn't say that"
| dinosaurdynasty wrote:
| Do politicians understand blockchains?
| kelseyfrog wrote:
| Neither do we[1], so it's sort of a mixed bag.
|
| 1. True for an overwhelming majority of the body politic.
| nelsondev wrote:
| Rename it as "verified speech transcripts." I don't need
| to understand video codecs to watch Youtube.
| marak830 wrote:
| It will end up back where we started, "it's not on the
| Blockchain so I didn't say it", while making racist
| remarks with friends.
| swader999 wrote:
| That's a start! I think their video appearances should be
| like a car in NASCAR with permanently displayed logos
| superimposed from all the interests that have funded
| their rise.
| [deleted]
| edouard-harris wrote:
| Unfortunately a speaker's content can also be auto-generated
| now, at least for brief enough snippets. And that means the
| content can (and will) be optimized to appeal to a target
| segment much more than has ever been previously possible.
| narrator wrote:
| As someone who witnessed AI Dungeon's GPT-3 model with an
| unlimited uninhibited imagination for erotica I would download
| everything now before they cripple the models. I would not be
| surprised if they very shortly completely stop downloads due to
| "abuse" and pursue a SAAS model.
|
| I think it's funny how Yandex, a Russian company, releases
| these big language models without all the AI safety
| handwringing in their press releases. The Russians have a
| tradition of releasing technology without giving a lot of worry
| about what happens to it, for better or for worse. For example,
  | they made between 75 and 100 million AK-47s, a machine gun not
  | restricted in any way, and it spread to every corner of the
| world. They even gave out all the plans and technical
| assistance so any one of the organizations they worked with
  | could produce their own. Twenty different countries currently
  | make AK-47s. Of course, you had to register every Xerox
  | machine in the Soviet Union, so maybe they just had different
  | priorities?
|
| The west is absolutely fascinated these days with the control
| of advanced technology. Drones, Blockchain, and AI models seem
| to be the latest things that the west is determined to exercise
| control over. For example:
|
| "Many of the technological advances we currently see are not
| properly accounted for in the current regulatory framework and
| might even disrupt the social contract that governments have
| established with their citizens. Agile governance means that
| regulators must find ways to adapt continuously to a new, fast-
| changing environment by reinventing themselves to understand
| better what they are regulating. To do so, governments and
| regulatory agencies need to closely collaborate with business
| and civil society to shape the necessary global, regional and
| industrial transformations." -Klaus Schwab, "The Fourth
| Industrial Revolution", Page 70.
| RandomLensman wrote:
| It's not about control of advanced technologies but rather
| making such technologies look more powerful by pointing out
| real or imagined destructive capabilities. If you want
    | funding or to "big up" a topic, claim it has the power to
    | bring down the world (anyone remember gray goo?).
| seydor wrote:
  | I think that's a hopeful message.
|
| At first we will see a lot of "prompt censoring mobs" that will
| try to "stop abusers because children and terrorists", but as
| the images multiply, and they will multiply, the line between
| real and fake photos will become hard to spot, for real. This
| is i think a pivotal and great moment, because everyone can now
| claim plausible deniability to any picture. No revenge porn
| will be believable anymore, nor will anyone know if that
| Bezos's weiner pic is real or not.
| gjsman-1000 wrote:
| I had a thought for a utopian/dystopian world.
|
| One day these image generators will also get video support...
| and pornography support. When that happens, a few things may
| occur that I think are reasonable to predict:
|
| EDIT: Original post was way too wordy, TL;DR:
|
| When AI-generated pornography becomes available, it could be
| likely that demand for "real" pornography disappears because
| the AI will match and surpass the "real." When that occurs, the
| "real" will become increasingly regulated and legally risky,
  | and may effectively end up banned outright.
| suby wrote:
| There will be a huge and unkillable market for non-AI
| generated pornography, even if people cannot tell the
| difference in an AB test. The demand will be too strong and I
| don't think there will be much outcry to ban it if it's all
| consenting adults.
| scarmig wrote:
| If people can't tell the difference in an AB test, how will
      | the real porn outcompete the generated stuff? Porn
| distributors aren't known for their truth in advertising or
| care in sourcing of material. And even if they were, how
| would the porn distributors be able to source the real
| stuff, when anyone can create porn of anyone they imagine?
      | You might say PKI will save us, but people aren't going to
| be typing out `gpg --verify` when their hands are otherwise
| occupied.
| tastemykungfu wrote:
| Curious as to how you arrived at that conclusion.
| derac wrote:
| A quick Google shows an estimate that it was a 100
| billion dollar market in 2017. Seems there is a lot of
| demand.
| thfuran wrote:
| But what portion of that demand is specifically for
          | artisanally handcrafted authentic sex rather than for
| apparent sex acts?
| croes wrote:
| Perhaps we should have started with artificial wisdom instead
| of artificial intelligence
| matheusmoreira wrote:
| > you can fully expect your asshole friends to grab a dozen
| photos of you from Facebook and then make a hyperrealistic
| pornographic image of you with a gorilla
|
| ... Someone is gonna do this to children. This technology is
| gonna end up on the news. Maybe they'll even try to ban it.
| indymike wrote:
| Corollary: If you did not make it, someone else would have
| anyway.
| hahajk wrote:
| Like you mentioned near the end, all this has been possible
| with photoshop, with amateur level skill. Hollywood can CGI the
| entire Captain Marvel movie, so as far as state-level efforts
| go, AI can really only be an incremental improvement at best.
|
| I think this is all just trendy popular sentiment moralizing
  | about AI.
| cypress66 wrote:
| > Why bother asking people not to? I guess if it helps you
| sleep at night that you tried, I guess?
|
| They obviously are aware. They just put all that so they don't
| get "canceled". It's just virtue signaling and covering your
| ass.
| nootropicat wrote:
| This is great news for those that actually have sex with a
| gorilla, because now they can claim it's an ai photo. :)
|
| Kidding aside, I think this is actually good. Humans need
| ephemerality. We are never getting the full version of it back,
| but with photorealistic ai video and image creation some
| freedom returns. I think without it a society in which everyone
| has a camera all the time would mean absolute ossification of
  | social norms. Right now it's very, very new - I mean, imagine
  | multiple generations living with the current technology, or
  | better (e.g. recording eye implants).
| ForrestN wrote:
| I wonder/hope if a downstream effect of this technological
| change will be the end of the idea of a "shameful" or
| "humiliating" image. We all have bodies, we all have sex, and I
| agree that soon images of ourselves being nude/having sex etc.
| will proliferate because they'll be generated instantly via
| Siri shortcut as part of casual banter.
|
| In a world where every celebrity is having sex with gorillas,
  | doesn't such an image lose its charge? Will norms and values
| around sex/body shaming change?
| Fauntleroy wrote:
| If we get to the point where we can't tell AI generated
| images from reality, I'm not sure "body shaming" will be on
| society's collective radar anymore.
| geysersam wrote:
| I bet someone said in 1830, "by the time we can send robots
| to Mars, body shaming will not be a thing anymore". For
| better and worse, that's not how we humans do things
| generally.
| ForrestN wrote:
| Why?
| TylerE wrote:
| Much bigger fish to fry.
|
| Think things like forged evidence in trials.
| telesilla wrote:
| It's the right time to get into the digital forensics
| business.
| 867-5309 wrote:
| ..and the Berlin gorilla nightscene
| mmmpop wrote:
| I second this.
| bwest87 wrote:
| Appropriately prioritizing problems has never really been
| society's strength...
| mmmpop wrote:
| That's because pesky things like "democracy" get in the
| way of getting shit done.
|
| Don't you think?
| BrainVirus wrote:
| _> In a world where every celebrity is having sex with
| gorillas, doesn't such an image lose its charge?_
|
| No, because the image itself doesn't matter. What matters is
    | how much the public wants to hate someone. If the public is
| primed, any remotely plausible incriminating image will do as
| an excuse.
|
    | Fortunately, these images are not far beyond what someone
| can cook up with Photoshop. Unfortunately, it's a part of a
| bigger trend where we get more and more tools to produce,
    | manipulate and share information, while the tools to analyze
| and filter information are lagging by at least half a
| century.
| Swizec wrote:
| > In a world where every celebrity is having sex with
    | gorillas, doesn't such an image lose its charge? Will norms
| and values around sex/body shaming change?
|
| I hoped this would happen with social media. Everyone says
| stupid shit online and everyone has past beliefs they've
| outgrown. So what's the big deal?
|
| Instead we went the opposite way. Everyone is super self
| conscious and censoring at all times because you never know
| who's gonna take it out of context and make a big deal.
| seydor wrote:
| This is not the same though, it's about being able to deny
| that that photo of you is real, not that it's taken out of
| context.
| Swizec wrote:
| Well what's a "real" photo?
|
| If I take your photo and photoshop (very well) different
        | surroundings. Is it a real photo of you?
|
| If I photoshop your face (very well) onto a different
| body. Is that real?
|
| If I feed your photos into a model that can create
| realistic versions of those photos in different poses or
| with different facial expressions. Is that real?
|
| They all start with something that is very definitely a
| real photo. You can't (yet? ever?) generate a realistic
| photo of a specific person from a textual description.
| The machinery needs a source.
| seydor wrote:
| A real photo is one created by photons outside the camera
| Swizec wrote:
| You'll be surprised to learn that doesn't work without
| some amazing tech to process the photons. Different
| settings will produce a different photo.
|
          | Hell, just changing focal length makes a big difference
| to what your face looks like:
| https://imgur.io/gallery/ZKTWi no digital manipulation
| required.
|
| Which of those faces is "real"? They're all just
| recording photons hitting the camera, but look very
| different.
|
| It gets even worse when we start talking about colors.
| For example: it took cameras _decades_ before they could
| accurately capture black faces. Where accurately means
| "an average person would say it looks right"
|
| https://www.vox.com/2015/9/18/9348821/photography-race-
| bias
| kortex wrote:
| Simple: a "real" photo is one in which a light field from
| the real world impinges on a photosensitive media (CCD,
| film), and directly encodes that information, with some
| allowance for global light levels, gain, ISO, and speed.
| Anything else is a modification therein. HDR, multi-
| exposure compositing, etc, aren't truly "real". They may
          | be 99% real, but aren't 100% real. If you _crop_ it, it's
          | 99.9% real (we have models which can detect cropping and
          | even from which region it originated; obviously it
| can't reconstruct the missing data).
|
| Yes, by that definition, most photographs already aren't
| real.
| Swizec wrote:
| > Yes, by that definition, most photographs already
| aren't real.
|
| What if I use no digital manipulation at all, but play
| around with focal lengths or perspective to produce the
| desired effect. Is that a real photo?
|
| For example from covid reporting:
| https://twitter.com/baekdal/status/1254460167812415489
| simondotau wrote:
| So if I take a photo of a photoshopped photo, it is a
| real photo.
| taylorportman wrote:
| It is an interesting point [the hope that values will
| adapt to reflect typical mischievous patterns of social
| dynamics within various clusters].. I used to marvel
| during the dotcom era at the salivating delight of "the
| internet never forgets" of college students caught
| smoking a bong and the subsequent impact on their career
| (virtue signalling?).. I saw hope that society could then
| adapt about ridiculous FUD biases. There is a strange
| relationship of scarcity & opportunity and these windows
| into souls painting a distorted picture of the darker
| side of our ambitions. It is a cancer. Also, humans are
| always testing boundary conditions - curiosity,
          | discontent, security, insecurity. Some inherent
| desperation of people jockeying for the next path from
| frying pan to fire in the search for greener pasture
| breeds opportunism that seems certain to favor
| desperation and negativity.
| tshaddox wrote:
| That seems unlikely, given that _making up hurtful stories
| about people and transmitting them via text or voice_ is
| still a thing. Everyone knows that anyone can make up any
| story they want without any technology whatsoever, and yet
| spreading rumors is still a thing.
| seydor wrote:
      | Not really. I can't think of any recent "leaked texts" that
      | the participants could not easily and plausibly deny (e.g.
| elon's supposed messages to gates), or even voice messages.
| Even most images can already be denied as photoshop if all
| the witnesses agree. The only medium that is somewhat hard
| to deny is videos, like sex tapes, but that's also not too
| hard. I think there will soon be a race to make deep
| learning pics look completely indistinguishable from phone
| pics.
| zone411 wrote:
| It'll be hard to deny crypto-signed photos
| https://petapixel.com/2022/08/08/sonys-forgery-proof-
| tech-ad..., especially if they include metadata that
| distinguish photos of AI generated images from normal
| photos.
| nerdponx wrote:
| Are journalists savvy or ethical enough to give a shit?
| What about the people reading/viewing/listening to the
| news?
| seydor wrote:
| any camera can be hacked to plant an image in its
| framebuffer
| zone411 wrote:
| Maybe at some point for some cameras. But not soon after
| their release if they took steps to protect their
| pipeline with hardware.
| dustyleary wrote:
| There are a few different kinds of 'secure enclaves'
| implemented on chips, where you can have some degree of
| trust that it "cannot" be faked.
|
| E.g. crypto wallets, hardware signing tokens, etc.
|
| We could imagine an imaging sensor chip made by a big-
| name company whose reputation matters, where the imaging
| sensor chip does the signing itself.
|
| So, Sony or Texas Instruments or Canon start
| manufacturing a CCD chip that crypto signs its output.
| And this chip "can't" be messed with in the same way that
| other crypto-signing hardware "can't" be messed with.
|
| That doesn't seem too far-fetched to me.
|
| * edit: As I think about it, I think more likely what
| happens is that e.g. Apple starts promising that any
| "iPhoneReality(tm)" image, which is digitally signed in a
| certain way, cannot have been faked and was certainly
| taken by the hardware that it 'promises' to be (e.g. the
| iPhone 25).
|
| Regardless of how they implement it at the hardware level
| to maintain this guarantee, it is going to be a major
| target for security researchers to create fake images
| that carry the signature.
|
| So, we will have some level of trust that the signature
| "works", because it is always being attacked by security
| researchers. Just like our crypto methods work today.
| There will be a cat-and-mouse game between manufacturers
| and researchers/hackers, and we'll probably know years in
| advance when a particular implementation is becoming
| "shaky".
| tshaddox wrote:
| Perhaps that's somewhat true for famous people, although
| there are plenty of examples of false stories (without
| any forged evidence, literally just stories) causing real
| embarrassment and damage to reputation.
|
| But it's even more true for non-famous people getting
| bullied in their social groups, both online and offline,
| and that's more what I was responding to (the "asshole
| friends" in the original comment).
| tgv wrote:
| You're right. They call this an "ethical release", but what
| ethics, I may ask. A profitable IPO is more likely to have been
| their consideration. As other researchers before them, they are
| willingly releasing something with the potential to do harm, or
| pave the way for it, washing their hands in innocence.
| choppaface wrote:
  | Really, the licensing is the most interesting part? There's a
  | lot of public info about training and development too.
|
| The license itself is pretty irrelevant. What people will
| actually do with the training blueprints, and how fast things
| will evolve.. now that's interesting.
| Geee wrote:
| I think the actual problem is that this gives plausible
| deniability against photographic evidence, which might result
  | in an increase in bad behavior. Even cameras which
| cryptographically sign their output can't prove that the input
| was actually photographed from the real world, or if it's just
| an image of an image.
| jl6 wrote:
| Can this be used to create Imagen/DALL-E levels of image quality
| on consumer GPUs?
| andybak wrote:
| For many classes of prompts - yes. It has less semantic
| awareness in some regards but it's in the right ballpark.
| toxik wrote:
| Takes 10 GiB but yes.
| mkaic wrote:
| This is one of the most important moments in all of art history.
| Millions of people just got unconditional access to the state-of-
| the-art in AI text-to-image for absolutely free, less the cost of
| hardware. I have an Nvidia GPU myself and am thrilled beyond
| belief with the possibilities that this opens up.
|
| Am planning on doing some deep dives into latent-space
| exploration algorithms and hypernetworks in the coming days! This
| is so, so, so exciting. Maybe the most exciting single release in
| AI since the invention of the GAN.
|
| EDIT: I'm particularly interested in training a hypernetwork to
| translate natural language instructions into latent-space
| navigation instructions with the end goal of enabling me to give
| the model natural-language feedback on its generations. I've got
| some rough ideas but haven't totally mapped out my approach yet,
| if anyone can link me to similar projects I'd be very grateful.
| egypturnash wrote:
| It's the opposite of exciting if art is your job, lemme tell
| you.
| grandmczeb wrote:
| Depends on your perspective I guess. The visual artists I
| know are super excited about AI art.
| squeaky-clean wrote:
| I have a friend who works as an artist and he's excited and
| nervous about this. But also he's trying to learn how to use
| these well. If you try these AI out, there's definitely an
| art to writing good prompts that gets what you actually want.
| Hopefully these AI will become just another brush in the
| artist's palette rather than a replacement.
|
| I hope these end up similar to the relationship between
| Google and programming. We all know the jokes about "I don't
| really know how to code, I just know how to Google things".
| But using a search engine efficiently is a very real skill
| with a large gap between those who know how and those who
| don't.
| squeaky-clean wrote:
| Replying to myself because I just had a chat with him about
| this. He's thinking of getting a high end GPU now, lol.
|
| Some ideas of how this could be useful in the future to
| assist artists:
|
| Quickly fleshing out design mockups/concepts is the obvious
| first one that you can do right now.
|
| An AI stamp generator. Say you're working on a digital
| painting of a flower field. You click the AI menu, "Stamp",
| a textbox opens up and you type "Monarch butterfly. Facing
| viewer. Monet style painting." And you get a selection of
| ai generated images to stamp into your painting.
|
| Fill by AI. Sketch the details of a house, select the Fill
| tool, select AI, click inside one of the walls of the
| house, a textbox pops up, you write "pre-war New York brick
| wall with yellow spray painted graffiti"
| fithisux wrote:
    | AI will just replace existing jobs. It is unethical, since a
    | lot of people will be destroyed, but our leaders, sponsored by
    | the majority's insatiable appetite for power, will soon make
    | it legal.
|
| AI is a new tool that will automate away a lot of workers
| like other machines.
|
| What happens with these workers is what defines us.
| boredemployee wrote:
| And there are a lot of influencers saying that it will be
| "really nice that AI will replace the boring jobs so we can
| focus on creative/fulfilling life" yeah right...
| cercatrova wrote:
| Ironically, with Moravec's Paradox, (digital) creative
| tasks will probably be automated while the boring tasks
| of moving boxes around might not be for a while:
|
| > _Moravec 's paradox is the observation by artificial
| intelligence and robotics researchers that, contrary to
| traditional assumptions, reasoning requires very little
| computation, but sensorimotor and perception skills
| require enormous computational resources. The principle
| was articulated by Hans Moravec, Rodney Brooks, Marvin
| Minsky and others in the 1980s. Moravec wrote in 1988,
| "it is comparatively easy to make computers exhibit adult
| level performance on intelligence tests or playing
| checkers, and difficult or impossible to give them the
| skills of a one-year-old when it comes to perception and
| mobility"._
|
| https://en.wikipedia.org/wiki/Moravec%27s_paradox
| mkaic wrote:
| (OP here) I agree. I am an artist (not by trade but by
| lifelong obsession) in several different mediums, but also an
| AI engineer--so I feel a weird mixture of emotions tbh. I'm
| thrilled and excited but also terrified lol.
| egypturnash wrote:
| It's my fucking job. I've spent my whole fucking life
| getting good at drawing. I can probably manage to keep
| finding work but I am really not happy about what this is
| going to do to that place where a young artist is right at
| the threshold of being good enough to make it their job.
| Because once you're spending most of your days doing a
| thing, _you start getting better at it a lot faster_. And
| every place this shit gets used is a lost opportunity for
| an artist to get paid to do something.
|
| I wanna fucking punch everyone involved in this thing.
| cercatrova wrote:
| I mean I get what you're saying, sucks to have someone or
| something take your job, but isn't this a neo-luddite
| type argument? AI is gonna come for us all eventually.
| egypturnash wrote:
| Please save this comment and re-read it when a new
| development in AI suddenly goes from "this is cute" to
| "holy fuck my job feels obsolete and the motherfuckers
| doing it are not gonna give a single thought to all the
| people they're putting out of work". Thank you.
| cercatrova wrote:
| Look at that, you said the same thing to me 8 days ago
| [0]. I'll stick to the same rebuttals you got for that
| comment as well, namely that AI comes for us all, the
| only thing to be done is to adapt and survive, or perish.
| Like @yanderekko says, it is cowardice to assume we
| should make an exception for AI in our specific field of
| interest.
|
| [0] https://news.ycombinator.com/item?id=32462157
| visarga wrote:
        | You've got to move one step higher and work with ideas and
| concepts instead of brushes. AI can generate more imagery
        | than was previously possible, so it's going to be about
        | storytelling or animation.
| egypturnash wrote:
| I make comics, it's already about storytelling and ideas
| as much as it is about drawing stuff. I make comics in
| part because _I like drawing shit_ and that gives me a
| framework to hang a lot of drawings on. I _like_ the set
          | of levels I work at and don't want to change it. I've
| spent an _entire fucking lifetime_ figuring out how to
| make my work something I enjoy and I sure bet nobody
| involved in this is gonna fling a single cent in the
          | direction of the artists they're perpetrating massive
| borderline copyright infringement upon.
|
| But here's all these motherfuckers trying to automate me
| out of a job. It's not even a boring, miserable job. It's
| a job that people dream of having since they were kids
| who really liked to draw. Fuck 'em.
| redavni wrote:
| Eh...the countdown to Photoshop and Blender integrating
| AI support has already started.
| egypturnash wrote:
| I wanna punch everyone involved in that, too.
|
| Admittedly as someone who's been subscribing to Creative
| Cloud for a while I already wanna punch a lot of people
| at Adobe so the people working on this particular part of
| Photoshop are gonna have to get in line.
| boredemployee wrote:
| Yep. I'm glad that didn't hit other spaces, like music, _yet_
| zone411 wrote:
| Check out the melodies I made with an AI assistant I
| created (human-in-the-loop still but much quicker than if I
    | tried to come up with them from scratch):
    | https://www.youtube.com/playlist?list=PLoCzMRqh5SkFwkumE578Y....
    | There are also good AI tools for other parts of making music,
| like singing voice generation.
| fezfight wrote:
    | Kinda like the creation of Copilot and its ilk for
    | programmers, and GPT-3 for writers. I've seen some talk
| recently around 'prompt engineers'... Probably to some
| extent, every job will become prompt engineering in some way.
|
| Eventually I suppose the AIs will also do the prompts.
|
| At which point I hope we've all agreed to a star trek utopia,
| or it's gonna get real bad. Or maybe it'll get way better.
| egypturnash wrote:
| Yeah, if we're gonna replace every fucking profession with
| a half-assed good-enough AI version, what're we even here
| for? We're sure not all gonna survive in a capitalist
| society where you have to create some kind of "value" to
| earn enough money to pay for your roof and your food and
| your power.
|
| IIRC there is some vague "it sure got real bad" somewhere
| in the Trek timelines between "capitalism ended" and "post-
| scarcity utopia" and I sure am not looking forwards to
| living through those times. Well, I'm looking forwards to
| part of that, I'm looking forwards to the part where we
| murder a lot of landlords and rent-seekers and CEOs and
| distribute their wealth. That'll be good.
| bottlepalm wrote:
| Next let's get rid of all the artists and replace them
| with AI. Redistribute their skills so we can all make
| art. Oh wait that just happened. You're a hypocrite that
| wants to redistribute the wealth of others, but not your
| own.
|
| Also joking about murdering people is bad taste and not
| how you convey a point or win an argument. Very low
| class.
| ALittleLight wrote:
      | I think right now we could set up the AIs to do the
| prompts. You type in a vague description - "gorilla in a
| suit" and that is passed to GPT-3's API with instructions
| to provide a detailed and vivid description of the input in
| X style where X is one of several different styles. GPT-3
| generates multiple prompts, the prompts passed to Stable
| Diffusion, the user gets back a grid of different images
| and styles. Selecting an image on the results grid prompts
| for variation, possibly of the prompt and the images.
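      | 
      | A rough sketch of that chain, assuming the older openai
      | Completion API and the diffusers pipeline (the model names,
      | prompt template, and output handling are illustrative
      | guesses, not a tested recipe):
      | 
      |   import openai                    # pre-1.0 client assumed
      |   from diffusers import StableDiffusionPipeline
      |   
      |   openai.api_key = "sk-..."        # your key
      |   
      |   def expand_prompt(idea, n=4):
      |       # Ask GPT-3 for several vivid rewrites of a vague idea.
      |       resp = openai.Completion.create(
      |           model="text-davinci-002",
      |           prompt=f"Rewrite as a vivid, detailed art prompt: {idea}",
      |           max_tokens=60, n=n, temperature=0.9)
      |       return [c.text.strip() for c in resp.choices]
      |   
      |   pipe = StableDiffusionPipeline.from_pretrained(
      |       "CompVis/stable-diffusion-v1-4",
      |       use_auth_token=True).to("cuda")
      |   
      |   for i, prompt in enumerate(expand_prompt("gorilla in a suit")):
      |       out = pipe(prompt)
      |       if hasattr(out, "images"):    # newer diffusers
      |           img = out.images[0]
      |       else:                         # 0.2.x returned a dict
      |           img = out["sample"][0]
      |       img.save(f"variant_{i}.png")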
| tailspin2019 wrote:
| > state-of-the-art in AI image-to-text
|
| I think you meant text-to-image!
| mkaic wrote:
| oh heck, you're right! I edited the original comment, thanks.
| [deleted]
| bilsbie wrote:
| > particularly interested in training a hypernetwork to
| translate natural language instructions into latent-space
| navigation instructions with the end goal of enabling me to
| give the model natural-language feedback on its generations.
|
| What are you doing exactly?
| DougBTX wrote:
| Imagine every conceivable image is laid out on the ground,
| images which are similar to each other are closer together.
| You're looking at an image of a face. Some nearby images
| might be happier, sadder, with different hair or eye colours,
| every possibility in every combination all around it. There
| are a lot of images, so it is hard to know where to look if
| you want something specific, even if it is nearby. They're
| going to write software to point you in the right direction,
| by describing what you want in text.
|
| Here's an example of this sort of manipulation:
| https://arxiv.org/abs/2102.01187
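    | 
    | Very roughly, the "pointing" amounts to finding a direction in
    | latent space from two text descriptions and nudging the
    | current latent along it. A toy sketch of the arithmetic (the
    | encoder and latent below are random placeholders, not a real
    | model):
    | 
    |   import numpy as np
    |   
    |   rng = np.random.default_rng(0)
    |   
    |   # Stand-ins: in practice these come from a text encoder
    |   # (e.g. CLIP) and from the generator's latent for the
    |   # image you are currently looking at.
    |   def embed(text):
    |       return rng.standard_normal(512)
    |   
    |   current_latent = rng.standard_normal(512)
    |   
    |   # A direction = difference between two descriptions.
    |   direction = embed("a smiling face") - embed("a neutral face")
    |   direction /= np.linalg.norm(direction)
    |   
    |   # Step along it; each step would be decoded back to an image.
    |   steps = [current_latent + a * direction
    |            for a in np.linspace(0.0, 3.0, 5)]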
| haneefmubarak wrote:
| AFAICT: making a navigation/direction model that can
| translate phrase-based directions into actual map-based
| directions, with the caveat that the model would be updated
| primarily by giving it feedback the same way that you would
| give a person feedback.
|
| Sounds only a couple of steps removed from basically needing
| AGI?
| space_fountain wrote:
| I suspect you'd want to start by trying to translate
| differences between images into descriptive differences.
| Maybe you could generate examples by symbolic manipulation
      | to generate pairs of images, or maybe NLP can let us find
      | differences between pairs of captions? Large NLP models
| already feel pretty magical to me and encompass things that
| we would have said required AGI until recently so it seems
| possible, though really tough
| daenz wrote:
| >This is one of the most important moments in all of art
| history.
|
| I agree, but not for the reasons you imply. It will force real
| artists to differentiate themselves from AI, since the line is
| now sufficiently blurred. It's probably the death of an era of
| digital art as we know it.
| gjsman-1000 wrote:
| Maybe this will signal a return to "real" non-digital artwork
| and methods...
| quadcore wrote:
      | Wait here, this shit hasn't hit France yet :)
|
      | Those models are trained on artists' work and put those same
      | artists out of work. When that registers with people, I don't
      | think this is gonna fly.
| ben_w wrote:
        | > Wait here, this shit hasn't hit France yet.
|
| What is special about France in this case?
| vintermann wrote:
| Labour protections/willingness to strike, I suspect they
| mean. But I don't buy it. I've seen far too many people
| who "should" be worried about this technology instead be
| absolutely in love with it.
|
          | Case in point, the comic artist Jens K (whom, full
          | disclosure, I support on Patreon):
| https://twitter.com/jenskstyve/status/1560360657148682242
| [deleted]
| telesilla wrote:
| Did we say the same in the 80s when audio sampling became a
| thing? We accepted it (after obligatory legal battles) and
| moved on, giving rise to the Creative Commons.
| wwwtyro wrote:
| Yeah, the fact that these models are necessarily based on
| existing works leaves me hopeful that humans will remain the
| leaders in this space for the time being.
| soulofmischief wrote:
| Human works are needed to create the initial datasets, but
| an increasing amount of models use generative feedback
| loops to create more training data. This layer can easily
| introduce novel styles and concepts without further human
| input.
|
| The time is coming where we will need to, as patrons,
| reevaluate our relationships with art. I fear art is
| returning to a patronage model, at least for now, as
| certainly an industry which already massively exploits
| digital artists will be more than happy to replace them
| with 25% worse performing AI for 100% less cost.
| BeFlatXIII wrote:
| For those who are lucky enough to make it, I foresee
| patronage as being much more stable than making art to
| sell to the masses/corporate ad contracts.
| quadcore wrote:
| The generated pictures that are posted in the blog post
          | are superior to the average artist's work. Which isn't
          | surprising; AI corrects "human mistakes" (e.g.
          | composition, colors, contrasts, lines, etc.) easily.
| archagon wrote:
| Why would people want to consume art that says nothing
| and means nothing? While this technology is fascinating,
| it produces the visual equivalent of muzak, and will
| continue to do so in perpetuity without the ability to
| reason.
| jmfldn wrote:
| That's the problem for me too. This tech is cool for
| games, stock images etc but for actual art it's pretty
| meaningless. The artist's experience, biography and
| relationship with the world and how that feeds into their
| work is the WHOLE point for me. I want to engage, via any
| real artistic product, with a lived human experience.
| Human consciousness in other words.
|
| To me this technology is very clever but it's meaningless
| as far as real art goes, or it's a sideshow at best.
| Perhaps best case, it can augment a human artistic
| practice.
| visarga wrote:
          | It's easy to generate a believable backstory. A large LM
          | can write the bio of the "artist" and even a few
| press interviews. If you like you can chat with it, and
| of course request art to specification. You can even keep
| it as a digital companion. So you can extend the realism
| of the game as much as you like.
| ben_w wrote:
| How much is really being said by the highest dollar-
| valued modern music?
|
| https://youtu.be/oOlDewpCfZQ and
| https://youtu.be/L2cfxv8Pq-Q come to mind for different
| reasons.
| robocat wrote:
| Can photography be good art? Is Marcel Duchamp (found
| object) art? Can good art be discovered almost
| serendipitously, or can good art only be created by
| slowly learning and applying a skill?
|
| I think art is mostly about perception and selection, by
| the viewer. There are others that think art is more about
| the crafting process by the artist. How do you tell the
| difference between an artist and a craftsperson?
|
| One way I categorise artists I have met is engineer-type
| artists versus discovery-type artists:
| https://news.ycombinator.com/item?id=31981875
|
          | Disclaimer: I am an engineer.
| scarmig wrote:
| We can tell the difference between muzak and "real
| music"; we just dislike the muzak. But the real risk and
| likelihood is that we get to the point that AI will be
| generating art that is indistinguishable from human-
| generated art, and it's only muzak if someone subscribes
| to the idea that the content of art is less relevant than
| its provenance. Some people will, particularly rich
| people who use art as a status signifier/money laundering
| vehicle, but mass media artists will struggle to find
| buyers among the less discerning mass audience.
| buildartefact wrote:
| Marvel Studios makes billions of dollars every year
| archagon wrote:
| And I'm fairly confident in saying that AI will never be
| able to generate a Marvel movie! (Not in our lifetimes,
| anyway.)
| BobbyJo wrote:
| Humans are trained on other humans' work as well though. Is
| there a type of ideological or aesthetic exploration that
| can't be expressed as part of an AI model?
| derac wrote:
| Making art is already not making much money for the vast
| majority of producers (outside of 3d modeling). There really
| aren't very many jobs in making art. I'd reckon most people
| are artists because they love doing it.
| rvz wrote:
| Great news of the public release, just look at the melting pot of
| creativity and innovation already being posted here: [0] Much
| better than DALLE-2.
|
| Well done to the Stable Diffusion team for this release!
|
| [0] https://old.reddit.com/r/StableDiffusion/
| mmastrac wrote:
| How hard would this be to re-train to more domain-specific
| images? For example, if I wanted to teach it more about specific
| birds, cars or plane models?
| serf wrote:
| this one was incredibly easy to trick into producing uncensored
| nudes.
|
| which. personally.. I think is great.. but to each their own
|
| (NSFW!!!) this ween does not exist :
| https://i.ibb.co/D7qJ7HC/23456532.png
| amrrs wrote:
  | They have a content filter classifier at the top; wondering how
| this escaped that.
| GaggiX wrote:
| You can disable it
| contravariant wrote:
| Don't tell it to make stuff in the style of Junji Ito.
| bluejellybean wrote:
| While neat, and no doubt impressive, it still utterly fails on
| prompts that should be completely reasonable to any sane human
| being/artist.
|
| Take something like "A cat dancing atop a cow, with utters that
| are made out of ar-15s that shoot lazer-beam confetti". A vivid
| image should form in your head, and no doubt I could imagine an
| artist having a lot of fun bringing such a description to
| life... Alas, what the model spits out is pure unusable
| garbage.
| topynate wrote:
| The referent of "utters" (sic) is ambiguous, so I can imagine a
| model having more difficulty with it than usual. Regardless,
| the current SOTA does need more specific and sometimes
| repetitive prompting than a human artist would, but it's
| surprising how much better results you can get from SOTA models
| with a bit of experience at prompt engineering.
| bluejellybean wrote:
| This is, in part, what I'm trying to point out, it's an
| obvious typo given the context, and something that you or I
| would be able to pick up on, yet it completely breaks (it
| spit out a bunch of weird confetti cats for me). Perhaps I'm
| being a little harsh, but if it requires word-perfect tuning
| and prompt engineering, it speaks to something about the
| 'stupidity' of these models. It's a neat trick, but to call
| it anything in the realm of artificial intelligence is a bit
| of a joke.
| TulliusCicero wrote:
| More complex/weirder prompts aren't going to work yet, no.
|
| What will probably happen with these models is that for more
  | advanced stuff, you may end up using the "inpainting" that
  | Dall-E already has going, where you can sort of mix and match and
| combine images. That way you could have the cat, for example,
| rendered separately, thereby simplifying each individual
| prompt.
| h2odragon wrote:
| ITYM "udders"
|
| also try "teats"
| naillo wrote:
| Fucking awesome
| mysterydip wrote:
| Really interesting. I wonder if at some point it would be
| possible to optimize a network for size and speed by focusing on
| a specific genre, like impressionist or only pixel art. I like
| that I can get an image in any style I want, but that has to
| increase the workload substantially.
| badsectoracula wrote:
| Is there any way to download this on my PC and run it offline?
| Something like a command-line tool like:
|   $ ./something "cow flying in space" > cow-in-space.png
|
| that runs with local-only data (i.e. no internet access, no DRM,
| no weird API keys, etc like pretty much every AI-related
| application i've seen recently) would be neat.
| cercatrova wrote:
| Yes, clone the repo (https://github.com/CompVis/stable-
| diffusion), download the weights and follow the readme for
| setting up a conda environment. I am presently doing so on my
| RTX 3080.
| isoprophlex wrote:
| I can't believe i sold my rtx2070 last month, aaargh...!
| digitallyfree wrote:
  | As an aside I wonder what performance would be like running this
| on CPU (with the current GPU shortage this might well be a
| worthwhile choice). Even something like 30 minutes to generate
| an image on a multicore CPU would greatly increase the number
| of people able to freely play with this model.
| drexlspivey wrote:
  | Yes but you need a GPU with 10 GB of VRAM
| GaggiX wrote:
| Yes you can
| https://github.com/huggingface/diffusers/releases/tag/v0.2.3
| (probably the easiest way)
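  | 
  | For reference, a minimal sketch of what that looks like with
  | the diffusers API (argument names follow the 0.2.x-era release
  | notes and may shift between versions; you still need to accept
  | the model license on Hugging Face first):
  | 
  |   # pip install diffusers transformers torch
  |   from diffusers import StableDiffusionPipeline
  |   
  |   pipe = StableDiffusionPipeline.from_pretrained(
  |       "CompVis/stable-diffusion-v1-4",
  |       use_auth_token=True)    # token from `huggingface-cli login`
  |   pipe = pipe.to("cuda")
  |   
  |   out = pipe("cow flying in space", guidance_scale=7.5)
  |   if hasattr(out, "images"):  # newer diffusers
  |       image = out.images[0]
  |   else:                       # 0.2.x returned a dict
  |       image = out["sample"][0]
  |   image.save("cow-in-space.png")
  | 
  | Wrapped in a small argparse script, that is essentially the
  | offline `./something "cow flying in space"` tool asked for
  | above; everything runs locally once the weights are cached.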
| timmg wrote:
| Garr:
|
| > And log in on your machine using the huggingface-cli login
| command.
|
| I find that annoying. I guess it is what it is.
| mkaic wrote:
| Yes, that's actually the biggest reason this is such a cool
| announcement! You just need to download the model checkpoints
| from HuggingFace[0] and follow the instructions on their Github
  | repo[1] and you should be good to go. You basically just
| need to clone the repo, set up a conda environment, and make
| the weights available to the scripts they provide.
|
| [0] https://huggingface.co/CompVis/stable-diffusion [1]
| https://github.com/CompVis/stable-diffusion
|
| Good luck!
| vintermann wrote:
| You need a decent GPU, though. I suspect my 6080MiB won't cut
| it any longer :(
| miohtama wrote:
| Is Apple M1 support expected soon? Because even if Apple's
| chips are slower, they have plenty of RAM on laptops. I saw
| some weeks ago it was coming, but I am not sure where to
      | follow the progress.
| neurostimulant wrote:
| You're going to need at least 10GB VRAM. My SFF pc with 4GB
| VRAM can only run dalle mini / craiyon :(
| mempko wrote:
| Not if you change the precision to float16. Should work
| on a smaller card. Tried on a 1080 with 8GB and it works
| well.
| krisoft wrote:
| How would one do that?
|
| -----
|
| Sorry my bad, found the answer. One simply adds the
| following flags to the
| StableDiffusionPipeline.from_pretrained call in the
| example: revision="fp16", torch_dtype=torch.float16
|
| Found it in this blogpost:
| https://huggingface.co/blog/stable_diffusion
|
| mempko thank you for your hint! I was about to drop a not
| insignificant amount of money on a new GPU.
|
| What does one lose by using float16 representation? Does
| it make the images visually less detailed? Or how can one
| reason about this?
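          | 
          | Putting that together, the half-precision load looks
          | roughly like this (a sketch; the exact kwargs track
          | the diffusers version):
          | 
          |   import torch
          |   from diffusers import StableDiffusionPipeline
          |   
          |   # fp16 roughly halves the VRAM the weights need; the
          |   # generated images are usually visually near-identical
          |   # to fp32 output, though exact pixels differ.
          |   pipe = StableDiffusionPipeline.from_pretrained(
          |       "CompVis/stable-diffusion-v1-4",
          |       revision="fp16",
          |       torch_dtype=torch.float16,
          |       use_auth_token=True).to("cuda")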
| [deleted]
| ompto wrote:
| There's a version that's a bit slower but more memory
| efficient https://github.com/basujindal/stable-diffusion
| that runs on 6GB too.
| naillo wrote:
| This should be possible if someone just exported them to tflite
| or onnxruntime etc (quantization could help a ton too). Not
| sure why ppl haven't yet. Sure it'll come in the next few days
| (I might do it).
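  | 
  | The general shape of such an export, using a toy stand-in
  | module rather than the actual SD components (which would be
  | exported piece by piece: text encoder, UNet, VAE decoder):
  | 
  |   import torch
  |   
  |   class Tiny(torch.nn.Module):       # stand-in for a real submodule
  |       def forward(self, x):
  |           return torch.nn.functional.relu(x) * 2
  |   
  |   model = Tiny().eval()
  |   dummy = torch.randn(1, 4, 64, 64)  # latent-shaped dummy input
  |   torch.onnx.export(model, dummy, "tiny.onnx",
  |                     input_names=["latent"], output_names=["out"],
  |                     opset_version=14)
  |   # The .onnx file then runs under onnxruntime, where dynamic
  |   # or static quantization can shrink it further.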
| [deleted]
| jsmith45 wrote:
| I _think_ the answer is yes, but setup is a bit complicated. I
  | would test this myself, but I don't have an NVIDIA card with
| at least 10GB of VRAM.
|
| One time:
|
| 1. Have "conda" installed.
|
| 2. clone https://github.com/CompVis/stable-diffusion
|
| 3. `conda env create -f environment.yaml`
|
| 4. activate the Venv with `conda activate ldm`
|
| 5. Download weights from https://huggingface.co/CompVis/stable-
| diffusion-v-1-4-origin... (requires registration).
|
| 6. `mkdir -p models/ldm/stable-diffusion-v1/`
|
| 7. `ln -s <path/to/model.ckpt> models/ldm/stable-
  | diffusion-v1/model.ckpt`. (you can download the other versions
| of the model, like v1-1, v1-2, and v1-3 and symlink them
| instead if you prefer).
|
| To run:
|
| 1. activate venv with `conda activate ldm` (unless still in a
| prompt running inside the venv).
|
| 2. `python scripts/txt2img.py --prompt "a photograph of an
| astronaut riding a horse" --plms`.
|
| Also there is a safety filter in the code that will black out
  | NSFW or otherwise potentially offensive images (presumably
| also including things like swastikas, gore, etc). It is trivial
| to disable by editing the source if you want.
| cypress66 wrote:
    | I haven't gotten around to it, but I remember reading on /g/
| that you can make it run on 5GB (sacrificing accuracy).
|
| You should check their threads there, there's some good info.
| entrep wrote:
| Thanks for these instructions.
|
| Unfortunately I'm getting this error message (Win11, 3080
| 10GB):
|
| > RuntimeError: CUDA out of memory. Tried to allocate 3.00
| GiB (GPU 0; 10.00 GiB total capacity; 5.62 GiB already
| allocated; 1.80 GiB free; 5.74 GiB reserved in total by
| PyTorch) If reserved memory is >> allocated memory try
| setting max_split_size_mb to avoid fragmentation. See
| documentation for Memory Management and
| PYTORCH_CUDA_ALLOC_CON
|
| Edit:
|
| >>> from GPUtil import showUtilization as gpu_usage
|
| >>> gpu_usage()
|
| | ID | GPU | MEM |
|
| ------------------
|
| | 0 | 1% | 6% |
|
| Edit 2:
|
| Got this optimized fork to work:
| https://github.com/basujindal/stable-diffusion
| orpheansodality wrote:
      | I also have a 10GB card and saw the same thing - to get it
| working I had to pass in "--n_samples 1" to the command,
| which limits the number of generated images to 2 in any
| given run. This has been working fine for me
| squeaky-clean wrote:
| There's some additional discussion on running it locally here
|
| https://old.reddit.com/r/StableDiffusion/comments/wuyu2u/how...
| Traubenfuchs wrote:
| So is there a simple way to do this online? I have no dedicated
| gpu and won't buy one just for this. We'd pay.
| scoopertrooper wrote:
| https://huggingface.co/spaces/stabilityai/stable-diffusion
|
| Here you go.
| mromanuk wrote:
      | The queue to make images was 1; now it's 5.
| marc_io wrote:
| https://beta.dreamstudio.ai/
| panabee wrote:
  | hotpot.ai should offer Stable Diffusion later today, and it will
  | be available via API as well.
|
| we're also building an open-source version of imagen, if anyone
| likes working on this kind of applied ML (need ML + design
| help).
| fpgaminer wrote:
| Played with it for a bit in DreamStudio so I could control more
| of the settings. So far everything it generates is "high
| quality", but the AI seems to lack the creativity and breadth of
| understanding that DALL-E 2 has. OpenAI's model is better at
| taking wildly differing concepts and figuring out creative ways
| to glue them together, even if the end result isn't perfect.
| Stable Diffusion is very resistant to that, and errs towards the
| goal of making a high quality image. If it doesn't understand the
| prompt, it'll pick and choose what parts of the prompt are
| easiest for it and generate fantastic looking results for those.
| Which is both good and bad.
|
| For example, I asked it in various ways for a bison dressed as an
| astronaut. The results varied from just photos of astronauts, to
| bison on earth, to bison on the moon. The bison was always
| drawn hyper realistically, which is cool, but none of them were
| dressed as an astronaut. DALLE on the other hand will try all
| kinds of different ways that a bison might be portrayed as an
| astronaut. Some realistic, some more imaginative. All of them
| generally trying to fulfill the prompt. But many results will be
| crude and imperfect.
|
| I personally find DALLE to be more satisfying to play with right
| now, because of that creativity. I'm not necessarily looking for
| the highest quality results. I just want interesting results that
| follow my prompt. (And no, SD's Scale knob didn't seem to help
| me). But there's also a place for SD's style if you just want
| really great looking, but generic stuff.
|
| That said, the current version of SD was explicitly finetuned on
| an "aesthetically" ranked dataset. So these results aren't really
| surprising. I'm sure the next generations of SD will start
| knocking DALLE out of the park in both metrics. And, of course,
| massive massive props to Stability.ai for releasing this
| incredible work as open source. Imagine all the tinkering and
| evolving people are going to do on top of this work. It's going
| to be incredible.
| GenericPoster wrote:
| Interesting. I took a stab at your prompt and SD really
| struggles. It just completely ignores part of the prompt. Even
| craiyon puts in an effort to at least complete the entire
| prompt.
|
| The bison is very realistic at least. So maybe the future is
| different models that have different specialties.
|
| Edit: managed to get this one after a few more tries
| https://imgur.com/a/3061n5d
| ralfd wrote:
| I read that you have to give longer prompts to Stable
| Diffusion.
|
| This is my bison astronaut:
|
| https://i.imgur.com/ohIuG6F.png
|
| The prompt was:
|
| "A bison as an astronaut, tone mapped, shiny, intricate,
| cinematic lighting, highly detailed, digital painting,
| artstation, concept art, smooth, sharp focus, illustration, art
| by terry moore and greg rutkowski and alphonse mucha"
| naillo wrote:
| Img2img and inpainting are usually a lot easier ways to get
| cool results (with a lot of control on your end), in my view.
| Tenoke wrote:
| I'm getting a 403 on the Colab (while successfully logging in and
| providing a huggingface token). Is it already disabled? Do you
| have to pay huggingface to download the model? It's unclear from
| the Colab and post where the issue is.
| punkspider wrote:
| The solution seems to be to visit
| https://huggingface.co/CompVis/stable-diffusion-v1-4 and check
| a checkbox and click the button to confirm access.
|
| The full error I also got was: HTTPError: 403
| Client Error: Forbidden for url:
| https://huggingface.co/api/models/CompVis/stable-
| diffusion-v1-4/revision/fp16
|
| I visited https://huggingface.co/api/models/CompVis/stable-
| diffusion-v... and saw {"error":"Access to
| model CompVis/stable-diffusion-v1-4 is restricted and you are
| not in the authorized list. Visit
| https://huggingface.co/CompVis/stable-diffusion-v1-4 to ask for
| access."}
|
| Eventually I focused and realized I needed to visit that URL to
| solve the issue. Hope this helps.
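|
| (If you want to sanity-check from code whether your token has
| access yet, something like the snippet below should hit the
| same endpoint as above -- the Bearer-token header is my
| assumption about how the Hub API authenticates, so verify it:)
|
|       # quick access check against the same Hub endpoint quoted above;
|       # replace the placeholder with your own HF token
|       import requests
|
|       r = requests.get(
|           "https://huggingface.co/api/models/CompVis/stable-diffusion-v1-4",
|           headers={"Authorization": "Bearer hf_your_token_here"},
|       )
|       print(r.status_code)   # 403 until access is granted on the model page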
| johnsimer wrote:
| Yeah, visit the URL, tick the checkbox, and click Accept.
| hunkins wrote:
| This release changes society forever. Free and open access to
| generate a hyper-realistic image via just a text prompt is more
| powerful than I think we can imagine currently.
|
| Art, media, politics, conspiracy theories; all of it changes with
| this.
| gjsman-1000 wrote:
| Eh... if I was making conspiracy theories, it's not like
| Photoshop hasn't existed for decades already, with far more
| predictable results.
| johnfn wrote:
| Photoshop requires hours of work from a skilled professional
| to create results of decent quality. Now anyone can do it for
| free, virtually instantaneously.
| seydor wrote:
| This is like a gazillion photoshops being released in the
| wild. Things change with scale, and there is a threshold
| where, if enough people start doubting often enough, then all
| the people will doubt all the time
| TulliusCicero wrote:
| Photoshop requires skill. This mostly doesn't.
| [deleted]
| jcims wrote:
| >it's not like Photoshop hasn't existed for decades already,
| with far more predictable results.
|
| Agreed. For me those results are predictably shit. Every
| time.
| realce wrote:
| Photoshop requires experience and some talent, this doesn't.
| If I was some small rebel group in Africa or the Middle East
| with basically no money or training, I'd use this tool every
| single day until I was in power, or I'd frame my opposition
| as using it against the People.
|
| Everyone just got their own KGB art department.
| roguas wrote:
| Try and do that. Those people likely won't care too much about
| you; they lean towards authority figures in their community. It
| is way easier to find those figures and corrupt them than to
| run some underground news agency changing the minds of millions
| of people.
|
| In fact the former is often the case anyway: it's cheaper and
| takes less time to execute. "Western" societies will be more
| resilient to this scenario. So mostly what we're going to see
| is a lot of "political" art.
| vagabund wrote:
| Longterm, multimodal generative models will be society-
| altering. But right now this is just a really cool toy.
| rvz wrote:
| Yes. This changes everything.
|
| From this point on, you _really_ cannot believe any image you
| see on the internet anymore.
| kertoip_1 wrote:
| Ahh, this technology moves so fast I'm not even able to keep
| up with reading about it. Not to mention trying it myself.
| aaroninsf wrote:
| Price check for A6000 GPU: $4500 USD
|
| Hmmm.
| andybak wrote:
| It's running fine on my gaming laptop.
| zone411 wrote:
| You don't need an A6000. You can run it on consumer GPUs.
| Kiro wrote:
| So how do I set up a server to generate images for me? An API I
| can post anything to and get an image back.
| derac wrote:
| You'll need to know how to program. Try Python with Flask or
| Bottle.
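|
| As a rough sketch of the shape (the generate_image() helper
| here is hypothetical -- it stands in for whatever wrapper you
| write around the SD scripts or pipeline):
|
|       # Minimal Flask wrapper sketch around a hypothetical
|       # generate_image() helper.
|       from flask import Flask, request, send_file
|
|       app = Flask(__name__)
|
|       def generate_image(prompt: str) -> str:
|           # hypothetical: call into Stable Diffusion here and
|           # return the path of the saved PNG
|           raise NotImplementedError
|
|       @app.route("/generate", methods=["POST"])
|       def generate():
|           prompt = request.json["prompt"]
|           return send_file(generate_image(prompt), mimetype="image/png")
|
|       if __name__ == "__main__":
|           app.run(host="0.0.0.0", port=8000)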
| Kiro wrote:
| Very funny.
|
| Anyway, this pretty much answered my question:
| https://news.ycombinator.com/item?id=32556277
| derac wrote:
| Sorry I misinterpreted what you meant, I wasn't meaning to
| be glib.
| TeeMassive wrote:
| That explains OpenAI's price reduction email that I got this
| morning.
| throwaway-jim wrote:
| Will anyone think about the illustrators? Who will pay them when
| amateurs can generate nine good-enough images from a text prompt?
| MaxikCZ wrote:
| And remember that terrible time the plow was invented? So many
| people with shovels lost their jobs.
| simonw wrote:
| If you want to try it out this seems to be the best tool for
| doing so: https://beta.dreamstudio.ai/
| criddell wrote:
| If I'm willing to buy a computer, any pointers on what I would
| have to buy? I'm asking about specific models from a company
| like Dell, Apple, or Lenovo.
| Sohcahtoa82 wrote:
| You'll need a GPU. One with a LOT of RAM, like an RTX 3090,
| which has 24 GB.
| msoucy wrote:
| According to this post, it needs 6.9 Gb. So the 3070,
| 3070-Ti, 3080, etc. can all run it. Sadly, my RTX 2060 is
| below that limit...
| smoldesu wrote:
| Apparently the model decompresses, and it won't fit very
| well on the 8 GB models... I'm willing to give the max
| settings a spin on my 3070 Ti, but I'm not very hopeful.
| criddell wrote:
| Would a Mac Pro with a Radeon Pro W5700X with 16 GB of
| GDDR6 memory work?
| barbecue_sauce wrote:
| It says that NVIDIA chips are recommended but that they
| are working on optimizations for AMD. This implies to me
| that it probably involves CUDA stuff and getting it to
| run on a Radeon would be potentially difficult (I am not
| an expert on the current state of CUDA to AMD
| compatibility, though).
| Rzor wrote:
| AMD's answer to CUDA is called ROCm. I've been doing a
| little research on it for a few weeks now and it seems
| to be funky when not outright broken. It's absolutely
| maddening that after all this time AMD doesn't have
| proper tooling on consumer GPUs.
| mattkevan wrote:
| They're also working on M1/M2 support.
| derac wrote:
| Wait for the 4070; it should be around the perf of a
| 3080, maybe a 3090, for ~500 bucks if the rumors hold up. It is
| coming in a few months. NZXT used to make good pre-builts.
| Not sure which others have a good rep. DO NOT BUY DELL.
| criddell wrote:
| What's wrong with Dell? Is Lenovo a bad option as well?
| Does NZXT offer on-site service?
| derac wrote:
| You aren't getting on-site service with a consumer
| product. There are plenty of 3rd party people who can
| service your computer, though. It's like Lego.
|
| Dell uses crappy proprietary tech, poor quality
| components, and they have an all around bad reputation.
|
| NZXT uses good components and they make some of the best
| cases you can buy.
|
| I don't know much about Lenovo's desktop products.
|
| You might try posting on reddit.com/r/suggestapc and asking
| about the best service contracts and high-quality system
| integrators.
|
| Edit: that particular subreddit looks pretty dead, actually.
| The big one is r/buildapcsales; you can take a look at their
| sidebar or Discord and ask around.
|
| One more thing: GamersNexus on YouTube reviews pre-builts and
| they are the best at this sort of thing. Their community is
| likely very helpful as well.
|
| https://youtube.com/playlist?list=PLsuVSmND84QuM2HKzG7ipb
| IbE...
|
| The biggest issue with most pre-builts is terrible
| airflow making the expensive components throttle. The
| (Dell) Alienwares are some of the worst for this.
| hedora wrote:
| I'd rent a cloud VM with a beefy GPU from Paperspace or a
| similar company. It'll run about $20 per month for casual
| use.
| mattkevan wrote:
| Go for Google Colab Pro - about £12 per month - or Pro+ at
| £45 per month for A100 and V100 GPUs.
| jmfldn wrote:
| I just tried it via this link. I'm not sure what I'm looking at
| here but the results were extremely underwhelming. I've used
| Dall E 2 and Midjourney extensively so I know what they're
| capable of. Maybe I'm missing something?
| derac wrote:
| This Twitter thread has a lot of great comparisons of the
| models with different prompts.
|
| https://twitter.com/fabianstelzer/status/1561019215754280963
| akvadrako wrote:
| I've only used it via discord, but it's much better than
| Midjourney and sometimes better than Dall-E. So maybe that
| site isn't the same thing or you need to work on your
| prompts.
| jcims wrote:
| I haven't seen anything beat midjourney for creating
| atmosphere.
|
| DALL-E is great at making 'things' and generally good/great
| at faces.
| andybak wrote:
| Every model I've used initially seemed poor compared to the
| one I was just using. It takes time to figure out their sweet
| spot and what kind of prompts they excel at.
|
| I've had a lot of great results from SD - but _different_
| great results to Dall-E.
| alcover wrote:
| Moral question:
|
| Such a tool _will_ be used to generate lewd imagery involving
| virtual minors.
|
| There is no way to prevent this upstream by outlawing feeding
| it real content (possession of which is already illegal); it
| would suffice to add a 'childrenize' layer onto an adult NN or
| something.
|
| How will the legal system react? Bundle it in with illegal
| imagery, period? Maybe that's already the case - I think a
| drawing made public is, not sure. If not, on what grounds
| could it be? No real minor would be involved in that
| production.
| ta_99 wrote:
| naillo wrote:
| If it is made illegal it'll probably be applied to the sites
| that distribute stuff like that, not this model itself.
| miohtama wrote:
| Thoughtcrime is not a thing in the US; I don't know about other
| countries.
|
| https://en.wikipedia.org/wiki/Thoughtcrime
|
| Though the "think of the children" folks have often tried to
| make it illegal.
| tomtom1337 wrote:
| > The final memory usage on release of the model should be 6.9 Gb
| of VRAM.
|
| Do they actually mean GB or Gb? Can anyone confirm?
| coolspot wrote:
| Gigabytes not gigabits for sure
| soperj wrote:
| This is honestly the best one so far for the things I'm looking
| to do. For weird prompts though it sometimes produces images that
| just look blurry.
| gjsman-1000 wrote:
| DALL-E 2 just got smoked. Anyone with a graphics card isn't going
| to pay to generate images, or have their prompts blocked because
| of the overly aggressive anti-abuse filter, or have to put up
| with the DALL-E 2 "signature" in the corner. It makes me wonder
| how OpenAI is going to work around this because this makes DALL-E
| 2 a very uncompetitive proposition. Except, of course, for people
| without graphics cards, but it's not 2020 anymore.
| at_a_remove wrote:
| I have been considering building a "modern" computer and now
| wonder exactly what I need to load this puppy up.
| TulliusCicero wrote:
| People are saying you need a GPU with 6.9GB of RAM for the
| current model, so in practice at least an 8GB GPU.
|
| Thankfully, GPU prices have finally calmed down and you can
| get one for a reasonable price. I think any of the RTX 3000
| series desktop GPUs should do it, for example.
| Baeocystin wrote:
| DALL-E's filters are so harsh that I find myself often in the
| situation where I don't even understand how what I prompted
| could possibly be in violation.
|
| It's a novel feeling, but utterly stifling when it comes to
| actual creativity, and I'm not even trying to push any NSFW
| boundaries, just explore the artspace. Once I can run
| unfiltered on my own GPU, DALL-E will never get used by me
| again.
| vintermann wrote:
| The usual modern Google experience: they won't tell you what
| you did wrong.
| derac wrote:
| OpenAI isn't an Alphabet/Google company.
| throwaway675309 wrote:
| Midjourney also completely destroys DALL-E from a price
| perspective, effectively allowing nearly unlimited generation
| for approximately $50 a month.
|
| Even though DALL-E tends to be better at following prompt
| details, you're inhibited from being able to explore the space
| freely because of how prohibitively expensive it can become.
| polygamous_bat wrote:
| This, however, is unconditionally good for the end users. I
| expect OpenAI to lower their prices significantly quite soon.
| morsch wrote:
| In fact, they announced lower prices today, going into effect
| in September.
|
| https://openai.com/api/pricing/
| aaronharnly wrote:
| That's for GPT-3 text generation, not the DALL-E 2 image
| generator. Hopefully that will get pricing revised down
| (and an official API) before long.
| morsch wrote:
| Oh, I see! Thanks, I should have read the mail more
| closely.
| TOMDM wrote:
| Been playing with this for the past hour now on my RTX 2070. Each
| image takes about 10 seconds.
|
| Results vary wildly but it can make some really great stuff
| occasionally. It's just infrequent enough to keep you going "one
| more prompt".
|
| Super addicting.
|
| Looking forward to people implementing inpainting and all the
| stuff that lets you do.
| jupp0r wrote:
| "Use Restrictions
|
| You agree not to use the Model or Derivatives of the Model:
|
| - In any way that violates any applicable national, federal,
| state, local or international law or regulation;
|
| - For the purpose of exploiting, harming or attempting to exploit
| or harm minors in any way;
|
| - To generate or disseminate verifiably false information and/or
| content with the purpose of harming others;
|
| - To generate or disseminate personal identifiable information
| that can be used to harm an individual;
|
| - To defame, disparage or otherwise harass others;
|
| - For fully automated decision making that adversely impacts an
| individual's legal rights or otherwise creates or modifies a
| binding, enforceable obligation;
|
| - For any use intended to or which has the effect of
| discriminating against or harming individuals or groups based on
| online or offline social behavior or known or predicted personal
| or personality characteristics;
|
| - To exploit any of the vulnerabilities of a specific group of
| persons based on their age, social, physical or mental
| characteristics, in order to materially distort the behavior of a
| person pertaining to that group in a manner that causes or is
| likely to cause that person or another person physical or
| psychological harm;
|
| - For any use intended to or which has the effect of
| discriminating against individuals or groups based on legally
| protected characteristics or categories;
|
| - To provide medical advice and medical results interpretation;
|
| - To generate or disseminate information for the purpose to be
| used for administration of justice, law enforcement, immigration
| or asylum processes, such as predicting an individual will commit
| fraud/crime commitment (e.g. by text profiling, drawing causal
| relationships between assertions made in documents,
| indiscriminate and arbitrarily-targeted use)."
|
| The last point seems to be the only thing that's not already
| illegal; all the other restrictions seem to be covered under
| "you are not allowed to break laws", which is somewhat
| redundant.
| dang wrote:
| Recent and related:
|
| _Stable Diffusion launch announcement_ -
| https://news.ycombinator.com/item?id=32414811 - Aug 2022 (39
| comments)
| ChildOfChaos wrote:
| So what do I need to run this and how?
| andybak wrote:
| Anything close to a decent modern gaming PC will do it fine.
| I'm running on a laptop 3080 and I can generate 768x512 in
| about 20 seconds (with a 30 second overhead per batch)
| swagmoney1606 wrote:
| superdisk wrote:
| Just tried it, the results seem pretty poor, about on par with
| Craiyon/DALL-E Mini. I don't think OpenAI should be worried quite
| yet.
| i_like_apis wrote:
| No, it's all about the prompt. It takes some getting used to;
| you have to look at a bunch of prompts from experienced users.
|
| SD is waaay better than Craiyon, and better than DALL-E 2.
|
| Check out r/StableDiffusion
| minimaxir wrote:
| It depends on the domain. Artsy prompts will do better with
| Stable Diffusion, but realistic/coherent output is harder to
| get out of it, especially compared to DALL-E 2.
| andybak wrote:
| With all due respect, I've been using it for over a week and I
| don't think you've given it a fair shot.
|
| There's plenty of cases it's worse than Dall-E and there's
| plenty of cases where it's better. Overall it seems to show
| less semantic understanding but it handles many stylistic
| suggestions much better. It's definitely in the right ballpark.
|
| In fact I'm still using a wide range of models - many of which
| aren't regarded as "state of the art" any more - but they have
| qualities that are unique and often desirable.
| treis wrote:
| Can you give an example? I've done:
|
| A house painted blue with a white porch
|
| A dreamy shot of an alpaca playing lacrosse
|
| A red car parked in a driveway
|
| The last one was particularly crappy. It gave me a red house
| with a driveway, but no car. And the house wasn't even really
| a house. It superficially looked like one but was actually
| two garages put together.
| andybak wrote:
| Here's some random prompts I've had nice results from:
|
|     iridescent metal retro robot made out of simple geometric
|       shapes. tilt shift photography. award winning
|     Scene in a creepy graveyard from Samurai Jack by Genndy
|       Tartakovsky and Eyvind Earle
|     virus bacteria microbe by haeckel fairytale magic realism
|       steampunk mysterious vivid colors by andy kehoe amanda clarke
|     etching of an anthropomorphic factory machine in the style
|       of boris artzybasheff
|     origami low polygon black pug forest digital art hyper realistic
|     a tilt shift photo of a creepy doll Tri-X 400 TX by gerhard
|       richter
|
| I guess I might have spent more time reading guides on
| "prompt engineering" than you. ;-) I think maybe Dall-E is
| more forgiving of "vanilla prompts".
|
| However I do get nice results from simpler prompts as well.
| I just tend to use this style of prompt more often than
| not.
| mattkevan wrote:
| Agreed. I still primarily use VQGAN + CLIP, which is nowhere
| near state of the art, but produces really interesting
| results. I've spent a long time learning to get the best out
| of it, and while the results aren't very coherent, it's great
| at colour, texture, materials and lighting.
| lxe wrote:
| The adjustable safety classifier makes this release leapfrog
| the competition, imho.
| [deleted]
| bilsbie wrote:
| Is there a way to give it an image to manipulate as part of the
| prompt?
| neurostimulant wrote:
| Maybe try the img2img script?
| https://github.com/CompVis/stable-diffusion/blob/main/README...
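|
| From what I can tell from the README, the invocation looks
| roughly like this (flag names are my reading of the script, so
| verify against the repo):
|
| `python scripts/img2img.py --prompt "a fantasy landscape,
| trending on artstation" --init-img path/to/your-image.jpg
| --strength 0.75`
|
| where --strength (0 to 1) controls how far the output is
| allowed to drift from the input image.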
| davesque wrote:
| Pretty cool. Although it's interesting that it can't seem to
| render an image from a precise description that should have
| something like an objectively correct answer. I tried prompts
| like "Middle C engraved on a staff with a treble clef" and "An
| SN74LS173 integrated circuit chip on a breadboard" both of which
| came back with images that were nowhere close to something I'd
| call accurate. I don't mean to detract from the impressiveness of
| this work. But I wanted a sense of how much of a "threat" this
| tech is to jobs or to skills that we normally think of as being
| human. Based on what I'm seeing, I'd say it's still got a ways to
| go before it's going to destroy any jobs. In its current form, it
| mostly seems like a fun way to generate logos or images where the
| exact details of the content don't matter.
| nerdponx wrote:
| I am generally of the "it's not threatening yet and won't be
| for a while" camp, but in this _particular_ case it 's probably
| just for lack of trying. These algorithms are essentially
| enormous pattern-matching engines, so given enough data and
| some task-specific engineering effort, I wouldn't be surprised
| if you could build an "AI" circuit designer, like Copilot but
| for electronics instead of code.
|
| Next-level autorouting would be cool, but it's still not going
| to put the electrical engineering field out of business.
| cercatrova wrote:
| I've been looking forward to this. The license however strikes me
| as too aspirational, and it may be hard to enforce legally:
|
| > You agree not to use the Model or Derivatives of the Model:
|
| > - In any way that violates any applicable national, federal,
| state, local or international law or regulation;
|
| > - For the purpose of exploiting, harming or attempting to
| exploit or harm minors in any way;
|
| > - To generate or disseminate verifiably false information
| and/or content with the purpose of harming others;
|
| > - To generate or disseminate personal identifiable information
| that can be used to harm an individual;
|
| > - To defame, disparage or otherwise harass others;
|
| > - For fully automated decision making that adversely impacts an
| individual's legal rights or otherwise creates or modifies a
| binding, enforceable obligation;
|
| > - For any use intended to or which has the effect of
| discriminating against or harming individuals or groups based on
| online or offline social behavior or known or predicted personal
| or personality characteristics;
|
| > - To exploit any of the vulnerabilities of a specific group of
| persons based on their age, social, physical or mental
| characteristics, in order to materially distort the behavior of a
| person pertaining to that group in a manner that causes or is
| likely to cause that person or another person physical or
| psychological harm;
|
| > - For any use intended to or which has the effect of
| discriminating against individuals or groups based on legally
| protected characteristics or categories;
|
| > - To provide medical advice and medical results interpretation;
|
| > - To generate or disseminate information for the purpose to be
| used for administration of justice, law enforcement, immigration
| or asylum processes, such as predicting an individual will commit
| fraud/crime commitment (e.g. by text profiling, drawing causal
| relationships between assertions made in documents,
| indiscriminate and arbitrarily-targeted use).
|
| How can you prove some of these in a court of law?
| lacker wrote:
| The license does seem impossibly vague and broad. Usually what
| happens when software projects use custom & demanding licenses
| like this is that large companies refuse to allow the software
| to be used because of the legal uncertainty, small companies
| just use it and ignore the licensing constraints, and there are
| never any lawsuits that clarify anything one way or another. If
| that's fine with the authors of the project, they can just
| leave the license vague and unclear forever.
| alexb_ wrote:
| Make sure your art isn't the wrong type of artistic!
| jahewson wrote:
| I dunno. I can imagine any of those points being the subject of
| a civil suit and someone winning damages for, e.g.,
| psychological harm. The parts talking about "effect" instead of
| intent are of questionable enforceability - how can I agree not
| to cause an unanticipated effect on a third party? I cannot.
| But having said that, I can be asked to account for effects
| that a "reasonable person" would anticipate, so there's that.
|
| These are all things that someone could sue over (especially in
| California) and so they want to place the responsibility
| on the artist and not their tools.
| [deleted]
| fluidcruft wrote:
| It seems like the sort of thing you would require so that
| HuggingFace doesn't get roped in as a defendant in lawsuits
| related to things that others do with the code. So if for
| example someone builds something that generates medical advice
| and gets sued for violating FDA requirements or damages or
| whatever then HuggingFace can say that was not something they
| allowed in the first place.
| laverya wrote:
| Yeah it really reads like "we know people are going to do
| nasty things with this; we can't prevent that; please don't
| sue us over it" to me
| hedora wrote:
| They really should have used blanket liability waiver text
| and left it at that.
|
| I'm sure someone will find a way to sue them anyway. It
| doesn't even call out using this to create derivative works
| to avoid paying original authors copyright fees.
|
| On top of that, their logo is an obvious rip off of a Van
| Gogh. It seems clear they're actively encouraging people to
| create similar works that infringe active copyrights. They
| should ask Kim Dotcom how that worked out for him.
| ben_w wrote:
| > On top of that, their logo is an obvious rip off of a
| Van Gogh. It seems clear they're actively encouraging
| people to create similar works that infringe active
| copyrights.
|
| I don't think Van Gogh's works are under copyright any
| more. At least not directly; recent photos of them may be,
| but then it's the photos, not the paintings, that have a
| copyright.
| cercatrova wrote:
| Indeed, it seems like a legal "wink wink, nudge nudge" sort
| of thing. Well, as long as I can run stuff on my own GPU,
| I'm satisfied.
| digitallyfree wrote:
| I'm actually seeing these types of conditions becoming more
| common in software EULAs as well, as a boilerplate add-on to
| the usual copyright notices and legal disclaimers. I don't have
| examples off the top of my head, but I've seen clauses saying
| that the application may not be used to enable violence, for
| discriminatory purposes, and so forth. It really is a CYA sort
| of thing.
| hedora wrote:
| Each of the "for any use ... which has the effect of ..."
| clauses probably bars any wide distribution of the output of
| this model.
|
| Trivially: People have phobias of literally everything.
|
| They ban using it to "exploit" minors; presumably that prevents
| any incorporation of it into any for-profit educational
| curriculum. After all, they do not define "exploit", and
| profiting off of a group without direct consent seems like a
| reasonable interpretation.
|
| I am not a lawyer, but I wouldn't dream of using this for
| commercial use with a license like this. This definitely
| doesn't meet the bar for incorporation into Free or Open Source
| Software.
| nullc wrote:
| That license from top to bottom is distilled "tell me you don't
| know anything about art without telling me you don't know
| anything about art".
|
| Interesting art challenges its audience. But even the most
| boring art will still offend some-- it's the nature of art that
| the viewer brings their own interpretation, and some people
| bring an offensive one.
| digitaLandscape wrote:
| TigeriusKirk wrote:
| I dunno. All I see is necessary ass covering. They're just
| saying anyone who does these objectionable things did so
| against our terms.
| nullc wrote:
| CYA doesn't necessitate creating a cause of action against
| the users for engaging in what otherwise would be a legally
| protected act of free expression. One can disclaim without
| creating liability.
| ambivdexterous wrote:
| /ic/ is having daily meltdowns over this. I don't think the
| internet at large is doing better, because even professional
| concept artists are dialing it in now. Holy hell.
| gjsman-1000 wrote:
| What's /ic/?
| alexb_ wrote:
| https://4channel.org/ic/
| ambivdexterous wrote:
| 4chan's Art Critique board
| kache_ wrote:
| I kept on warning them
|
| They ignored me
|
| Now however..
| howon92 wrote:
| Huge kudos to stability.ai.
| faizshah wrote:
| What's the license on the images produced by this model?
| coolspot wrote:
| CC0 - no one owns the copyright, so everyone is free to use
| them.
| kazinator wrote:
| That is not so; the CC0 explicitly states that patent and
| trademark rights are not waived.
|
| Contrast that with, say, the Two-Clause BSD which says
| "[r]edistribution and use in source and binary forms, with or
| without modification, are permitted provided that the
| following conditions are met [...]".
|
| Since trademark and patent rights are not mentioned, then
| these words mean that even if the purveyor of the software
| holds patents and/or trademarks, your redistribution and use
| are permitted. I.e. it appears that a patent and trademark
| grant is implied if a patent holder puts something under the
| two-clause BSD. Or at least you have a ghost of a chance to
| argue it in court.
|
| Not so with the CC0, which spells out that you don't have
| permission to use any patents and trademarks in the work.
| api wrote:
| "This release is the culmination of many hours of collective
| effort to create a single file that compresses the visual
| information of humanity into a few gigabytes."
|
| If something like this is possible, does this mean there's
| actually far less _meaningful_ information out there than we
| think?
|
| Could you in fact pack virtually all meaningful information ever
| gathered by humanity onto a 1TiB or smaller hard drive? Obviously
| this would be lossy, but how lossy?
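|
| Back-of-the-envelope, using rough numbers (a ~4 GB checkpoint
| distilled from a training set on the order of two billion
| images; both figures are approximations):
|
|       # rough compression arithmetic, not exact figures
|       checkpoint_bytes = 4e9    # ~4 GB model file, approximate
|       training_images = 2e9     # order of magnitude of the LAION subset used
|       print(checkpoint_bytes / training_images)  # ~2.0 bytes per training image
|
| So whatever it retains, it's on the order of a couple of bytes
| per image it ever saw -- extremely lossy.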
| MaxikCZ wrote:
| You can pack virtually all meaningful information ever gathered
| by humanity onto a single bit, but it's gonna be lossy. And what
| is your definition of "meaningful information" anyway? What's
| meaningful tomorrow might not be meaningful today. Nobody cares
| about the spin of each electron in my brain today, but in 4
| centuries my descendants will be like "if only we had that
| information, we could simulate our great-...-great parent
| today".
___________________________________________________________________
(page generated 2022-08-22 23:00 UTC)