https://github.com/deep-floyd/IF
# DeepFloyd IF

by DeepFloyd, StabilityAI

We introduce DeepFloyd IF, a novel state-of-the-art open-source text-to-image model with a high degree of photorealism and language understanding. DeepFloyd IF is modular, composed of a frozen text encoder and three cascaded pixel diffusion modules: a base model that generates a 64x64 px image from a text prompt, and two super-resolution models, each designed to generate images of increasing resolution: 256x256 px and 1024x1024 px. All stages of the model utilize a frozen text encoder based on the T5 transformer to extract text embeddings, which are then fed into a UNet architecture enhanced with cross-attention and attention pooling. The result is a highly efficient model that outperforms current state-of-the-art models, achieving a zero-shot FID score of 6.66 on the COCO dataset. Our work underscores the potential of larger UNet architectures in the first stage of cascaded diffusion models and depicts a promising future for text-to-image synthesis.

*[deepfloyd_if_scheme: architecture diagram]*

Inspired by *Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding*.

## Minimum requirements to use all IF models:

* 16GB vRAM for IF-I-XL (4.3B text to 64x64 base module) & IF-II-L (1.2B to 256x256 upscaler module)
* 24GB vRAM for IF-I-XL (4.3B text to 64x64 base module) & IF-II-L (1.2B to 256x256 upscaler module) & Stable x4 (to 1024x1024 upscaler)
* `xformers` installed and the environment variable `FORCE_MEM_EFFICIENT_ATTN=1` set

## Quick Start

Open In Colab | Hugging Face Spaces

```bash
pip install deepfloyd_if==1.0.0
pip install xformers==0.0.16
pip install git+https://github.com/openai/CLIP.git --no-deps
```

## Local notebook and UI demo

The Dream, Style Transfer, Super Resolution, and Inpainting modes are available in a Jupyter Notebook at `IF/notebooks/pipes-DeepFloyd-IF.ipynb`.

## Integration with Diffusers

IF is also integrated with the Hugging Face Diffusers library. Diffusers runs each stage individually, allowing the user to customize the image generation process and easily inspect intermediate results.

### Example

Before you can use IF, you need to accept its usage conditions. To do so:

1. Make sure to have a Hugging Face account and be logged in.
2. Accept the license on the model card of DeepFloyd/IF-I-IF-v1.0.
3. Make sure to log in locally.
Install `huggingface_hub`:

```bash
pip install huggingface_hub --upgrade
```

run the login function in a Python shell:

```python
from huggingface_hub import login

login()
```

and enter your Hugging Face Hub access token.

Next we install `diffusers` and dependencies:

```bash
pip install diffusers accelerate transformers safetensors
```

We can now run the model locally. By default, Diffusers makes use of model CPU offloading to run the whole IF pipeline with as little as 14 GB of VRAM. If you are using `torch>=2.0.0`, remove all `enable_xformers_memory_efficient_attention()` calls.

```python
from diffusers import DiffusionPipeline
from diffusers.utils import pt_to_pil
import torch

# stage 1
stage_1 = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-IF-v1.0", variant="fp16", torch_dtype=torch.float16)
stage_1.enable_xformers_memory_efficient_attention()  # remove line if torch.__version__ >= 2.0.0
stage_1.enable_model_cpu_offload()

# stage 2
stage_2 = DiffusionPipeline.from_pretrained(
    "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16
)
stage_2.enable_xformers_memory_efficient_attention()  # remove line if torch.__version__ >= 2.0.0
stage_2.enable_model_cpu_offload()

# stage 3
safety_modules = {"feature_extractor": stage_1.feature_extractor, "safety_checker": stage_1.safety_checker, "watermarker": stage_1.watermarker}
stage_3 = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-x4-upscaler", **safety_modules, torch_dtype=torch.float16)
stage_3.enable_xformers_memory_efficient_attention()  # remove line if torch.__version__ >= 2.0.0
stage_3.enable_model_cpu_offload()

prompt = 'a photo of a kangaroo wearing an orange hoodie and blue sunglasses standing in front of the eiffel tower holding a sign that says "very deep learning"'

# text embeds
prompt_embeds, negative_embeds = stage_1.encode_prompt(prompt)

generator = torch.manual_seed(0)

# stage 1
image = stage_1(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, generator=generator, output_type="pt").images
pt_to_pil(image)[0].save("./if_stage_I.png")

# stage 2
image = stage_2(
    image=image, prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, generator=generator, output_type="pt"
).images
pt_to_pil(image)[0].save("./if_stage_II.png")

# stage 3
image = stage_3(prompt=prompt, image=image, generator=generator, noise_level=100).images
image[0].save("./if_stage_III.png")
```

There are multiple ways to speed up inference and lower the memory consumption even more with Diffusers. To do so, please have a look at the Diffusers docs:

* Optimizing for inference time
* Optimizing for low memory during inference

For more detailed information about how to use IF, please have a look at the IF blog post.

## Run the code locally

### Loading the models into VRAM

```python
from deepfloyd_if.modules import IFStageI, IFStageII, StableStageIII
from deepfloyd_if.modules.t5 import T5Embedder

device = 'cuda:0'
if_I = IFStageI('IF-I-IF-v1.0', device=device)
if_II = IFStageII('IF-II-L-v1.0', device=device)
if_III = StableStageIII('stable-diffusion-x4-upscaler', device=device)
t5 = T5Embedder(device="cpu")
```
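If you rely on `xformers` for memory-efficient attention (see the minimum requirements above), the `FORCE_MEM_EFFICIENT_ATTN=1` environment variable needs to be set. A minimal sketch, assuming the variable is read when `deepfloyd_if` is imported rather than at call time:

```python
import os

# Assumption: FORCE_MEM_EFFICIENT_ATTN is read when deepfloyd_if is imported,
# so set it before the deepfloyd_if imports shown above
# (or export it in the shell before starting Python).
os.environ["FORCE_MEM_EFFICIENT_ATTN"] = "1"
```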
### I. Dream

Dream is the text-to-image mode of the IF model.

```python
from deepfloyd_if.pipelines import dream

prompt = 'ultra close-up color photo portrait of rainbow owl with deer horns in the woods'
count = 4

result = dream(
    t5=t5, if_I=if_I, if_II=if_II, if_III=if_III,
    prompt=[prompt]*count,
    seed=42,
    if_I_kwargs={
        "guidance_scale": 7.0,
        "sample_timestep_respacing": "smart100",
    },
    if_II_kwargs={
        "guidance_scale": 4.0,
        "sample_timestep_respacing": "smart50",
    },
    if_III_kwargs={
        "guidance_scale": 9.0,
        "noise_level": 20,
        "sample_timestep_respacing": "75",
    },
)
if_III.show(result['III'], size=14)
```

### II. Zero-shot Image-to-Image Translation

In Style Transfer mode, the output of your prompt comes out in the style of the `support_pil_img`.

```python
from deepfloyd_if.pipelines import style_transfer

result = style_transfer(
    t5=t5, if_I=if_I, if_II=if_II,
    support_pil_img=raw_pil_image,
    style_prompt=[
        'in style of professional origami',
        'in style of oil art, Tate modern',
        'in style of plastic building bricks',
        'in style of classic anime from 1990',
    ],
    seed=42,
    if_I_kwargs={
        "guidance_scale": 10.0,
        "sample_timestep_respacing": "10,10,10,10,10,10,10,10,0,0",
        'support_noise_less_qsample_steps': 5,
    },
    if_II_kwargs={
        "guidance_scale": 4.0,
        "sample_timestep_respacing": 'smart50',
        "support_noise_less_qsample_steps": 5,
    },
)
if_I.show(result['II'], 1, 20)
```

### III. Super Resolution

For super-resolution, users can run IF-II and IF-III on an image that was not necessarily generated by IF.

96px --> 1024px (two cascades):

```python
from deepfloyd_if.pipelines import super_resolution

middle_res = super_resolution(
    t5,
    if_III=if_II,
    prompt=['face of beautiful woman, makeup, detailed picture, 4k dslr, best quality'],
    support_pil_img=raw_pil_image,
    img_scale=4.0,
    img_size=96,
    if_III_kwargs={
        'sample_timestep_respacing': 'smart100',
        'aug_level': 0.25,
        'guidance_scale': 4.0,
    },
)
high_res = super_resolution(
    t5,
    if_III=if_III,
    prompt=[''],
    support_pil_img=middle_res['III'][0],
    img_scale=1024/384,
    img_size=384,
    if_III_kwargs={
        "guidance_scale": 9.0,
        "noise_level": 20,
        "sample_timestep_respacing": "75",
    },
)
show_superres(raw_pil_image, high_res['III'][0])
```

384px --> 1024px with aspect-ratio:

```python
from deepfloyd_if.pipelines import super_resolution

_res = super_resolution(
    t5,
    if_III=if_III,
    prompt=['cat, detailed picture, 4k dslr'],
    support_pil_img=raw_pil_image,
    img_scale=1024/384,
    img_size=384,
    if_III_kwargs={
        "guidance_scale": 9.0,
        "noise_level": 20,
        "sample_timestep_respacing": "75",
    },
)
show_superres(raw_pil_image, _res['III'][0])
```
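The style-transfer and super-resolution examples above pass a `raw_pil_image` that is assumed to be an already loaded PIL image (and `show_superres` is assumed to be a display helper, e.g. from the bundled notebook). A minimal sketch for preparing the input image; the file name is hypothetical:

```python
from PIL import Image

# Load a local image and convert it to RGB mode.
# "photo.jpg" is a placeholder for your own input file.
raw_pil_image = Image.open("photo.jpg").convert("RGB")
```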
### IV. Zero-shot Inpainting

```python
from deepfloyd_if.pipelines import inpainting

result = inpainting(
    t5=t5, if_I=if_I, if_II=if_II, if_III=if_III,
    support_pil_img=raw_pil_image,
    inpainting_mask=inpainting_mask,
    prompt=[
        'oil art, a man in a hat',
    ],
    seed=42,
    if_I_kwargs={
        "guidance_scale": 7.0,
        "sample_timestep_respacing": "10,10,10,10,10,0,0,0,0,0",
        'support_noise_less_qsample_steps': 0,
    },
    if_II_kwargs={
        "guidance_scale": 4.0,
        'aug_level': 0.0,
        "sample_timestep_respacing": '100',
    },
    if_III_kwargs={
        "guidance_scale": 9.0,
        "noise_level": 20,
        "sample_timestep_respacing": "75",
    },
)
if_I.show(result['I'], 2, 3)
if_I.show(result['II'], 2, 6)
if_I.show(result['III'], 2, 14)
```

## Model Zoo

The links to download the weights, as well as the model cards, will be available soon for each model of the model zoo.

### Original

| Name | Cascade | Params | FID | Batch size | Steps |
| --- | --- | --- | --- | --- | --- |
| IF-I-M | I | 400M | 8.86 | 3072 | 2.5M |
| IF-I-L | I | 900M | 8.06 | 3200 | 3.0M |
| IF-I-XL* | I | 4.3B | 6.66 | 3072 | 2.42M |
| IF-II-M | II | 450M | - | 1536 | 2.5M |
| IF-II-L* | II | 1.2B | - | 1536 | 2.5M |
| IF-III-L* (soon) | III | 700M | - | 3072 | 1.25M |

*best modules

## Quantitative Evaluation

FID = 6.66

## License

The code in this repository is released under a bespoke license (see the added point two). The weights will be available soon via the DeepFloyd organization at Hugging Face and have their own LICENSE.

## Limitations and Biases

The models available in this codebase have known limitations and biases. Please refer to the model card for more information.

## DeepFloyd IF creators

* Alex Shonenkov
* Misha Konstantinov
* Daria Bakshandaeva
* Christoph Schuhmann
* Ksenia Ivanova
* Nadiia Klokova

## Research Paper (Soon)

## Acknowledgements

Special thanks to StabilityAI and its CEO Emad Mostaque for invaluable support, providing GPU compute and infrastructure to train the models (our gratitude goes to Richard Vencu); thanks to LAION and Christoph Schuhmann in particular for their contribution to the project and well-prepared datasets; thanks to the Hugging Face teams for optimizing the models' speed and memory consumption during inference, creating demos and giving cool advice!

## External Contributors

* The biggest thanks to @Apolinario, for ideas, consultations, help and support on all stages to make IF available in open source; for writing a lot of documentation and instructions; for creating a friendly atmosphere in difficult moments;
* Thanks, @patrickvonplaten, for improving the loading time of UNet models by 80% and for integrating Stable-Diffusion-x4 as a native pipeline;
* Thanks, @williamberman and @patrickvonplaten, for the diffusers integration;
* Thanks, @hysts and @Apolinario, for creating the best gradio demo with IF;
* Thanks, @Dango233, for adapting IF to xformers memory-efficient attention;