# LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control

Jianzhu Guo^1+ · Dingyun Zhang^1,2 · Xiaoqiang Liu^1 · Zhizhou Zhong^1,3 · Yuan Zhang^1 · Pengfei Wan^1 · Di Zhang^1

^1 Kuaishou Technology &nbsp; ^2 University of Science and Technology of China &nbsp; ^3 Fudan University

For more results, visit our homepage: liveportrait.github.io

## Updates

* **2024/07/04**: We released the initial version of the inference code and models. Continuous updates, stay tuned!
* **2024/07/04**: We released the homepage and the technical report on arXiv.
## Introduction

This repo, named **LivePortrait**, contains the official PyTorch implementation of our paper *LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control*. We are actively updating and improving this repository. If you find any bugs or have suggestions, you are welcome to raise issues or submit pull requests (PRs).

## Getting Started

### 1. Clone the code and prepare the environment

```bash
git clone https://github.com/KwaiVGI/LivePortrait
cd LivePortrait

# create env using conda
conda create -n LivePortrait python==3.9.18
conda activate LivePortrait
# install dependencies with pip
pip install -r requirements.txt
```

### 2. Download pretrained weights

Download our pretrained LivePortrait weights and the InsightFace face detection models from Google Drive or Baidu Yun. We have packed all weights into one directory. Unzip and place them in `./pretrained_weights`, ensuring the directory structure is as follows:

```text
pretrained_weights
+-- insightface
|   +-- models
|       +-- buffalo_l
|           +-- 2d106det.onnx
|           +-- det_10g.onnx
+-- liveportrait
    +-- base_models
    |   +-- appearance_feature_extractor.pth
    |   +-- motion_extractor.pth
    |   +-- spade_generator.pth
    |   +-- warping_module.pth
    +-- landmark.onnx
    +-- retargeting_models
        +-- stitching_retargeting_module.pth
```

### 3. Inference

```bash
python inference.py
```

If the script runs successfully, you will get an output mp4 file named `animations/s6--d0_concat.mp4`, which includes the driving video, the input image, and the generated result.

Or, you can change the input by specifying the `-s` and `-d` arguments:

```bash
python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d0.mp4

# or disable pasting back
python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d0.mp4 --no_flag_pasteback

# to see more options
python inference.py -h
```

More interesting results can be found on our homepage. (A minimal batch-processing sketch built on this CLI is included at the end of this readme.)

### 4. Gradio interface

We also provide a Gradio interface for a better experience. Just run:

```bash
python app.py
```

### 5. Inference speed evaluation

We have also provided a script to evaluate the inference speed of each module:

```bash
python speed.py
```

Below are the results of inferring one frame on an RTX 4090 GPU using the native PyTorch framework with `torch.compile`:

| Model                              | Parameters (M) | Model Size (MB) | Inference (ms) |
|------------------------------------|----------------|-----------------|----------------|
| Appearance Feature Extractor       | 0.84           | 3.3             | 0.82           |
| Motion Extractor                   | 28.12          | 108             | 0.84           |
| Spade Generator                    | 55.37          | 212             | 7.59           |
| Warping Module                     | 45.53          | 174             | 5.21           |
| Stitching and Retargeting Modules  | 0.23           | 2.3             | 0.31           |

*Note: the values listed for the Stitching and Retargeting Modules represent the combined parameter counts and the total sequential inference time of three MLP networks.*

## Acknowledgements

We would like to thank the contributors of the FOMM, Open Facevid2vid, SPADE, and InsightFace repositories for their open research and contributions.

## Citation

If you find LivePortrait useful for your research, welcome to star this repo and cite our work using the following BibTeX:

```bibtex
@article{guo2024live,
  title   = {LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control},
  author  = {Jianzhu Guo and Dingyun Zhang and Xiaoqiang Liu and Zhizhou Zhong and Yuan Zhang and Pengfei Wan and Di Zhang},
  year    = {2024},
  journal = {arXiv preprint:2407.03168},
}
```
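## Appendix: batch inference sketch

The CLI shown in step 3 processes one source/driving pair per invocation. Below is a minimal, hedged sketch of how one might batch-process several source images against a single driving video by shelling out to `inference.py`. The paths under `assets/examples/` and the `-s`/`-d` flags come from this readme; everything else (the script name, the `.jpg`-only glob, running from the repo root) is an assumption for illustration, not part of the official tooling.

```python
# batch_inference.py -- illustrative sketch, not part of the official LivePortrait tooling.
# Assumes it is run from the repository root inside the activated `LivePortrait` conda env,
# and that `inference.py` accepts the documented `-s` (source) and `-d` (driving) flags.
import subprocess
import sys
from pathlib import Path

SOURCE_DIR = Path("assets/examples/source")             # example sources shipped with the repo
DRIVING_VIDEO = Path("assets/examples/driving/d0.mp4")  # example driving video from the readme


def main() -> None:
    sources = sorted(SOURCE_DIR.glob("*.jpg"))  # assumption: only jpg sources are of interest
    if not sources:
        sys.exit(f"No source images found under {SOURCE_DIR}")

    for src in sources:
        cmd = [sys.executable, "inference.py", "-s", str(src), "-d", str(DRIVING_VIDEO)]
        print("Running:", " ".join(cmd))
        # Per the readme, each run writes its result under `animations/`
        # (e.g. s6--d0_concat.mp4 for source s6 and driving video d0).
        subprocess.run(cmd, check=True)


if __name__ == "__main__":
    main()
```

Because each invocation reloads the models, this trades throughput for simplicity; for large batches it would be preferable to drive the pipeline in-process, but that depends on internals under `src/` not covered by this readme.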