[HN Gopher] LivePortrait: A fast, controllable portrait animatio...
       ___________________________________________________________________
        
       LivePortrait: A fast, controllable portrait animation model
        
       We are excited to announce the release of our video-driven portrait
       animation model! This model can vividly animate a single portrait,
       achieving a generation speed of 12.8ms on an RTX 4090 GPU with
       `torch.compile` from PyTorch. We are actively updating and
       improving this repo!
        
       Related Resources:
       - Homepage: https://liveportrait.github.io
       - Paper: https://arxiv.org/abs/2407.03168
       - Code: https://github.com/KwaiVGI/LivePortrait
       - Jupyter: https://github.com/camenduru/LivePortrait-jupyter
       - ComfyUI: https://github.com/kijai/ComfyUI-LivePortraitKJ and
         https://github.com/shadowcz007/comfyui-liveportrait
        
       We hope you give it a try and enjoy!
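
A back-of-the-envelope way to reproduce a per-frame latency figure like the quoted 12.8 ms is a warmup-then-average timing harness. This is a sketch, not code from the repo; `run_model` is a hypothetical stand-in for the animation forward pass. The warmup matters because `torch.compile` pays its compilation cost on the first calls:

```python
import time

def avg_latency_ms(fn, *args, warmup=10, iters=100):
    """Average wall-clock latency of fn(*args) in milliseconds.

    Warmup iterations are discarded so one-time costs (e.g. the
    torch.compile trace on the first call) don't skew the average.
    """
    for _ in range(warmup):
        fn(*args)
    start = time.perf_counter()
    for _ in range(iters):
        fn(*args)
    return (time.perf_counter() - start) * 1000.0 / iters

# Hypothetical stand-in for the compiled animation forward pass.
def run_model(frame):
    return [x * 0.5 for x in frame]

print(f"{avg_latency_ms(run_model, [0.0] * 1024):.3f} ms/frame")
```

On a real GPU setup one would also call `torch.cuda.synchronize()` before reading the clock, since CUDA kernels launch asynchronously and the Python call can return before the work finishes.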
        
       Author : cleardusk
       Score  : 73 points
       Date   : 2024-07-04 18:02 UTC (3 days ago)
        
 (HTM) web link (github.com)
 (TXT) w3m dump (github.com)
        
       | nestorD wrote:
       | The "generalization to animals" part seems like it opens a lot of
       | interesting avenues!
        
         | homarp wrote:
          | did you manage to make it work? The model can't find my
          | cat's face
        
           | homarp wrote:
            | Never mind. https://github.com/KwaiVGI/LivePortrait/issues/20
            | #issuecomme... explains it needs custom fine-tuning to work
        
       | smusamashah wrote:
       | This is amazing. I can immediately see it being used by
       | StableDiffusion and other generative image communities. It gives
        | life to those lifeless faces, and it doesn't look noticeably
        | odd. Not to my eyes, at least.
       | 
       | Edit: it's definitely being used already
       | https://www.reddit.com/r/StableDiffusion/comments/1dvepjx/li...
        
         | vergessenmir wrote:
          | It will allow for more realistic emotions in current SD model
          | merges and fine-tunes by generating frames correctly labelled
          | with their associated emotions.
          | 
          | Most SD1.x/SDXL model outputs depict humans with the same
          | expression, so the frames generated by LivePortrait will help
          | with training datasets.
         | 
          | I believe the Pixar animators on Toy Story 1 used a facial
          | expression/emotion database called FACS (the Facial Action
          | Coding System) to make the characters more humanly relatable.
         | 
         | It's not clear if the "expressions" will generalise to new
         | faces
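
The dataset idea above can be sketched as a small manifest builder. The file layout, clip names, and caption template here are assumptions, written in the (file_name, caption) shape many SD fine-tuning tools consume:

```python
import json

# Hypothetical driving clips, each tagged with the emotion it performs.
# LivePortrait would render one output frame sequence per (portrait, clip).
DRIVING_CLIPS = {"smile": 24, "surprise": 24, "anger": 24}  # clip -> frame count

def build_manifest(portrait_id, clips=DRIVING_CLIPS):
    """Return JSONL-style records pairing each rendered frame with its
    emotion label, ready to write out as a training metadata file."""
    records = []
    for emotion, n_frames in clips.items():
        for i in range(n_frames):
            records.append({
                "file_name": f"{portrait_id}/{emotion}_{i:04d}.png",
                "caption": f"a photo of a person, {emotion} expression",
            })
    return records

manifest = build_manifest("portrait_001")
print(len(manifest), "labelled frames")
print(json.dumps(manifest[0]))
```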
        
       | column wrote:
       | There's a typo right at the beginning of your paper's page:
       | exsiting
        
         | cleardusk wrote:
         | Fixed! h_h
        
       | 42lux wrote:
       | For everyone wanting to use this commercially be wary of the
       | insightface models licensing...
        
         | cleardusk wrote:
         | https://github.com/deepinsight/insightface?tab=readme-ov-fil...
         | ---------- The code of InsightFace is released under the MIT
         | License. There is no limitation for both academic and
         | commercial usage.
        
           | homarp wrote:
           | that is the code. the weights are non commercial
           | 
           | Both manual-downloading models from our github repo and auto-
           | downloading models with our python-library follow the above
           | license policy(which is for non-commercial research purposes
           | only).
        
             | cleardusk wrote:
              | Understood. The core dependency on InsightFace in
              | LivePortrait is the face detection algo. The face detector
              | can easily be replaced with a self-developed or MIT-
              | licensed model.
        
               | homarp wrote:
               | alt models https://paperswithcode.com/task/face-detection
        
               | cchance wrote:
                | Exactly, just replace it with any segmentation or
                | detection model; FastSAM or a YOLO model can find the
                | face. No reason to be using InsightFace for that.
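
A minimal sketch of the swap being suggested: treat the detector as a plain function from image to face boxes, so InsightFace's detector can be replaced by any permissively licensed model. The stub below is hypothetical and only stands in for real YOLO/FastSAM inference:

```python
from typing import Callable, List, Tuple

# A face detector is just: image -> list of (x1, y1, x2, y2) boxes.
BBox = Tuple[int, int, int, int]
FaceDetector = Callable[[object], List[BBox]]

def crop_largest_face(image_shape: Tuple[int, int], boxes: List[BBox]) -> BBox:
    """Pick the largest detected face and clamp it to the image bounds,
    which is all a downstream animation pipeline needs from a detector."""
    h, w = image_shape
    x1, y1, x2, y2 = max(boxes, key=lambda b: (b[2] - b[0]) * (b[3] - b[1]))
    return (max(0, x1), max(0, y1), min(w, x2), min(h, y2))

# Hypothetical stub standing in for an MIT-licensed detector backend
# (e.g. a YOLO face model); a real backend would run inference here.
def stub_detector(image) -> List[BBox]:
    return [(10, 10, 50, 50), (-5, 0, 200, 300)]

boxes = stub_detector(None)
print(crop_largest_face((256, 192), boxes))
```

Because the pipeline only depends on the `image -> boxes` signature, any detector with a compatible license can be dropped in without touching the animation code.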
        
       | brcmthrowaway wrote:
       | Is this how the Luma dream machine works?
        
         | baobabKoodaa wrote:
         | no
        
           | brcmthrowaway wrote:
           | How does that work
        
       | vessenes wrote:
        | This is remarkably fast: fast as in a quick response to
        | Microsoft's announcement earlier this year, and as in low
        | latency. I love it.
        | 
        | I'd love to see a database of facial expression videos used for
        | some sort of standardized expression-transfer benchmark. Are
        | you aware of one?
        
       | jokethrowaway wrote:
        | - Fast!
        | - Getting some unstable results: the head keeps bobbing up and
        |   down by a few pixels; maybe it needs some stabilization
        | - Single-frame renderings are quite good, a bit cartoony though
        | - No lip syncing
        | - Head rotation is bad; it deforms the head completely
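
The few-pixel head jitter described above is the kind of noise a simple temporal smoother can damp. A sketch (not part of LivePortrait) of exponential moving average smoothing over per-frame pose or keypoint values:

```python
class EMASmoother:
    """Exponential moving average over per-frame pose/keypoint vectors.

    alpha close to 1.0 trusts the new frame; closer to 0.0 smooths more
    (at the cost of lag on fast head motion).
    """
    def __init__(self, alpha: float = 0.3):
        self.alpha = alpha
        self.state = None

    def __call__(self, values):
        if self.state is None:
            self.state = list(values)
        else:
            self.state = [
                self.alpha * v + (1.0 - self.alpha) * s
                for v, s in zip(values, self.state)
            ]
        return list(self.state)

smooth = EMASmoother(alpha=0.3)
# A head y-coordinate jittering around 100 by ~2 px:
for y in [100, 102, 98, 101, 99]:
    print(round(smooth([y])[0], 2))
```

The trade-off is responsiveness: a low alpha flattens the jitter but makes deliberate head turns lag a frame or two behind the driving video.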
        
       ___________________________________________________________________
       (page generated 2024-07-07 23:00 UTC)