
ComfyUI image-morphing low-VRAM LCM video animation workflow with poses

Type: Workflows
Published: Mar 2, 2024
Base Model: SD 1.5
Hash: AutoV2 C73292CFCD
Author: kurKtu

With this workflow I was able to create a 1024x1024, 12.5-second, 8 fps video in 19 minutes (and a 768x768 one in 13 minutes) on an RTX 2060 mobile with less than 4 GB of VRAM.

This workflow is intended to create a video that morphs between two IPAdapter image inputs. In addition, OpenPose images can be used to guide the animation.

The workflow iterates through the frames one by one with a batch size of 1, which keeps VRAM usage low.

The workflow creates only the PNG frames, so the actual video has to be assembled with an external tool such as ffmpeg:

ffmpeg -framerate 8 -pattern_type glob -i 'vid4*.png' vid4.webm
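
If ffmpeg is not available, the frames can also be assembled with a short Python script. This is only a minimal sketch using OpenCV, not part of the workflow: the vid4*.png pattern and the 8 fps rate are taken from the ffmpeg command above, while the mp4v codec and the output filename are assumptions.

    import cv2
    import glob

    # Collect the frames in lexical order (zero-padded names keep the correct sequence)
    frames = sorted(glob.glob("vid4*.png"))

    # Use the first frame to determine the video resolution
    height, width = cv2.imread(frames[0]).shape[:2]

    # 8 fps to match the ffmpeg command above; mp4 output instead of webm
    writer = cv2.VideoWriter("vid4.mp4", cv2.VideoWriter_fourcc(*"mp4v"), 8, (width, height))
    for frame in frames:
        writer.write(cv2.imread(frame))
    writer.release()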

Version 2 remarks:

  • The expression that feeds the Save Image node's filename_prefix should be changed to "a + b.zfill(5)" so that the frame filenames sort correctly (see the sketch after this list).

  • Here are some poses used for generating my example video: https://civitai.com/models/329183 (filenames and paths need to be adjusted)
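
To illustrate the zfill remark: filename globs (as in the ffmpeg command above) and Python's sorted() order names lexically, so unpadded frame numbers put frame 10 before frame 2. A minimal sketch, assuming "a" is the filename prefix and "b" is the frame counter as a string; the exact prefix and separator are placeholders:

    # Unpadded frame numbers sort in the wrong order
    print(sorted(["vid4_1.png", "vid4_2.png", "vid4_10.png"]))
    # -> ['vid4_1.png', 'vid4_10.png', 'vid4_2.png']

    # With b.zfill(5) the lexical order matches the frame order
    a, b = "vid4_", "10"
    print(a + b.zfill(5))  # -> vid4_00010
    print(sorted(["vid4_00001.png", "vid4_00002.png", "vid4_00010.png"]))
    # -> ['vid4_00001.png', 'vid4_00002.png', 'vid4_00010.png']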

Version 1 remarks:

I am using the following OpenPose images in the workflow, but it can be adapted to other poses as well:

https://civitai.com/models/162947/open-pose-dwpose-running-animation-figures

(path needs to be adapted)

The workflow currently uses SVD as the CLIP_VISION model, but other CLIP_VISION models/loaders could be used instead.

Version 2 changelog:

  • Improved layout with groups

  • Normal CLIP_VISION loader instead of SVD

  • Pose frame number saved in the filename

  • Depth ControlNet

  • Image-morphing bypass and ControlNet bypasses

  • IPAdapter strength control

Version 2.5 (in progress, probably to be released in June):

  • Option for more consistency between generated frames

  • Replacement of the hacky jobiterator nodes with either a manual queue or loop nodes from the Impact Pack.