Edit March 19:
I2V workflows now include multi-LoRA support
Added Skip Layer Guidance nodes
Slightly modified the workflows to allow an easy swap between non-tiled and tiled VAE Decode
Edit March 11:
All workflows now include LoRA support.
All workflows now support TeaCache and SageAttention
All model downloads now use huggingface-cli, which enables faster downloads; I was able to get a template running in 5 minutes with the full I2V and T2V model downloads!
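For anyone scripting their own downloads, here is a minimal sketch using the huggingface_hub Python library (the same backend that huggingface-cli wraps). The repo_id, filename, and target directory below are illustrative assumptions, not necessarily the exact files the template pulls:

from huggingface_hub import hf_hub_download

# Illustrative sketch only: repo_id and filename are assumptions,
# not necessarily the exact files the RunPod template downloads.
model_path = hf_hub_download(
    repo_id="Comfy-Org/Wan_2.1_ComfyUI_repackaged",  # hypothetical repo
    filename="split_files/diffusion_models/wan2.1_t2v_14B_fp8_e4m3fn.safetensors",  # hypothetical file
    local_dir="/workspace/ComfyUI/models/diffusion_models",  # assumed ComfyUI models path
)
print(f"Model saved to {model_path}")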
Edit March 8:
Pushed some code changes that turn on video preview by default; videos will show as they generate so you can detect bad movement and abort.
Added automatic LoRA downloading using my CivitAI downloader
Learn how to use it here: https://civitai.com/articles/12333
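If you want to script LoRA downloads yourself instead of using the template's downloader, a rough sketch of the idea using CivitAI's public download endpoint is below; the version ID, token variable, and output path are placeholders you need to fill in:

import os
import requests

# Sketch: download a LoRA from CivitAI by model version ID.
# CIVITAI_TOKEN, the version ID, and the output path are placeholders.
version_id = 123456  # replace with the model version ID of the LoRA you want
token = os.environ["CIVITAI_TOKEN"]

url = f"https://civitai.com/api/download/models/{version_id}"
resp = requests.get(url, headers={"Authorization": f"Bearer {token}"}, stream=True)
resp.raise_for_status()

out_path = "/workspace/ComfyUI/models/loras/my_lora.safetensors"  # placeholder path
with open(out_path, "wb") as f:
    for chunk in resp.iter_content(chunk_size=1 << 20):
        f.write(chunk)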
Edit March 3:
Updated the non-native workflows to support TeaCache – video generation should now be faster!
The workflows that support TeaCache are:
Wan_Video_Image2Video-Upscaling_FrameInterpolation
Wan_Video_Text2Video-Upscaling_FrameInterpolation
Wan_Video_Video2Video-Upscaling_FrameInterpolation
Also fixed the Video2Video workflow.
This video covers deploying a RunPod template that provides a complete local video generation package with Wan14B and ComfyUI, with workflows included for txt2vid, img2vid, and vid2vid.
Deploy the template here:
https://runpod.io/console/deploy?template=758dsjwiqz&ref=uyjfcrgy
Remember to change the environment variables to True to download the models
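As a rough illustration of how a startup script can gate downloads on such a variable (the name download_models below is a stand-in, check the actual variable names in the RunPod template's environment section):

import os

# Hypothetical variable name: the real template defines its own; set them to True in the RunPod UI.
if os.environ.get("download_models", "false").lower() == "true":
    print("Downloading Wan models...")
    # run the huggingface-cli / huggingface_hub download step here
else:
    print("Skipping model downloads.")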
For those of you who just want the workflows:
i2v: https://civitai.com/models/1297230/wan-video-i2v-upscaling-and-frame-interpolation
t2v: https://civitai.com/models/1295981?modelVersionId=1462638
v2v: https://civitai.com/models/1318132/wan-video-v2v-upscaling-and-frame-interpolation