| Field | Value |
|---|---|
| Type | Workflows |
| Stats | 6,382 0 |
| Reviews | (306) |
| Published | Jan 27, 2025 |
| Base Model | |
| Hash | AutoV2 7D31F7F6E6 |
**Don't forget to Like 👍 the model. ;)**
**!!! This workflow is obsolete !!!** Some better options:

- **Wan2.1** (best quality but slowest; high VRAM usage for great results, though GGUF options are available): https://civitai.com/models/1300201/wan-ai-img2vid-video-extend
- **Skyreels** (Hunyuan variant; good quality, mid VRAM usage): https://civitai.com/models/1278247/skyreels-hunyuan-img2vid
- **Hunyuan WF** (the fastest one; I don't like the quality so much but I'm still testing. Lowest VRAM usage and a FAST LoRA!): https://civitai.com/models/1328592/hunyuan-wf-img2vid-fast
*Just added a version without auto image resize, due to the high number of people having errors with it. The manual one will work 100%. Sorry about that :)*
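If you go the manual route, note that video models typically want dimensions that are exact multiples of the VAE's spatial stride. A minimal Pillow sketch, assuming a multiple of 16 (check the wrapper's actual requirement; the file names are placeholders):

```python
from PIL import Image

def resize_to_multiple(path, target_w, target_h, multiple=16):
    """Resize an image, snapping both dimensions down to the nearest multiple.

    multiple=16 is an assumption; use whatever HunyuanVideoWrapper expects.
    """
    w = max(multiple, (target_w // multiple) * multiple)
    h = max(multiple, (target_h // multiple) * multiple)
    return Image.open(path).convert("RGB").resize((w, h), Image.LANCZOS)

resize_to_multiple("input.png", 960, 544).save("input_resized.png")
```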
**Error:** If you see "unsupported operand type(s) for //: 'int' and 'NoneType'", the fix is here: https://github.com/kijai/ComfyUI-HunyuanVideoWrapper/issues/269
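For context, that exception is plain Python: floor division where one operand ended up as None, usually because an upstream width/height or scale value was never set. A minimal illustration (the names and default below are hypothetical, not the wrapper's actual code; the real fix is in the linked issue):

```python
width = 1280
downscale = None  # hypothetical: an optional node input that was left unset

# width // downscale  ->  TypeError: unsupported operand type(s) for //: 'int' and 'NoneType'

# Defensive pattern: substitute a default before dividing.
downscale = downscale if downscale is not None else 1  # assumed default
latent_width = width // downscale
print(latent_width)
```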
Straightforward: this is an Image-to-Video workflow using the resources we have today (January 2025) with Hunyuan models. Using the I2V LeapFusion LoRA plus IP2V encoding, it can be very consistent and, in my opinion, as good as an older Kling version in terms of consistency. It's not perfect, but it delivers solid results if used well, especially with videos of humans.
I kept it as simple as possible and didn't include the faceswap node this time, but it's a great addition if you're planning to generate videos of human subjects. VRAM usage depends heavily on the length and dimensions of the video you want to generate, but 12GB of VRAM is ideal for good results (a rough sense of how size scales is sketched below).
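Here is a back-of-envelope latent-size estimate for comparing resolutions and frame counts. The compression factors (8x8 spatial, 4x temporal, 16 channels) are my assumption based on HunyuanVideo's published 3D VAE design, and real VRAM is dominated by model weights and attention over these latent tokens, so treat it only as a relative guide:

```python
def latent_mib(width, height, frames, spatial=8, temporal=4, channels=16):
    """Approximate fp16 size of the video latent tensor in MiB.

    Compression factors are assumptions; adjust if the wrapper differs.
    """
    latent_frames = (frames - 1) // temporal + 1
    elements = channels * latent_frames * (height // spatial) * (width // spatial)
    return elements * 2 / (1024 ** 2)  # 2 bytes per fp16 element

# Longer/bigger videos grow the token count (and attention cost) quickly:
print(latent_mib(960, 544, 97))  # ~6 MiB of latent; attention is the real cost
```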
As always, instructions and links are included in the workflow. Don't forget to update Comfy and the HunyuanVideoWrapper nodes!
That’s it. Leave a like and have fun!