Published | Feb 1, 2025 |
Training | Steps: 11,111 Epochs: 1,111 |
Hash | AutoV2 EC3DB42529 |
Hunyuan Video
Note: Models marked "Kijai" include the full vision model and blocks; they only work with the Kijai nodes.
Use the Comfy Native models with ComfyUI's native nodes.
Converted to safetensors.
Comfy Native node users: do not use the Kijai text encoder (TE); use the Scaled version.
For CLIP-L they recommend using the full vision model; I have merged Zer0Int and SimV4 into the vision model.
If you have low VRAM but high system RAM, it is possible to use the FP32 CLIP and VAE (USE TILED VAE at 128px).
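As a rough illustration of why the 128px tiled VAE helps on low VRAM: the decoder only ever holds one tile in memory instead of the whole frame. A minimal, hypothetical sketch of the tiling math (not ComfyUI's actual implementation; the overlap value is an illustrative choice):

```python
def tile_spans(size, tile=128, overlap=16):
    """Return (start, end) pixel spans that cover `size` pixels with
    `tile`-px windows overlapping by `overlap` px so seams can be blended."""
    if size <= tile:
        return [(0, size)]
    step = tile - overlap
    starts = list(range(0, size - tile + 1, step))
    if starts[-1] + tile < size:  # make sure the right edge is covered
        starts.append(size - tile)
    return [(s, s + tile) for s in starts]

# A 512-px dimension decodes as five overlapping 128-px tiles instead of
# one full-frame pass, so peak VAE memory scales with the tile, not the frame.
spans = tile_spans(512)
print(spans)
```

The same spans are computed per axis, so a 512x512 frame becomes a 5x5 grid of 128px tiles blended together in the overlap regions.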
GBlue has posted a working FP8 workflow using native Comfy nodes on a 12GB card. (It will also work on an 8GB card if the video latent size is around 320px.)
Using the Kijai-marked models with Comfy native nodes will cause rainbow or black output.
I have posted an FP8 VAE that works with Comfy Native, but it may take more time than BF16 or even FP32.