Hunyuan Video (Safetensors) - Comfy Native/Kijai

Updated: Mar 4, 2025
Tags: base model, video, hunyuan, tencent
Format: SafeTensor
Type: Checkpoint Trained
Published: Feb 1, 2025
Base Model: Hunyuan Video
Training: 11,111 steps, 1,111 epochs
Hash (AutoV2): EC3DB42529
Creator: Felldude (SDXL Training Contest Participant)
Tencent Hunyuan is licensed under the Tencent Hunyuan Community License Agreement, Copyright © 2024 Tencent. All Rights Reserved. The trademark rights of “Tencent Hunyuan” are owned by Tencent or its affiliate.
Powered by Tencent Hunyuan

Hunyuan Video

Note: Models marked "Kijai" include the full vision model and blocks; they only work with the Kijai nodes.

Use the Comfy Native models with ComfyUI's native nodes.
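Because the Kijai builds carry extra vision blocks, the two variants can be told apart by the tensor key names inside the file. A minimal heuristic sketch (the `vision` key prefix here is an illustrative assumption, not a documented key name):

```python
def classify_checkpoint(keys):
    """Guess whether a Hunyuan Video checkpoint is a Kijai build or a
    Comfy-native build from its tensor key names.
    Heuristic only: the 'vision' prefix is assumed, not documented."""
    has_vision_blocks = any(k.startswith("vision") for k in keys)
    return "kijai" if has_vision_blocks else "comfy-native"

# The key list can be read without loading any weights, e.g. via
# safetensors.safe_open(path, framework="pt").keys()
```

Checking this before queueing a render is cheaper than discovering the mismatch from a rainbow or black output.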

  • Converted to safetensors.

  • Comfy Native node users: do not use the Kijai text encoder (TE); use the Scaled version.

  • For CLIP-L, they recommend using the full vision model. I have merged Zer0Int and SimV4 into the vision model.

  • If you have low VRAM but plenty of system RAM, you can use the FP32 CLIP and VAE (use tiled VAE at 128 px).

  • GBlue has posted a working FP8 workflow using native Comfy nodes on a 12 GB card. (It will also work on an 8 GB card if the video latent size is around 320 px.)
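The tiled-VAE tip above trades speed for memory by decoding the latent in overlapping tiles rather than in one pass. A sketch of the tiling arithmetic, assuming the 128 px tile from the note and a hypothetical 16 px overlap:

```python
def tile_positions(size, tile=128, overlap=16):
    """Return the starting offsets of overlapping tiles covering one axis.
    The 128 px tile matches the note above; the 16 px overlap is an
    assumed value, chosen only to illustrate the scheme."""
    if size <= tile:
        return [0]
    stride = tile - overlap
    positions = list(range(0, size - tile + 1, stride))
    if positions[-1] + tile < size:  # final tile flush with the edge
        positions.append(size - tile)
    return positions

# A 320 px axis decoded with 128 px tiles needs tiles starting at 0, 112, and 192:
print(tile_positions(320))  # [0, 112, 192]
```

Peak VRAM then scales with the tile size rather than the full frame, which is why the tiled decode fits where a whole-frame FP32 decode would not.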


Using the Kijai-marked models with Comfy native nodes will cause rainbow or black output.

I have posted an FP8 VAE that works with Comfy Native, but it may take longer than BF16 or even FP32.