Wan 2.2 Animate Model Download and Setup (ComfyUI)
V2 confirmed working on 12 GB VRAM with quants up to Q8, 25 frames per loop.
V2 confirmed working on 4 GB VRAM with quants up to Q4, 25 frames per loop.
To use this workflow in ComfyUI, download the models listed below and place them in the specified folders.
Make sure folder names and file names match exactly as shown to prevent load errors.
Main Diffusion Model (GGUF)
Model: Wan2.2-Animate-14B-GGUF
Download:
https://huggingface.co/QuantStack/Wan2.2-Animate-14B-GGUF
Put it here:
ComfyUI/models/diffusion_models/
Note:
This model is quantized in GGUF format. Choose the version that fits your GPU VRAM:
Q4_K_M → about 10–12 GB VRAM (balanced)
Q5_K_S → about 14–16 GB VRAM (recommended for mid-range GPUs)
Q6_K → about 20 GB or more VRAM (highest quality)
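The VRAM guidance above can be sketched as a small helper. This is a hypothetical convenience function, not part of the workflow; the thresholds simply mirror the rough figures listed in this guide.

```python
# Hypothetical helper mapping available VRAM to the GGUF quant levels listed
# above. Thresholds follow this guide's rough guidance, not official figures.

def recommend_quant(vram_gb: float) -> str:
    """Suggest a Wan2.2-Animate GGUF quant for the given VRAM (in GB)."""
    if vram_gb >= 20:
        return "Q6_K"    # highest quality
    if vram_gb >= 14:
        return "Q5_K_S"  # recommended for mid-range GPUs
    if vram_gb >= 10:
        return "Q4_K_M"  # balanced
    return "Q4_K_M or lower (expect offloading to system RAM)"

print(recommend_quant(16))  # Q5_K_S
```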
LoRAs
lightx2v I2V (animation motion LoRA)
Download:
https://huggingface.co/Kijai/WanVideo_comfy/resolve/main/Lightx2v/lightx2v_I2V_14B_480p_cfg_step_distill_rank64_bf16.safetensors
Put it here:
ComfyUI/models/loras/
WanAnimate relight LoRA (lighting and realism enhancer)
Download:
https://huggingface.co/Kijai/WanVideo_comfy/resolve/main/LoRAs/Wan22_relight/WanAnimate_relight_lora_fp16.safetensors
Put it here:
ComfyUI/models/loras/
Text Encoder
umt5_xxl_fp8_e4m3fn_scaled.safetensors
Download:
https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors
Put it here:
ComfyUI/models/text_encoders/
CLIP Vision Encoder
clip_vision_h.safetensors
Download:
https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/clip_vision/clip_vision_h.safetensors
Put it here:
ComfyUI/models/clip_vision/
VAE
wan_2.1_vae.safetensors
Download:
https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/vae/wan_2.1_vae.safetensors
Put it here:
ComfyUI/models/vae/
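Since mismatched folder or file names are the most common cause of load errors, a quick pre-flight check can save a failed run. This is a sketch, not part of the workflow: the folder names use the standard ComfyUI layout (including clip_vision), the file names are the ones listed above, and the GGUF diffusion model is skipped because its file name depends on which quant you downloaded.

```python
# Sketch of a pre-flight check for the model files listed above.
# Folders are relative to the ComfyUI root; adjust if your install differs.
# The GGUF diffusion model is not checked (its name varies by quant).
from pathlib import Path

REQUIRED_FILES = {
    "models/loras": [
        "lightx2v_I2V_14B_480p_cfg_step_distill_rank64_bf16.safetensors",
        "WanAnimate_relight_lora_fp16.safetensors",
    ],
    "models/text_encoders": ["umt5_xxl_fp8_e4m3fn_scaled.safetensors"],
    "models/clip_vision": ["clip_vision_h.safetensors"],
    "models/vae": ["wan_2.1_vae.safetensors"],
}

def missing_files(comfy_root: str) -> list[str]:
    """Return relative paths of required model files that are not present."""
    root = Path(comfy_root)
    return [
        f"{folder}/{name}"
        for folder, names in REQUIRED_FILES.items()
        for name in names
        if not (root / folder / name).is_file()
    ]
```

Run `missing_files("path/to/ComfyUI")` before loading the workflow; an empty list means everything is in place.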
Required Custom Nodes
Install these custom nodes either through ComfyUI Manager or by cloning them manually into the folder:
ComfyUI/custom_nodes/
comfyui_controlnet_aux
https://github.com/Fannovel16/comfyui_controlnet_aux
ComfyUI-KJNodes
https://github.com/kijai/ComfyUI-KJNodes
ComfyUI-segment-anything-2
https://github.com/kijai/ComfyUI-segment-anything-2
IAMCCS-nodes (only v1)
https://github.com/IAMCCS/IAMCCS-nodes
ComfyUI-VideoHelperSuite
https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite
Execution Inversion Demo (looping mechanism)
https://github.com/BadCafeCode/execution-inversion-demo-comfyui
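For the manual route, the clone commands for the repos above can be generated and reviewed before running them. A minimal sketch (the commands are only printed, not executed; ComfyUI Manager remains the easier option):

```python
# Build the `git clone` commands for the custom nodes listed above,
# targeting ComfyUI/custom_nodes/. Printed for review, not executed.

REPOS = [
    "https://github.com/Fannovel16/comfyui_controlnet_aux",
    "https://github.com/kijai/ComfyUI-KJNodes",
    "https://github.com/kijai/ComfyUI-segment-anything-2",
    "https://github.com/IAMCCS/IAMCCS-nodes",
    "https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite",
    "https://github.com/BadCafeCode/execution-inversion-demo-comfyui",
]

def clone_commands(custom_nodes_dir: str = "ComfyUI/custom_nodes") -> list[str]:
    """Return one `git clone` command per required custom-node repo."""
    return [
        f"git clone {url} {custom_nodes_dir}/{url.rsplit('/', 1)[-1]}"
        for url in REPOS
    ]

for cmd in clone_commands():
    print(cmd)
```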
Quick Start
Load this workflow in ComfyUI.
Upload your reference image and input video.
Adjust the positive and negative prompts.
Make sure the green and red points are set up properly in the detection subgraph.
Make sure the width and height values are multiples of 16.
Run the workflow and your final animation will be saved automatically.
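The multiple-of-16 rule from the steps above can be checked with a one-liner. A hypothetical helper (not part of the workflow) that snaps a target resolution to the nearest valid values:

```python
# Snap a target resolution to the nearest multiples of 16, as required
# by the workflow's width/height inputs. Illustrative helper only.

def snap_to_16(width: int, height: int) -> tuple[int, int]:
    """Round each dimension to the nearest multiple of 16 (minimum 16)."""
    snap = lambda v: max(16, round(v / 16) * 16)
    return snap(width), snap(height)

print(snap_to_16(1080, 1920))  # (1088, 1920)
```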




