Wan2.2 Image-To-Video
---
🔗 Workflow Online Test Link
🔗 https://www.runninghub.ai/post/1956812813727621122/?inviteCode=rh-v1171
First click the link to claim 1100 RH Coins (new users only)
Uses of the coins:
1. Render workflows on an RTX 4090 for free for 2 hours
2. Generate approximately 20 videos (resolution: 1280×720)
---
📦 WAN Models to Download
---
🔴 Main WAN Model
Wan2_2-I2V-A14B-HIGH_fp8_e4m3fn_scaled_KJ.safetensors
🔗 [Download Link](https://huggingface.co/Kijai/WanVideo_comfy_fp8_scaled/resolve/main/I2V/Wan2_2-I2V-A14B-HIGH_fp8_e4m3fn_scaled_KJ.safetensors?download=true)
🗂️ Place in: ComfyUI/models/diffusion_models
AND
Wan2_2-I2V-A14B-LOW_fp8_e4m3fn_scaled_KJ.safetensors
🔗 [Download Link](https://huggingface.co/Kijai/WanVideo_comfy_fp8_scaled/resolve/main/I2V/Wan2_2-I2V-A14B-LOW_fp8_e4m3fn_scaled_KJ.safetensors?download=true)
🗂️ Place in: ComfyUI/models/diffusion_models
---
🟣 WAN2.2-Lightning
Wan2.2-I2V-A14B-4steps-lora-rank64-Seko-V1-high_noise_model.safetensors
🔗 [Download Link](https://huggingface.co/lightx2v/Wan2.2-Lightning/resolve/main/Wan2.2-I2V-A14B-4steps-lora-rank64-Seko-V1/high_noise_model.safetensors?download=true)
AND
Wan2.2-I2V-A14B-4steps-lora-rank64-Seko-V1-low_noise_model.safetensors
🔗 [Download Link](https://huggingface.co/lightx2v/Wan2.2-Lightning/resolve/main/Wan2.2-I2V-A14B-4steps-lora-rank64-Seko-V1/low_noise_model.safetensors?download=true)
🗂️ Place in: ComfyUI/models/loras
---
🟣 WAN VAE
Wan2_1_VAE_bf16.safetensors
🔗 [Download Link](https://huggingface.co/Kijai/WanVideo_comfy/resolve/main/Wan2_1_VAE_bf16.safetensors?download=true)
OR
wan2_1_vae_fp8.safetensors
🔗 [Download Link](https://huggingface.co/calcuis/wan-gguf/resolve/2a7520ea1e79d3f1f3f454a938613235633e1cba/wan2_1_vae_fp8.safetensors)
🗂️ Place in: ComfyUI/models/vae
---
🟣 WAN Text Encoder
umt5-xxl-enc-bf16.safetensors
🔗 [Download Link](https://huggingface.co/Kijai/WanVideo_comfy/resolve/main/umt5-xxl-enc-bf16.safetensors)
OR
umt5-xxl-enc-fp8_e4m3fn.safetensors
🔗 [Download Link](https://huggingface.co/Kijai/WanVideo_comfy/resolve/main/umt5-xxl-enc-fp8_e4m3fn.safetensors)
🗂️ Place in: ComfyUI/models/text_encoders
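If you prefer to script the downloads above, the file-to-folder mapping can be sketched as a short shell script. This is a minimal sketch, not an official installer: it assumes `curl` is installed, that `COMFYUI_DIR` points at your ComfyUI root, and it picks the bf16 VAE and text encoder variants (swap in the fp8 links if you prefer). By default it only prints a dry-run plan.

```shell
#!/usr/bin/env bash
# Sketch: map each model file above to its ComfyUI folder.
# COMFYUI_DIR is an assumption -- adjust it to your install.
set -eu

COMFYUI_DIR="${COMFYUI_DIR:-$HOME/ComfyUI}"

# Emit "URL destination-folder" pairs matching the download lists above.
plan_downloads() {
  cat <<EOF
https://huggingface.co/Kijai/WanVideo_comfy_fp8_scaled/resolve/main/I2V/Wan2_2-I2V-A14B-HIGH_fp8_e4m3fn_scaled_KJ.safetensors $COMFYUI_DIR/models/diffusion_models
https://huggingface.co/Kijai/WanVideo_comfy_fp8_scaled/resolve/main/I2V/Wan2_2-I2V-A14B-LOW_fp8_e4m3fn_scaled_KJ.safetensors $COMFYUI_DIR/models/diffusion_models
https://huggingface.co/lightx2v/Wan2.2-Lightning/resolve/main/Wan2.2-I2V-A14B-4steps-lora-rank64-Seko-V1/high_noise_model.safetensors $COMFYUI_DIR/models/loras
https://huggingface.co/lightx2v/Wan2.2-Lightning/resolve/main/Wan2.2-I2V-A14B-4steps-lora-rank64-Seko-V1/low_noise_model.safetensors $COMFYUI_DIR/models/loras
https://huggingface.co/Kijai/WanVideo_comfy/resolve/main/Wan2_1_VAE_bf16.safetensors $COMFYUI_DIR/models/vae
https://huggingface.co/Kijai/WanVideo_comfy/resolve/main/umt5-xxl-enc-bf16.safetensors $COMFYUI_DIR/models/text_encoders
EOF
}

# Dry run: show where each file would land.
plan_downloads | while read -r url dest; do
  echo "$(basename "$url") -> $dest"
done

# To actually fetch everything (tens of GB), run this loop instead:
# plan_downloads | while read -r url dest; do
#   mkdir -p "$dest" && curl -L --fail -o "$dest/$(basename "$url")" "$url"
# done
```

Note that the Lightning files download as `high_noise_model.safetensors` / `low_noise_model.safetensors`; rename them after download if you want the longer names shown above.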
---
⚠️⚠️⚠️ Prompt Optimization
This feature only works on the online platform. For local use, you need to disable the prompt optimization node.
---
⚠️ Torch Compile Warning
If your setup doesn't support torch compile, set the attention mode to sdpa in the model loader, bypass the Torch Compile settings, and set base_precision to fp16.
---
⚠️ Block Swapping
Block swapping helps if you have low VRAM and/or hit out-of-memory (OOM) errors. Start with it bypassed, then raise the number of blocks to swap (up to 40) until the OOM goes away. It slows generation, so skip it if you have enough VRAM.
---
⚠️ Other known issues
If you get an error mentioning "FlowMatch", change the scheduler from FlowMatch_Causvid to uni_pc (or another scheduler you prefer; dpm++_sde/beta also works well).