
WAN2.1-VACE-14B-6steps-GGUF All in One (t2v-i2v-v2v-controlnet-masking) simple ComfyUI workflow - 14Bv20250621


MODEL GUIDE

Use a VACE model + CausVid and/or Self-Forcing LoRA

14B for quality

1.3B for faster inference

Swap the GGUF Loader node for a Load Diffusion Model node when using .safetensors files

==========================================================================

14B VACE GGUF model + CausVid LoRA (6 steps only)

https://huggingface.co/QuantStack/Wan2.1_14B_VACE-GGUF/tree/main

https://huggingface.co/julienssss/causevidlora/blob/main/Wan21_CausVid_14B_T2V_lora_rank32.safetensors

or

14B FusionX VACE GGUF (CausVid already merged in)

https://huggingface.co/QuantStack/Wan2.1_T2V_14B_FusionX_VACE-GGUF

==========================================================================

1.3B VACE Self-Forcing model (6 steps only, no CausVid LoRA needed)

https://huggingface.co/lym00/Wan2.1-T2V-1.3B-Self-Forcing-VACE-Addon-Experiment/blob/main/Wan2.1-T2V-1.3B-Self-Forcing-DMD-VACE-FP16.safetensors

*The 1.3B VACE GGUF gives poor results; use the .safetensors file above instead

==========================================================================

SWITCH GUIDE

Text to video = all OFF

Image reference to video = Image1 ON

Image to video = Image1 + FLF ON

First & Last Frame to video = Image1+2+FLF ON

FLF video control = Image1+2+VidRef+FLF+control ON

V2V style change = Image1+VidRef+controlnet ON

V2V subject change = Image1+VidRef+control+SAM ON

V2V background change = same as above + invert mask ON
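The switch guide above can be transcribed as a small lookup table, e.g. in Python. This is a sketch for reference only: the toggle names are assumptions mirroring the labels above, so adjust them to match the actual switch-node titles in your copy of the workflow.

```python
# Mode -> set of switches to turn ON (all others stay OFF),
# transcribed from the SWITCH GUIDE. Toggle names are assumed
# to match the workflow's switch-node labels.
SWITCHES = {
    "text_to_video": set(),
    "image_reference_to_video": {"Image1"},
    "image_to_video": {"Image1", "FLF"},
    "first_last_frame_to_video": {"Image1", "Image2", "FLF"},
    "flf_video_control": {"Image1", "Image2", "VidRef", "FLF", "control"},
    "v2v_style_change": {"Image1", "VidRef", "controlnet"},
    "v2v_subject_change": {"Image1", "VidRef", "control", "SAM"},
    "v2v_background_change": {"Image1", "VidRef", "control", "SAM", "invert_mask"},
}

def switches_for(mode: str) -> set[str]:
    """Return the switches to enable for a given mode."""
    return SWITCHES[mode]
```

For example, `switches_for("v2v_subject_change")` lists Image1, VidRef, control, and SAM; background change is the same set plus the inverted mask.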
