Daxamur's WAN 2.2 Workflows v1.2 (FAST | Upscale | Interpolation | GGUF | Easy Bypass | Audio)

Updated: Aug 17, 2025

Tags: tool, lightning, video, wan, gguf, i2v

Type: Workflows

Published: Aug 15, 2025

Base Model: Wan Video 2.2 I2V-A14B

Hash (AutoV2): F3D3832EE2

Daxamur

Daxamur's Wan 2.2 Workflows

If you want to support my work, Buy me a Coffee!
DM to inquire about custom projects.

-NEWS-

v1.3 is currently cooking. I'm experimenting with some novel methods of extending the video generation, as various iterations of I2V and blended conditioning were just not cutting it, in my opinion. Mechanisms for using these methods don't really exist within ComfyUI today, at least not cleanly, and several other quality-of-life functions I'd like for v1.3 don't have custom nodes yet either, so I've opted to go ahead and create my own set of custom nodes. This will delay v1.3 a little, but will hopefully be well worth it!

v1.2 is out now, utilizing a triple-sampler method for far better quality, prompt adherence and motion. With the default settings included in the flow, generation takes about two minutes longer (for me), but the results are pretty amazing in my opinion.

Thanks to @lug_L for pointing me to this method!

While I prefer v1.2's output to even base WAN 2.2 about 90% of the time, extensive testing has shown that the enhanced adherence brings output degradation in certain scenarios (large sweeping or quick camera movements, zoomed-out characters, and other niche concepts), whereas earlier versions would simply ignore those concepts entirely. I've come to accept that this is a limitation of WAN 2.2 itself in some scenarios, and of the lightx2v / seko LoRAs in others, and thus cannot be solved with workflow tweaks alone (currently - an update will be dropped as soon as there is a resolution).

Notes

I've done my best to place most nodes that you'd want to configure at the lower portion of the flow (roughly) sequentially, while most of the operational / backend stuff sits at the top. Nodes have been labeled according to their function as clearly as possible.

Beyond that:

  • NAG Attention is in use, so it is recommended to leave the CFG set to 1.

  • The sampler and scheduler are set to uni_pc // simple by default as I find this is the best balance of speed and quality. (1.1> Only) If you don't mind waiting (a lot, in my experience) longer for some slightly better results, then I'd recommend res_3s // bong_tangent from the RES4LYF custom node.

  • I have set the default number of steps to 8 (4 steps per sampler) as opposed to 4, as this is where I see the most significant quality / time tradeoff - but this is really up to your preference.

  • This flow will save finished videos to ComfyUI/output/WAN/<T2V|T2I|I2V>/ by default.
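
For quick reference, here is a minimal summary of those defaults as a plain Python dict. This is purely illustrative; the real values live in the workflow's sampler and save nodes, and the key names are mine, not ComfyUI's.

```python
# Illustrative summary of the v1.2 defaults described above.
# Key names are just labels for this note, not actual node inputs.
WAN22_DEFAULTS = {
    "cfg": 1.0,                  # NAG Attention is in use, so CFG stays at 1
    "sampler_name": "uni_pc",
    "scheduler": "simple",
    "steps_total": 8,            # 4 steps per sampler
    "output_dir": "ComfyUI/output/WAN/",  # saved under T2V / T2I / I2V subfolders
}
```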

I2V

  • For I2V, I find that generally Wan 2.2 does better if the input image's resolution is above the resolution you are sampling at (as opposed to resizing to fit the sampling resolution prior to executing) - but I haven't tested this super extensively.

  • The custom node flow2-wan-video conflicts with the Wan image-to-video node and must be removed for the I2V flow to work. I have found that this node does not get completely removed from the custom_nodes folder when uninstalling via the ComfyUI Manager, so it must be deleted manually (a small removal sketch follows this list).
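
If you'd rather script the cleanup than hunt for the folder, here is a minimal sketch. It assumes a standard install layout (ComfyUI/custom_nodes/); adjust the root path if your install lives elsewhere, and restart ComfyUI afterwards so the node list refreshes.

```python
# Minimal sketch: delete the leftover flow2-wan-video folder that the
# ComfyUI Manager uninstall can leave behind. Adjust COMFYUI_ROOT as needed.
from pathlib import Path
import shutil

COMFYUI_ROOT = Path("ComfyUI")  # assumption: your ComfyUI install directory
leftover = COMFYUI_ROOT / "custom_nodes" / "flow2-wan-video"

if leftover.exists():
    shutil.rmtree(leftover)
    print(f"Removed {leftover}")
else:
    print("flow2-wan-video not found - nothing to remove")
```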

GGUF

  • All models used with the GGUF versions of the flows are the same, with the exception of the base high- and low-noise models. You will need to determine which GGUF quant best fits your system, then set the correct model in each respective Load WAN 2.2 GGUF node. As a rule of thumb, your GGUF model should ideally fit within your VRAM with a few GB to spare (a rough fit check is sketched after this list).

  • The examples for the GGUF flows were created using the Q6_K quant of WAN 2.2 I2V and T2V.

  • The WAN 2.2 GGUF quants tested with this flow come from the following locations on huggingface:
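
For the VRAM rule of thumb mentioned above, a rough sanity check is sketched below. The headroom value and the example file name are assumptions on my part; substitute whichever quant you actually downloaded.

```python
# Rough rule-of-thumb check: will a GGUF quant fit in VRAM with a few GB to spare?
# Treats file size as a proxy for resident model size - this is only approximate.
import os

def fits_in_vram(gguf_path: str, vram_gb: float, headroom_gb: float = 3.0) -> bool:
    size_gb = os.path.getsize(gguf_path) / (1024 ** 3)
    return size_gb + headroom_gb <= vram_gb

# Hypothetical example: a Q6_K quant on a 24 GB card
# print(fits_in_vram("wan2.2_i2v_high_noise_Q6_K.gguf", vram_gb=24))
```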

MMAUDIO

  • To set up MMAUDIO, you must download the MMAUDIO models below, create an "mmaudio" folder in your models directory (ComfyUI/models/mmaudio), and place every mmaudio model downloaded into this folder (even apple_DFN5B-CLIP-ViT-H-14-384_fp16.safetensors).
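
A minimal sketch of that setup step is below. The source folder is an assumption (point it at wherever your downloads landed); only the apple_DFN5B CLIP file name is taken from the note above.

```python
# Minimal sketch: create ComfyUI/models/mmaudio and move every downloaded
# MMAudio model into it (including apple_DFN5B-CLIP-ViT-H-14-384_fp16.safetensors).
from pathlib import Path
import shutil

COMFYUI_ROOT = Path("ComfyUI")           # assumption: your ComfyUI install directory
DOWNLOADS = Path("downloads/mmaudio")    # assumption: where the downloaded models sit

mmaudio_dir = COMFYUI_ROOT / "models" / "mmaudio"
mmaudio_dir.mkdir(parents=True, exist_ok=True)

for model_file in DOWNLOADS.iterdir():
    if model_file.is_file():
        shutil.move(str(model_file), str(mmaudio_dir / model_file.name))
        print(f"Moved {model_file.name} -> {mmaudio_dir}")
```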

Block Swap Flows

  • These flows are being discontinued, as I have found that native ComfyUI memory swapping conserves more memory and slows the process down less in my testing. If you hit OOM errors with the base v1.2 flows, I'd recommend trying out the GGUF versions!

Triton and SageAttention Issues

  • The most frequent issues I see users encounter are related to the installation of Triton and SageAttention - and while I'm happy to help out as much as I can, I am but one man and can't always get to everyone in a reasonable time. Luckily, @CRAZYAI4U has pointed me to Stability Matrix, which can auto-deploy ComfyUI and has a dedicated script for installing Triton and SageAttention.

  • You will first need to download Stability Matrix from their repository, then download ComfyUI via their hub. Once ComfyUI has been deployed via the hub, click the three horizontal dots at the top left of the ComfyUI instance's entry, select "Package Commands", then "Install Triton and SageAttention". Once complete, you should be able to import the flow, install any missing dependencies via the ComfyUI Manager, drop in your models and start generating!

  • I'll spin up a dedicated article with screenshots on this soon.

Models Used

T2V (Text to Video)

I2V (Image to Video)

MMAUDIO

Non-Native Custom_Nodes Used

Flows

T2V: UP

T2V + MMAUDIO: UP

T2V GGUF: UP

T2V Block Swap: UP

I2V: UP

I2V + MMAUDIO: UP

I2V GGUF: UP