
Wan 2.2 ComfyUI workflow favorites

Edits:

  • 2025-12-12 - updated sampler/scheduler, high/low discussion

The world of AI video generation is moving fast, and I'm constantly experimenting with new tools and workflows to get better, faster, or just more interesting results. Because it would be painful to keep these ideas up to date across my various LoRA pages, I decided to put my experiments and settings in one place that I can link to. And so here we are!

I almost always embed my ComfyUI workflow in my videos, so you can always try saving a video you like and loading the workflow by dragging the .mp4 file into your Comfy workspace. A few people on Civitai have told me that this sometimes doesn't work for them. Perhaps Civitai is giving them a different video file than the one I uploaded? I don't know, but just be aware that this solution won't always work.
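If the drag-and-drop trick fails, you can at least check whether the workflow metadata survived the download. Below is a minimal sketch, assuming ffprobe is on your PATH and that the workflow JSON lives in the file's format-level metadata tags (where ComfyUI's video-save nodes typically put it); if the host re-encoded the file, those tags are usually stripped, which would explain the failures.

```python
# Hedged sketch: look for an embedded ComfyUI workflow in a downloaded .mp4.
# Assumes ffprobe is installed and that the workflow JSON is stored in the
# file's format-level metadata tags; a re-encoded file will have none.
import json
import subprocess

def extract_workflow(path: str) -> dict | None:
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", path],
        capture_output=True, text=True, check=True,
    ).stdout
    tags = json.loads(out).get("format", {}).get("tags", {})
    for value in tags.values():
        try:
            data = json.loads(value)
            if isinstance(data, dict) and "nodes" in data:  # workflow-shaped JSON
                return data
        except (TypeError, json.JSONDecodeError):
            continue
    return None

if __name__ == "__main__":
    wf = extract_workflow("video.mp4")
    print("workflow found" if wf else "no embedded workflow (re-encoded?)")
```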

The information below represents my current favorite configuration as of December 4, 2025. Capturing "my favorite settings" will always be a moving target, so don't be surprised if this article changes frequently.

Sage Attention

In the past, installing Sage Attention was a huge pain in the ass. I recently found a great YouTube video that shows a very easy and simple installation process that even works with the (currently) latest 5000-series NVIDIA cards. It's a great step-by-step walkthrough of how to install Sage Attention without a bunch of Python virtual-environment compilation nonsense, and it works for both the ComfyUI portable install and a normal install. The video is here.
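As a quick way to confirm the install took, here's a tiny check, assuming Sage Attention was installed as the sageattention PyPI package; run it with the same Python interpreter that runs ComfyUI. Once it imports cleanly, recent ComfyUI builds can enable it with the --use-sage-attention launch flag.

```python
# A minimal sanity check, assuming the "sageattention" PyPI package.
# Run with ComfyUI's own interpreter (for the portable install, that's
# python_embeded\python.exe) so you're testing the right environment.
try:
    import sageattention
    version = getattr(sageattention, "__version__", "unknown version")
    print(f"sageattention is importable ({version})")
except ImportError as exc:
    print(f"sageattention is NOT installed here: {exc}")
```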

ComfyUI Wan 2.2 setup

As of this writing, all the Wan LoRAs I've published were trained for Wan 2.1. They work fine in Wan 2.2, but since they really only operate in the Low Noise (second-stage) section of the render, my preferred settings are skewed a little to take that into account. Right now, I'm getting my best results with these settings...

High Noise

  • 4 steps

Low Noise

  • 8 steps

Overall

  • 10-12 steps

  • With Sage Attention turned on, on my 5070ti, I can render a 640x960 video at just under 30 seconds per step.

With Wan 2.1 LoRAs in use, you need more steps in the Low Noise segment to give the 2.1 LoRAs time to take effect; the textbook 50/50 split between High and Low doesn't work well in these cases. I've found that this 4-step/8-step split gives the High Noise stage enough steps to establish realistic action and movement while still letting the Wan 2.1 LoRAs do their thing at 1.00 strength. That is, you usually don't have to over-use those LoRAs at 1.50 or 2.00 just to see them work.
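To make the split concrete, here is a minimal sketch of how the 4/8 division maps onto two chained KSamplerAdvanced nodes, which is how the stock Wan 2.2 templates hand off from the high-noise model to the low-noise model. The field names are real KSamplerAdvanced inputs; the model labels are placeholders, and seed, cfg, sampler, and scheduler are omitted since those are up to you.

```python
# Sketch of the 4-step/8-step split across the two Wan 2.2 experts,
# expressed as the settings for two chained KSamplerAdvanced nodes.
# Model labels are placeholders; seed/cfg/sampler/scheduler are omitted.
TOTAL_STEPS = 12

high_noise_pass = {
    "model": "wan2.2_high_noise",           # high-noise expert
    "add_noise": "enable",                  # this pass creates the initial noise
    "steps": TOTAL_STEPS,
    "start_at_step": 0,
    "end_at_step": 4,                       # 4 High Noise steps
    "return_with_leftover_noise": "enable", # hand the noisy latent to the next pass
}

low_noise_pass = {
    "model": "wan2.2_low_noise",            # low-noise expert + Wan 2.1 LoRAs at 1.00
    "add_noise": "disable",                 # continue from the leftover noise
    "steps": TOTAL_STEPS,
    "start_at_step": 4,
    "end_at_step": TOTAL_STEPS,             # 8 Low Noise steps
    "return_with_leftover_noise": "disable",
}

print(high_noise_pass["end_at_step"] - high_noise_pass["start_at_step"], "high +",
      low_noise_pass["end_at_step"] - low_noise_pass["start_at_step"], "low steps")
```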

Sampler Update: After reading some recommendations for Wan 2.2 text-to-image workflows, I tried a new sampler/scheduler combo for image creation, and it made such a vast difference that I decided to try it in videos as well. The results are much more realistic, looking more like a cinematic movie than the videos I was able to make with uni_pc or euler. If you're making anime or something similar, that might not be great for you, but if you want realistic clips, you should give this a try. It's not even any slower! I see some people using the clownshark-whatever video renderer with its special samplers, but when I use it, the thing is an order of magnitude slower, and I just don't have the patience.

High and Low Steps Update: I've been playing with a 3-part workflow where the Low Noise steps are split into two sections, and I reserve the final two steps for a Low Noise render with no LoRAs at all except the Lightning LoRA. This lets the Wan sampler just clean up the final details and give things a polish before saving. By those last two steps, pretty much all of the motion and shapes are established, so it's just about rendering textures and fine details. I think the results look pretty nice, so I may publish an alternate version of the workflow with that setup sometime soon. Until then, you can swipe a workflow from one of my recently published videos, like this one.
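For the curious, here's the earlier sketch extended to that 3-part variant. The 10/12 boundary is just my reading of "the final two steps" on a 12-step render, so treat the exact numbers as an assumption.

```python
# Hedged sketch of the 3-part variant: the Low Noise segment from the sketch
# above is split in two, and the last two steps run with only the Lightning
# LoRA loaded. Boundaries assume a 12-step render; model labels are placeholders.
low_noise_with_loras = {
    "model": "wan2.2_low_noise + Wan 2.1 LoRAs + Lightning",
    "add_noise": "disable",
    "steps": 12,
    "start_at_step": 4,
    "end_at_step": 10,
    "return_with_leftover_noise": "enable",  # one more pass still to come
}

low_noise_polish = {
    "model": "wan2.2_low_noise + Lightning only",  # LoRA-free polish pass
    "add_noise": "disable",
    "steps": 12,
    "start_at_step": 10,                      # the final two steps
    "end_at_step": 12,
    "return_with_leftover_noise": "disable",  # fully denoised, ready to save
}
```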
