Wan2.2 - continuous generation (subgraphs)

Type: Workflows
Published: Aug 31, 2025
Base Model: Wan Video 2.2 I2V-A14B
Hash (AutoV2): 400889ED83

v0.4;

Wasn't sure whether to call this a v0.3.1 or v0.4 but it took a while to troubleshoot some issues so 0.4 it is.

I would never have guessed tiled decode would cause this many issues, but here we are. Most of the clearly visible artifacts were related to it, so I've switched to plain VAE decode and also edited its variables. Switch back to tiled and try lowering the values if you get OOM, but I'd suggest not messing with temporal_size, since setting it lower than the generated frame count causes color changes around those frames. The same goes for tile size, which causes "tiled" square burn-ins.
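Purely as an illustration of why those seams show up (this is not ComfyUI's actual decode code), here's a tiny sketch of how a temporal chunked decode splits the frames; the boundaries between chunks are where the color changes appear:

# Illustrative only: chunk boundaries of a temporal tiled decode.
def temporal_chunks(frame_count: int, temporal_size: int):
    """Return the (start, end) frame ranges an independently chunked decode would process."""
    return [(s, min(s + temporal_size, frame_count))
            for s in range(0, frame_count, temporal_size)]

# 81 generated frames with temporal_size=64 -> two independent chunks,
# so a seam (color shift) can appear around frame 64.
print(temporal_chunks(81, 64))   # [(0, 64), (64, 81)]
# With temporal_size >= 81 there is a single chunk and no temporal seam.
print(temporal_chunks(81, 96))   # [(0, 81)]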

I also suggest using the fp32 VAE for the least quality loss:
https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Wan2_1_VAE_fp32.safetensors

Added a few more frames to the temporal blend. It's still not perfect but, as mentioned before, sometimes it works, and it looks good when it does.

There's still a little bloom effect going on around the 1-minute mark. I switched to the official lightx2v LoRAs and they seem to fix the bloom a little, but they also seem to make the video overexposed as it progresses, so I'll stick with the ones on Kijai's repo; just be aware of the ghosting issue.

Not sure if the quantization level affects it at all, but higher-quality quants are suggested for the I2V model if your system can handle it.

I have not tested the newer comfyui-frontend releases for subgraph updates, so it's safe to stick to version 1.26.2 for now using:

.\python_embeded\python.exe -m pip install comfyui_frontend_package==1.26.2

Also, simply leaving the T2V output unconnected will now make the workflow skip it.

I also saw the new node added for Wan; I'll take a look at it when I have more time.

Guess we aren't getting as many generations shared when I don't post on Reddit :( What have all those followers been generating secretly? Let us see them \o-o/


v0.3;

As I mentioned in previous edits, please use ComfyUI frontend 1.26.2 until I can confirm a stable newer version in the future. You can find the command below under v0.2 or in the v0.3 changelog.

This time I'm here with a little improvement to the final merged save feature. Part files are saved separately and only merged once everything is completed. It should be better optimized and take less space in temp, although the final save could be a little memory-consuming since everything gets loaded at once; the file sizes are not huge, though.

It's worth mentioning that I've implemented a very basic temporal motion blur by blending the previous 5 frames with various weights. It doesn't solve everything, but sometimes the transition looks seamless. Please share your experience if you've changed the weights and found something better in your generations.
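As a rough illustration of the blending idea (the weights below are made up for the example; the workflow's own values and node setup may differ):

import numpy as np

def blend_transition(prev_frames, weights=(0.4, 0.25, 0.15, 0.12, 0.08)):
    """prev_frames: (N, H, W, C) float array of the last N frames, newest first."""
    w = np.asarray(weights, dtype=np.float32)[: len(prev_frames)]
    w = w / w.sum()                        # normalize so the blend keeps brightness
    return np.tensordot(w, prev_frames[: len(w)], axes=1)

# Example: blend the last 5 frames of a part and feed the result in as a
# softer start_image for the next part.
frames = np.random.rand(5, 480, 832, 3).astype(np.float32)
start_image = blend_transition(frames)
print(start_image.shape)  # (480, 832, 3)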

I've also put 3 LoRA loader nodes in each part to give you an idea of how to load part-specific LoRAs. You can extend from there.

There is also now an upscale subgraph using a simple Upscale Image (using Model) node inside the final save subgraph, or you can connect and use the basic Upscale Image By node I've put there. Both are bypassed by default, since upscaling takes time and the basic upscale doesn't change much.

And finally, the default video format is now vertical.

I'm kinda happy with how it turned out, but prompt adherence decreases a little near the end. I don't know if it was just the scene I was trying to get, but using a higher-Q (higher-quality quant) CLIP might help.

Finally, I seem to be the #1 ranked asset creator as of now, and it means a lot :) Thanks to everyone, especially to those who share feedback and... interesting generations.

Looking forward to more, happy generating!


v0.2;

  • Edit 2:

    • They seem to have broken linked subgraphs and bypassing a subgraph without bypassing the nodes inside it, so you need to roll back to ComfyUI frontend 1.26.2 for it to work for now;

.\python_embeded\python.exe -m pip install comfyui_frontend_package==1.26.2
  • Edit:

    • Found out the transition frame was getting duplicated again; fixed.

    • Added a video-to-MP4 converter workflow for when a generation fails and you want to convert the latest merged .mkv file into a smaller file that you can open.

    • Fixed the default framerate not being the suggested 16.

I went ahead and did a few more experiments. Since I kind of liked subgraphs, I want them to be a little more mainstream and known :) So I moved model loading, as well as a few other options, inside the subgraphs so the community can get their hands dirty and the main page looks cleaner. It might be a little more complicated, but I've added a few notes to show you around, so don't hesitate to take a look.

I've implemented video merging. However, since each part is dynamically merged with the previous ones, compression artifacts become a problem: the first parts would get re-compressed many times. To prevent this, I've decided to save those parts with lossless-quality FFV1 MKV. There's still a final save node at the end of the workflow to save the final output in H.264 MP4 format.
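For reference, doing the same merge manually outside ComfyUI would look roughly like this with ffmpeg (the file names here are just examples): the parts stay lossless, so they never get re-compressed, and only the final output is encoded to H.264 once.

import pathlib
import subprocess

# Collect the lossless FFV1/MKV parts and write an ffmpeg concat list.
parts = sorted(pathlib.Path("temp_parts").glob("part_*.mkv"))
with open("parts.txt", "w") as f:
    for p in parts:
        f.write(f"file '{p.as_posix()}'\n")

# Concatenate the parts and encode the final H.264 MP4 in a single pass.
subprocess.run([
    "ffmpeg", "-f", "concat", "-safe", "0", "-i", "parts.txt",
    "-c:v", "libx264", "-crf", "18", "-pix_fmt", "yuv420p", "final.mp4",
], check=True)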

With default settings, a 30s generation (832x480, 81 frames per part, 6 I2V parts) produces files that take less than 1 GB of space in the temp folder (which is cleaned every time ComfyUI restarts); you can also delete them manually once the generation is complete.
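Rough math behind the 30s figure, assuming the suggested 16 fps and ignoring the shared transition frames between parts:

# 6 I2V parts x 81 frames each, played back at 16 fps
frames_per_part, parts, fps = 81, 6, 16
print(frames_per_part * parts / fps)  # ~30.4 seconds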

I've also added an interpolation node (bypassed by default) if you want to use it for the final output, but it takes a while for 6x81 frames.

Another thing I've implemented is saving the last images to the outputs folder, along with a global seed option that works for all samplers, in case you want to continue an old generation or generate again with some parameters changed.

You might want to clean those from the outputs folder every once in a while if you don't need them anymore, or disable the saving manually.

And thanks to the community for the feedback both here and on Reddit, I appreciate it. I'd love to see some generations shared here as well :)

For now, I might not publish another update in the near future unless I implement a really significant feature, like smoother transitions.


This workflow is simply an experiment with ComfyUI's new subgraph nodes.
It works by feeding the last frame of one video generation in as the first-frame input of the next generation. But instead of a huge spaghetti of nodes, you get a single I2V node that shares the same KSampler subnodes and iterates over and over again. You can think of it like FramePack, but in ComfyUI, and you can also prompt each generation separately. I kept the negatives common, but that could be made dynamic as well.
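In plain pseudocode the idea is roughly this; generate_i2v() is a made-up placeholder standing in for one I2V subgraph run (sampling plus decode), not a real ComfyUI node or API:

def generate_i2v(start_image, prompt, num_frames=81):
    # Placeholder: in the workflow this is a whole subgraph (KSampler passes,
    # VAE decode). Here it just returns dummy frames so the loop runs.
    return [start_image] * num_frames

def continuous_generation(first_frame, prompts, frames_per_part=81):
    all_frames = []
    start_image = first_frame
    for prompt in prompts:                       # one positive prompt per part
        frames = generate_i2v(start_image, prompt, num_frames=frames_per_part)
        all_frames.extend(frames)
        start_image = frames[-1]                 # last frame seeds the next part
    return all_frames

video = continuous_generation("initial frame", ["part 1 prompt", "part 2 prompt"])
print(len(video))  # 162 frames for two parts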

I did not implement a stitching option since I'm doing that in basic video editing software, but that could be added as well.

Edit: I forgot to save the output videos to the output folder, since I had decided not to implement merging them later on. So the files end up in the comfyui/temp folder (this folder is cleaned up the next time you run ComfyUI).

The first video in 0.1 is 834x480, sampled 1/3/3, and took 23 minutes to generate. The second one is 624x368, sampled 1/2/2, and took 13 minutes. The second video was also generated with the up-to-date version, which has the fix for last_frame showing twice.

Don't forget to update your ComfyUI frontend, since old versions were quite buggy. It might still have some bugs, so be aware. Update command for the portable version:

.\python_embeded\python.exe -m pip install comfyui_frontend_package --upgrade

The sampling process is 1 + 3 + 3, where the first step is run with no speed LoRA applied and the others are the basic high + low samplings with the speed LoRA applied. Everything is pretty much customizable, but remember: "THE KSAMPLER SUBNODE IS USED BY ALL MAIN NODES, SO CHANGES APPLY TO ALL OF THEM AT ONCE!" The same goes for the I2V_latent subnode if you want to change the output resolution and the length of each part.
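Schematically, one part's sampling looks roughly like this; sample() is just a placeholder for a KSampler (advanced) pass over a step range, not code taken from the workflow, and the model names are hypothetical:

def sample(latent, model, total_steps, start_step, end_step):
    # Placeholder for one KSampler (advanced) pass over [start_step, end_step).
    return latent

def sample_part(latent, high_model, high_speed_model, low_speed_model):
    total = 7                                               # 1 + 3 + 3 steps overall
    latent = sample(latent, high_model, total, 0, 1)        # 1 step, no speed LoRA
    latent = sample(latent, high_speed_model, total, 1, 4)  # 3 high-noise steps, speed LoRA applied
    latent = sample(latent, low_speed_model, total, 4, 7)   # 3 low-noise steps, speed LoRA applied
    return latent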

To extend the generation, simply click on one of the I2V subnodes, copy it, and hit Ctrl + Shift + V to paste it with all connections; then you can connect the last image of your previous node into its start_image input. You can also connect a Load Image node to the initial I2V start_image input to bypass T2V generation (don't bypass nodes using the keyboard shortcut, it might break the subnode).

I couldn't get it to work with native models since it kept crashing on my system, so everything is implemented with GGUF quantized models. Feel free to change the process and disable the Patch Sage Attention and model compile nodes in the model loader subgraph, but the speed hit is noticeable.

On my 4070 Ti with Sage++ and torch compile enabled, T2V + 6x I2V (30s total) took about 20-25 minutes.

Hope we get even better workflows from the community in the future :)