
QuadForge / Wan 2.2 I2V / SVI 2.0 Pro / automated multi-part ComfyUI Workflow

Updated: Jan 3, 2026

Type: Workflows

Published: Jan 2, 2026

Base Model: Wan Video 14B i2v 720p

Hash (AutoV2): 45F308CFE0

Happy about any feedback if it works for you, and of course if you tag the generations you made with it :).


Troubleshooting/best practice tips:

  • The SVI-LoRA is mandatory in each part for the workflow to function (link in the workflow).

  • The SVI node does not seem to update via ComfyUI-Manager; you'll need to install it manually via Git (link in the workflow; a sketch of the typical manual install follows this list).

  • For Sage Attention, this guide on YouTube helped me a lot: "How to Install Sage Attention 2.2 on Latest ComfyUI Portable And Desktop Version".

  • Reduce the output bitrate to about 10 if you're having trouble uploading to CivitAI.

  • Start with 2x, 3x, and 4x turned off and at a lower resolution (e.g. k20), then go from there.
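
If ComfyUI-Manager refuses to update the SVI node, the usual fallback is a manual clone into custom_nodes. Below is a minimal sketch of that pattern, not the node's official install procedure: the repository URL is a placeholder (the real link is in the workflow), and the paths assume a default ComfyUI layout.

```python
# Hedged sketch of a manual custom-node install; REPO_URL is a placeholder,
# the real SVI repository link is inside the workflow.
import subprocess
from pathlib import Path

COMFY_ROOT = Path("ComfyUI")                                  # adjust to your install
REPO_URL = "https://github.com/<author>/<svi-node-repo>.git"  # placeholder

custom_nodes = COMFY_ROOT / "custom_nodes"
subprocess.run(["git", "clone", REPO_URL], cwd=custom_nodes, check=True)

# Many custom nodes ship a requirements.txt; install it into the same
# Python environment that runs ComfyUI.
requirements = custom_nodes / Path(REPO_URL).stem / "requirements.txt"
if requirements.exists():
    subprocess.run(["pip", "install", "-r", str(requirements)], check=True)
```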

v1.2:

  • Workflow reworked to support SVI 2.0 Pro, which uses the starting image and latent video from the previous clip to:

    • Improve transitions

    • Enable cross-clip memory for faces and other attributes

  • Resolution is now set only once, in part 1, and carried over to all subsequent clips.

  • Added switches for quickly switching between GGUF and diffusion/.safetensors models (e.g., SmoothMix). Deactivate LightX2v when using SmoothMix, as it is already embedded.

    • Note: SmoothMix changes faces and appearances quite a bit with this workflow compared to the GGUFs of base WAN.

  • Sage Attention and patched Torch are now part of the model inputs and can be bypassed globally.

  • Output folder structure updated: partial videos are now saved in the same folder as last-frame images.

  • Separate switches for the last-frame and all-frames image previews.


v1.1:

  • Major declutter: Removed most crossing lines using set/get, subgraphs, and anything-everywhere nodes.

  • Single model input propagated everywhere.

  • Color-coded sections + cleaner layout.

CustomResolution Node Update:

  • 9:16 & 16:9 support with shared resolution tiers.

  • 24 & 30 FPS support.

  • Auto-rounding to multiples of 8 for manual inputs (see the sketch after the repo link below).

https://github.com/AugustusLXIII/ComfyUI_CustomResolution_I2V/tree/main
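
For the auto-rounding, here is a minimal sketch of the idea, assuming simple nearest-multiple rounding (the node's exact behaviour may differ):

```python
def round_to_multiple_of_8(value: int) -> int:
    """Snap a manually entered dimension to the nearest multiple of 8."""
    return max(8, round(value / 8) * 8)

# Example: a manual 1283x723 input becomes 1280x720.
print(round_to_multiple_of_8(1283), round_to_multiple_of_8(723))
```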

v1.01:

Changes:

  • Missing save-last-frame node added for scene 1.

  • Switches for image preview (reduces generation time) and interpolation added.


Update: My CustomResolutionI2V node is now available via ComfyUI-Manager. You might need to update ComfyUI-Manager if you can't find it.


QuadForge WAN 2.2 – ComfyUI Workflow

A powerful and flexible img2vid chaining workflow optimized for WAN 2.2 GGUF with the lightx2v-LoRA (6 steps).

Start with a single input reference image and generate up to four consecutive video segments. Each segment automatically uses the last frame of the previous clip as its starting image for perfect continuity. All segments are stitched together seamlessly, with final video interpolation (RIFE) applied as the last step for a smooth, high-quality output.
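
Conceptually, the chaining loop looks like the sketch below (plain Python, outside ComfyUI; generate_segment is a stand-in for one WAN 2.2 i2v run, not an actual node or API of the workflow):

```python
from typing import Callable, List, Sequence

def chain_segments(reference_image,
                   prompts: Sequence[str],
                   generate_segment: Callable) -> List:
    """Chain up to four i2v segments; each one starts from the previous last frame."""
    all_frames: List = []
    start = reference_image
    for prompt in prompts:                      # one prompt (and LoRA set) per segment
        frames = generate_segment(start, prompt)
        all_frames.extend(frames)               # stitch segments back to back
        start = frames[-1]                      # last frame seeds the next clip
    return all_frames                           # interpolate (e.g. RIFE) afterwards
```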

Key features:

  • Adjustable number of generations (1–4) via simple switches

    • Tip: Start with 2x, 3x, and 4x turned off and add them one by one once you like the result of the previous part. For this to work, you have to keep the seeds, LoRAs, and prompts fixed. To save time, bypass the video interpolation node until you are happy with your final result.

  • Independent prompt and LoRA control for each segment

  • SageAttention enabled for an approx. 40% speedup (bypass the two nodes in each step if it is not installed yet or not supported by your GPU)

  • Automatic last-frame extraction and saving after every generation

  • Quick resolution and clip length adjustments using my own CustomResolution node (available via ComfyUI Manager): https://github.com/AugustusLXIII/ComfyUI_CustomResolution_I2V

  • Fully automated stitching

  • RAM cleanup and model unload nodes for better stability

Ideal for creating long, consistent tracking shots, progressive scene reveals, or any multi-stage animated sequence from one starting image.

Notes:

  • Lots of spaghetti but it works (will be polished in future updates)

  • Framerate output in the CustomResolution node is currently fixed to 16 fps (I will make that adjustable as well; see the quick duration math after these notes)

  • How-to manual: TBD
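
As a quick sanity check on clip length at the fixed 16 fps output, here is the arithmetic; the 81-frame count is only an illustrative assumption, not a constant of the workflow:

```python
frames_per_clip = 81          # illustrative example, not a fixed value of the workflow
clips = 4
fps_out = 16

duration_s = clips * frames_per_clip / fps_out
print(f"{clips} clips x {frames_per_clip} frames = {duration_s:.2f} s at {fps_out} fps")
# 2x frame interpolation (e.g. RIFE) doubles the frame count; played back at 32 fps
# the duration stays the same, only the motion gets smoother.
```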