🍥 Wan 2.2 (GGUF) [i2v / FFLF] + [t2v] Workflow

Type: Workflows
Published: Dec 6, 2025
Base Model: Wan Video 2.2 I2V-A14B
Hash (AutoV2): C5AB90B621
Creator: Lannfield

⚠️ Requires 9 custom node packs (10 custom node packs if using MMAudio)
⚠️ Uses GGUF-quantized Wan 2.2 base models (for lightweight / low-VRAM setups)
⚠️ Uses Lightx2v distill LoRAs for low-step generation
⚠️ Uses Subgraphs
⚠️ Dense backend node processing
⚠️ Only tested in the ComfyUI Desktop version
✅ UI-oriented workflow
✅ Switch between i2v/FFLF or t2v in one workflow
✅ 4-8 step generation
✅ No spaghettification
✅ Comes with other utility workflows
‼️ The custom node ComfyUI-Swwan has been reported to cause problems in the workflow. Make sure to uninstall it; it conflicts with some of the switches in the workflow. Thanks to all the users who helped troubleshoot this.
‼️ Disable "Node 2.0" if you are using the latest ComfyUI version.
🔆 The latest ComfyUI version has memory offloading from VRAM to RAM during video generation, similar to block swap. (Block swap itself is currently not available for native nodes in the latest ComfyUI version.)

🍥 Click Here (CivitAI Article) for Model/LoRA download links + detailed guide

The video posted above includes an embedded workflow. (Download the video and drag it into ComfyUI.)
Merged videos will not load the main workflow.
The ComfyUI-VideoHelperSuite custom node is needed to open a workflow from a video.


  • 6/12/25 - Added 🟢 🎼 t2v/i2v/FFLF v1 + 🟢🎼 i2v Only (+Batch) v1.0

    • Added MMAudio to the main WF as a separate download.

    • Added/Updated 🟢🎼i2v Only (+Batch) v1.0 with MMAudio WF. ("🟢🎼i2v Only" downloads now contain WFs with and without MMAudio.)

    • Added MMAudio to Video 🛠️Post Processing WF.

    • Updated/Added Instructions for MMAudio installation/download/setup in (CivitAI Article).

    • Added Non-GGUF WF to all Downloads.

  • 3/12/25 - Added 🟢i2v Only (+Batch) v1.0 Workflow - (⚠️ i2v Only)

    • Capable of generating batch videos with different images and different dimensions or aspect ratios from a folder.

    • Does i2v with a single image input, or i2v with batch images from a folder.

    • With a single input image, i2v works as normal, and 🛠️Post Processing options can be edited after generation following the usage guide.

    • ⚠️ Must set up all 🛠️Post Processing options before generating i2v batch videos. (🛠️Post Processing options cannot be edited or changed once generated.)

  • 2/12/25 - Added 🔵Extend t2v/i2v V0.01 Beta Workflow (WIP - Not perfect)

    • Capable of generating a video and continuing it in segments.

    • ⚠️ t2v uses the i2v model to extend/continue the video. t2v LoRAs will not work, since i2v LoRAs would need to be passed in for continuation, which is not implemented yet, unless the t2v LoRAs are usable on both i2v & t2v.

    • ⚠️ More extensions/segments = more degradation of characters/faces in the video (less coherence across segments).

    • ⚠️ FFLF is removed from the workflow due to a conflict.

    • ⚠️ Only a single prompt/LoRA stack can be passed in, and durations are fixed at the moment.


⌨️Usage for 🟢Main t2v/i2v/FFLF v1:

  1. Select t2v or i2v/FFLF Mode.

    • t2v: Adjust the video resolution.

    • i2v: Drag and drop an image into the 1st image loader; choose to manually scale down the image, or disable scaling to use the original image dimensions.

    • FFLF: Enable i2v mode, enable FFLF, and drag and drop images into the 1st & 2nd image loaders; choose to manually scale down the images, or disable scaling to use the original image dimensions (the 1st image's dimensions will be used).

  2. Edit/Add 🍏Prompts, add ✨Loras if needed.

  3. Adjust ⌚Duration or just leave it at 5 seconds.

    • Set 🚶‍➡️Total Steps & 👞Split Steps. (4 Total / 2 Split for Fast, 6 Total / 3 Split for Good Quality, 8 Total / 4 Split for Higher Quality; see the sketch after these usage steps for how the split divides work between the two models.)

  4. Click on "New Fixed Random" in 🎲Seed Node.

  5. Generate Video (▷RUN).

  6. Repeat steps 4 & 5 until you get the desired video shown in 🎥Preview.

  7. While keeping the same 🎲Seed and without changing anything outside 🛠️Post Processing, change/edit the 🛠️Post Processing options and Generate Video (▷RUN) again as many times as needed; the run will skip through the KSampler.

  8. The generated video will be in your output folder - 🗂️ComfyUI/Output

🌀To start on a new video project, disable everything in 🛠️Post Processing and set 📺Final Video Mode to 🎥Preview.
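
For reference, below is a minimal Python sketch of how 🚶‍➡️Total Steps and 👞Split Steps are typically divided between Wan 2.2's high-noise and low-noise models. The function and variable names are assumptions for illustration only, not this workflow's actual node wiring.

```python
# Illustrative sketch only: how "Total Steps" and "Split Steps" commonly divide
# sampling between Wan 2.2's high-noise and low-noise models. Names here are
# assumptions, not the actual nodes or inputs used in this workflow.

def split_schedule(total_steps: int, split_steps: int):
    """Return the step ranges handled by each model."""
    high_noise_range = (0, split_steps)            # early steps: high-noise model
    low_noise_range = (split_steps, total_steps)   # remaining steps: low-noise model
    return high_noise_range, low_noise_range

for total, split in [(4, 2), (6, 3), (8, 4)]:
    high, low = split_schedule(total, split)
    print(f"{total} total steps -> high-noise {high}, low-noise {low}")
```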

💡If you have lower VRAM

  • Click on the "ComfyUI icon" for the menu

  • Go to "Settings"

  • On left bar, go to "Server-Config"

  • Scroll to "Memory"

  • Look for "VRAM management mode" option

  • Select "lowvram"

  • Restart ComfyUI


📱The Workflows include:

  • GGUF model loaders

  • Sage Attention

  • Lora Stackers

  • WanNAG (to strengthen the negative prompt when CFG is 1)

  • Auto input for FPS (based on Frame Interpolation and speed; see the sketch after this list)

  • Post Processing:

    1. Frame Interpolation

    2. Upscaler with Model

    3. Frame Trimming

    4. Video Speed adjustment

    5. Manual FPS adjustment (overrides the auto FPS input and speed adjustment)

    6. Video Sharpening

    7. Add Logo/Watermark

    8. Save Last Frame

    9. Frame Select

    10. MMAudio (WF with MMAudio)
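
To make the auto FPS input above concrete, here is a minimal sketch of the arithmetic involved, assuming Wan's 16 fps base rate and the 81-frames-per-5-seconds figure quoted in the stats below; the names are illustrative assumptions, not the workflow's actual node inputs.

```python
# Illustrative sketch of the "auto FPS" and frame-count arithmetic.
# Assumption: Wan generates at 16 fps, and output FPS scales with the
# interpolation multiplier and playback speed.

WAN_BASE_FPS = 16

def auto_fps(interpolation_multiplier: float, speed: float) -> float:
    """Output FPS so interpolated frames play back at the chosen speed."""
    return WAN_BASE_FPS * interpolation_multiplier * speed

def frames_for_duration(seconds: float) -> int:
    """Frame count for a clip of the given duration (e.g. 5 s -> 81 frames)."""
    return int(seconds * WAN_BASE_FPS) + 1

print(auto_fps(4, 1.0))          # 4x interpolation at normal speed -> 64.0 fps
print(auto_fps(4, 1.5))          # 4x interpolation at 1.5x speed   -> 96.0 fps
print(frames_for_duration(5))    # 81 frames, matching the 5-second stats below
```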

💽Download Files include:

  • Main i2v/FFLF/t2v Workflow

  • Postprocessing for video (non-interpolated video)

  • Videos Merger/Joiner

  • Simple Megapixel Calculator
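
The bundled calculator's internals are not reproduced here; the sketch below is a minimal stand-in for its arithmetic, assuming a binary (1024×1024) divisor, which matches the 0.66 MP figure quoted for 720*960 in the stats further down.

```python
# Minimal stand-in for the bundled "Simple Megapixel Calculator" utility.
# Assumption: megapixels use a binary divisor (1024 * 1024), which matches
# the 0.66 MP figure quoted for 720*960 in the stats below.

def megapixels(width: int, height: int, binary: bool = True) -> float:
    divisor = 1024 * 1024 if binary else 1_000_000
    return width * height / divisor

print(round(megapixels(720, 960), 2))    # ~0.66 MP (the resolution used in the stats)
print(round(megapixels(1440, 1920), 2))  # ~2.64 MP before scaling down
```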


🧩Custom Node:

(All Custom Nodes are available in Custom Node Manager)
Open the workflow, open the 🧩Manager, click "Install Missing Custom Nodes", select all and install, then restart ComfyUI.

  • ComfyUI-GGUF (manually search in the Custom Node Manager and install this if it does not show up in "Install Missing Custom Nodes")

  • rgthree-comfy

  • ComfyUI-Easy-Use

  • ComfyUI-KJNodes

  • ComfyUI-VideoHelperSuite

  • ComfyUI-essentials

  • ComfyUI-Frame-Interpolation

  • ComfyUI-mxToolkit

  • WhiteRabbit

  • ComfyUI-MMAudio (only for workflow that uses it)

🪞Extra Utility Custom Node: (not required)

  • ComfyUI-Crystools (adds a real-time graph in ComfyUI to monitor % usage of CPU, GPU, RAM, VRAM)


🚧 Progress on Video Extend is on hold.


🖥️ My Hardware Spec:
♠️ RTX 3090 Ti 24GB - 64GB RAM

📽️ My Video Generation Stats:
🚨 Sage Attention Enabled
🖼️ 720*960 (0.66 megapixels - Scaled down from 1440*1920)
🚶‍➡️ 8 Total Steps + 👞 4 Split Steps
💠 Per iteration/step ≈ 47-49 secs (5-sec video)
⏱️ 5 seconds video / 81 Frames ≈ 7.5-8 mins (455 - 480 secs)
🛠️ 💎Sharpen + 🦄Logo Overlay + 4x 🎞️Interpolation + 1.5x 📐Upscale ≈ 2mins (120 secs)
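
As a rough sanity check on the figures above, the per-step time multiplied by the step count accounts for most of the reported total; treating the remainder as loading/decoding overhead is an assumption, not a separate measurement.

```python
# Rough sanity check of the timing figures above (the overhead split is an
# assumption, not something measured separately).
steps = 8
per_step = (47, 49)                                   # seconds per step
sampling = tuple(s * steps for s in per_step)
print(sampling)                                       # (376, 392) s of pure sampling
total = (455, 480)                                    # reported total for a 5 s / 81-frame video
overhead = tuple(t - s for t, s in zip(total, sampling))
print(overhead)                                       # (79, 88) s left for everything else
```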


For 🎼MMAudio Reference:

🔗(NSFW) Dead-Simple MMAudio + RIFE Interpolation Setup for WAN 2.2 I2V
😀 SeoulSeeker (CivitAI)