fatberg_slim Image 2 Video Workflows

Type: Workflows

Stats: 3,099 · 0 Reviews

Published: Nov 22, 2025

Base Model: Wan Video 2.2 I2V-A14B

Hash: AutoV2 F05D3A1463

🎉 Update: Version 2.0

Upgraded with redesigned systems for image loading and overall accessibility.

🌟 What's New

🖼️ Image Loading System

The workflow now features a flexible dual-mode image loading system:

  • Batch Image Loader: Automatically processes entire folders of images sequentially

  • Single Image Loader: Traditional manual image upload

  • Smart Switch Node: Seamlessly toggle between both modes without rewiring

How to use Batch Mode:

  1. Set the PrimitiveInt value to 0 with control set to increment

  2. Specify your source folder path in node LoadImagesFromFolderKJ

  3. Run the workflow once per image - it auto-increments and loads the next image each time

  4. Perfect for processing 10, 20, or 100+ images without manual intervention

To switch back to single image mode: Use the Fast Groups Bypasser to disable the batch loader and enable the standard Load Image node. No need for manual rewiring.
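The increment mechanism above can be sketched in plain Python. This is a hypothetical stand-in for what the PrimitiveInt counter plus LoadImagesFromFolderKJ do together, not their actual code: a run index that starts at 0 selects the next image in the sorted folder on each run.

```python
# Minimal sketch of batch mode: an auto-incrementing run index picks
# the next image from the folder on every workflow execution.
# (Illustrative stand-in, not the LoadImagesFromFolderKJ implementation.)
import os

def next_image_path(folder: str, run_index: int) -> str:
    """Return the image the workflow would load on run number `run_index`."""
    images = sorted(
        f for f in os.listdir(folder)
        if f.lower().endswith((".png", ".jpg", ".jpeg", ".webp"))
    )
    # Wrap around so extra runs don't fail once every image has been used.
    return os.path.join(folder, images[run_index % len(images)])
```

Running the workflow N times therefore walks through the folder in alphabetical order, one image per run.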

🎨 Upscaling Approach

After testing various upscaling methods, I've kept the existing upscaling approach:

  • Kept the reliable ImageScaleBy node with Lanczos

  • 2.5x upscaling factor maintained

  • Why Lanczos? Testing showed that AI upscaling models like RealESRGAN can over-sharpen video frames, creating an artificial look. Lanczos provides a more natural, balanced result that's better suited for my video content.
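Outside of ComfyUI, the same 2.5x Lanczos step looks like a plain Pillow resize (a sketch of what the ImageScaleBy node does per frame, not its actual code):

```python
# Per-frame 2.5x Lanczos upscale with Pillow -- the classic resampling
# filter used here instead of an AI upscaler, to avoid over-sharpening.
from PIL import Image

def upscale_lanczos(frame: Image.Image, factor: float = 2.5) -> Image.Image:
    w, h = frame.size
    return frame.resize((round(w * factor), round(h * factor)), Image.LANCZOS)
```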

⚡ Optimized Speed LoRAs

Updated for better performance with the latest distillation models:

High Noise LoRA changed:

  • OLD: Wan2.2-Lightning_I2V-A14B-4steps-lora_HIGH_fp16 @ 0.5

  • NEW: Wan_2_2_I2V_A14B_HIGH_lightx2v_MoE_distill_lora_rank_64_bf16 @ 1.5

Important: If you're using SmoothMix Wan 2.2, disable the Speed LoRAs node - SmoothMix doesn't require speed LoRAs.
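For context on what the strength number means, here is the usual LoRA merge in miniature (illustrative math, not the loader's code): the low-rank update B @ A is scaled by the strength before being added to the base weight, so 1.5 pushes the model noticeably harder toward the LoRA than the old 0.5 did.

```python
# The standard LoRA merge: W' = W + strength * (B @ A).
# Sketch only -- ComfyUI applies this per-layer inside the model loader.
import numpy as np

def apply_lora(weight: np.ndarray, A: np.ndarray, B: np.ndarray,
               strength: float) -> np.ndarray:
    """Add the scaled low-rank update to a base weight matrix."""
    return weight + strength * (B @ A)
```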

📚 Enhanced Documentation

Added comprehensive new guide notes directly in the workflow:

  1. "Load Wan 2.2 Models" - Explains GGUF option for diffusion models

  2. "SmoothMix Wan 2.2" - Critical reminder about Speed LoRAs compatibility

  3. "Batch Image Loader" - Complete setup instructions with examples

  4. "Dynamic Prompts Text Box" - Improved guide with alternative setup options

All notes use proper markdown formatting for better readability!

🔧 Technical Changes

GGUF Support Prepared

Two new UnetLoaderGGUF nodes added (disabled by default) for optional GGUF quantized models:

  • High Noise: Wan2.2-I2V-A14B-HighNoise-Q8_0.gguf

  • Low Noise: Wan2.2-I2V-A14B-LowNoise-Q8_0.gguf

Enable these if you want to use GGUF models instead of the standard safetensors.
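Why GGUF files are smaller, in a nutshell: Q8_0 stores weights as int8 values in blocks of 32, each block carrying one float scale, and dequantizes them as `scale * q` at load time. The sketch below follows the GGUF Q8_0 block convention for illustration; it is not the UnetLoaderGGUF implementation.

```python
# Q8_0 dequantization sketch: each block of 32 int8 quants shares one
# float scale; the full-precision weight is scale * quant.
import numpy as np

def dequantize_q8_0(quants: np.ndarray, scales: np.ndarray) -> np.ndarray:
    """quants: (n_blocks, 32) int8, scales: (n_blocks,) float -> flat weights."""
    return (quants.astype(np.float32) * scales[:, None]).reshape(-1)
```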

Nodes Added:

  • Any Switch (rgthree) - Intelligent image source switching

  • UnetLoaderGGUF (x2) - GGUF model support

Nodes Removed:

  • PatchSageAttentionKJ (x2)

  • ModelPatchTorchSettings (x2)

Layout Improvements:

  • Better node organization for clearer workflow visualization

  • New groups added: "Load Image" and "Batch Image Loader"

  • Reworked group layout: groups can now only be enabled and disabled where it actually makes sense

⚠️ Migration Notes

If you're upgrading from v1.0:

  • Speed LoRA settings have changed - if you customized these, please review

  • Batch loader can now be disabled via Fast Groups Bypasser

  • All your existing prompts and LoRAs will work without changes

🎯 Recommended Workflow

For Batch Processing:

  1. Prepare a folder with your input images

  2. Set PrimitiveInt to 0, control to "increment"

  3. Enter folder path in LoadImagesFromFolderKJ node

  4. Set your dimensions (portrait or landscape); don't forget to set them in the Wan Image to Video node as well!

  5. Run workflow N times for N images - cool stuff.

For Single Images:

  1. Use Fast Groups Bypasser to disable batch loader

  2. Enable standard Load Image node

  3. Upload your image manually

  4. Run workflow as usual

🙏 Credits & Thanks

  • Thanks to you for supporting my endeavor on Civitai. It's just so much fun making videos for you and reading your comments. You're awesome.

  • Thanks to all the talented and knowledgeable LoRA creators. Without your work I wouldn't be able to do any of this.

Enjoy the new workflow! Let me know if you encounter any issues or have suggestions for v2.1! 🚀


Release notes v1.0:

My ComfyUI Image 2 Video Workflows

I’ve been asked a few times about my workflow, so here it is.

Some people had issues loading the workflow from my videos, so I decided to upload them directly.

These are the setups I use to create my I2V videos.

You’ll find notes inside the workflows explaining what some of the nodes do.

You’ll probably need SageAttention and Triton installed.

It might still work without them if you rewire a few nodes. I left a note about that in the workflow, but I can’t guarantee it’ll run properly.

I didn’t build these workflows completely from scratch. I started with an existing one (don't know which one exactly) and just added whatever seemed useful for my setup.

I’m not an expert, so please keep in mind that I can only offer limited support if something doesn’t work right.


A Little Disclaimer

Before you ask - there’s no magic combination of settings I’m using to create my videos.

It’s honestly more trial and error than you’d expect. Sometimes I let my PC run overnight and wake up to 40 clips…

Out of those, maybe 2-3 are worth keeping. The rest are either hilarious, nightmare fuel, or just plain trash.

So don’t be discouraged if your first results look weird. That’s part of the fun.


Missing Files?

If you get a message about missing files when loading the workflow, don’t panic.

You can usually find those files just by googling their exact file names and downloading them into the matching folders inside your ComfyUI installation. Missing custom nodes can be installed via ComfyUI Manager.

Please don’t ask me where to get the files — I can’t provide help with that.


About “missing unet/clip” Warnings

You might see messages like this when running the workflow:

`clip missing: ['encoder.block.0.layer.0.SelfAttention.q.scale_weight', ...]`

That’s normal. It just means the checkpoint you’re using contains extra parameters (e.g. from a slightly different CLIP/T5 variant or a weight-normed build) that don’t have a 1:1 spot in your current text encoder/UNet. ComfyUI logs them as “missing,” but the model still loads and runs fine. If your outputs look normal, you can safely ignore these messages.


The Workflows

There are two versions:

  1. I2V WAN MoEKsampler

  2. I2V WAN Ksampler

I mainly use the WAN MoEKsampler workflow.

If you want to know exactly what it does, check out the GitHub page:

👉 ComfyUI-WanMoeKSampler

In short: it automatically splits the steps between the two samplers based on the sigma values of the noise schedule.

So you don’t have to do any manual splitting. Just set your steps and hit Run.

If you can’t or don’t want to use the WAN MoEKsampler, there’s also a version with the standard KSampler Advanced.

That one works the same way, except you’ll need to handle the step splitting yourself.
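If you do split manually, the idea is simple enough to sketch: walk the scheduler's sigma list and find the first step where sigma drops below the expert boundary; the high-noise model handles the steps before that index, the low-noise model the rest. The boundary value 0.9 below is an assumption for illustration only; check the ComfyUI-WanMoeKSampler repo for the value it actually uses.

```python
# Sigma-based split sketch: return the step index where control passes
# from the high-noise model to the low-noise model.
# Boundary value is a hypothetical placeholder, not a confirmed constant.
def split_step(sigmas: list[float], boundary: float = 0.9) -> int:
    for i, s in enumerate(sigmas):
        if s < boundary:
            return i
    return len(sigmas)
```

With KSampler Advanced you would then give the high-noise sampler steps 0 to that index, and the low-noise sampler the remaining steps.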


Output Info

Both workflows:

  • Save the last frame after VAE decode, before any upscaling — this gives you a clean base image for the next run.

  • Export both a 16 fps version and an upscaled + interpolated 32 fps version.

Just make sure to set your save paths on those nodes before running.
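For the curious, "interpolated 32 fps" just means in-between frames are synthesized so the 16 fps clip plays back at double the rate. The real interpolation node uses a learned model; the naive midpoint average below only illustrates the frame-count arithmetic (N frames become 2N - 1).

```python
# Naive frame interpolation sketch: insert one midpoint frame between
# each pair of originals. Real nodes use a learned interpolator (e.g.
# RIFE); a pixel average stands in here purely for illustration.
import numpy as np

def interpolate_double(frames: list[np.ndarray]) -> list[np.ndarray]:
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        out.append((a.astype(np.float32) + b) / 2)  # midpoint frame
    out.append(frames[-1])
    return out  # len == 2 * len(frames) - 1
```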