NOTE: These workflows are licensed under the AGPLv3. Workflows I publish will be free, for all, forever.
https://www.gnu.org/licenses/agpl-3.0.en.html#license-text
TL;DR: The final result should be an 8-second, perfectly looped clip, built across 3 separate workflows.
Contained in the ZIP are three complementary workflows for progressively building a perfect loop using WAN 2.2 and WAN 2.1 VACE.
These workflows were designed through trial and error to give me the most consistent results when creating perfectly looped clips. The default settings are what work best for me, at a processing speed I find acceptable.
The process is as follows:
1. wan22-1clip-scene-KJ.json
   - Generate a WAN 2.2 I2V clip from a reference image.
   - Optionally extend the prompt using Qwen2.5-VL (requires a locally running Ollama server); a rough sketch of that call follows below.
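For anyone curious what the Ollama dependency involves, the prompt-extension step boils down to a single HTTP call against the local server. A minimal sketch follows; the model tag, port, and instruction text are placeholders for illustration, not values copied from the workflow:

```python
import base64
import requests

# Sketch of the optional prompt-extension step: send the reference image plus
# a short instruction to a locally running Ollama server and use the reply as
# the expanded I2V prompt.
OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port

with open("reference.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "model": "qwen2.5vl",  # assumed tag for a Qwen2.5-VL build pulled into Ollama
    "prompt": (
        "Describe this image as a detailed video-generation prompt: "
        "subject, motion, camera, lighting, and style."
    ),
    "images": [image_b64],
    "stream": False,
}

resp = requests.post(OLLAMA_URL, json=payload, timeout=120)
resp.raise_for_status()
extended_prompt = resp.json()["response"].strip()
print(extended_prompt)
```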
2. wan22-1clip-vace-KJ.json
   - Use the clip from step 1 in a V2V VACE workflow (WAN 2.1 for now):
     - The last 15 frames of clip 1 become the first 15 frames of the transition.
     - The first 15 frames of clip 1 become the last 15 frames of the transition.
     - 51 new frames are generated in between (see the frame-layout sketch after this list).
   - Optionally generate the prompt using Qwen2.5-VL (requires a locally running Ollama server).
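To make the frame bookkeeping concrete, here is a rough NumPy sketch of how the transition's control video could be laid out. The mid-gray placeholder frames and the binary mask are my assumptions about how VACE is typically fed, not a transcription of the workflow's nodes:

```python
import numpy as np

def build_transition_control(clip1: np.ndarray, head: int = 15, tail: int = 15, new: int = 51):
    """Lay out the control video for the looping transition:
    the last `head` frames of clip 1, then `new` placeholder frames for VACE
    to fill, then the first `tail` frames of clip 1 so the end meets the start."""
    h, w, c = clip1.shape[1:]  # clip1: (frames, H, W, C) uint8
    placeholder = np.full((new, h, w, c), 127, dtype=clip1.dtype)  # gray = "generate me"
    control = np.concatenate([clip1[-head:], placeholder, clip1[:tail]], axis=0)

    # Mask: 0 = keep the reference frames, 1 = let VACE generate new content.
    mask = np.zeros(head + new + tail, dtype=np.float32)
    mask[head:head + new] = 1.0
    return control, mask  # 15 + 51 + 15 = 81 frames total
```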
3. wan22-1clip-join.json
   - Join clip 1 and clip 2 (the transition).
   - Upscale to 720p.
   - Smooth the upscaled clips using WAN 2.2 TI2V 5B (absurdly fast, with good quality).
   - Interpolate to 60 fps using GIMM-VFI (swap in RIFE if you want more speed).
   - Color correct against the original reference image (a rough sketch of the idea follows below).

The final result should be an 8-second, perfectly looped clip.
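As a sanity check on the advertised length: if the joined loop keeps clip 1 in full plus only the 51 newly generated transition frames, and assuming the usual WAN output of 81 frames at 16 fps (both numbers are my assumption, not read from the workflows), the math lands right around 8 seconds:

```python
# Rough loop-length arithmetic, assuming an 81-frame clip 1 at 16 fps.
clip1_frames = 81                         # first workflow's I2V clip
new_frames = 51                           # frames VACE generated between the overlaps
loop_frames = clip1_frames + new_frames   # the 15-frame overlaps are not duplicated
seconds = loop_frames / 16                # duration before interpolation
print(loop_frames, "frames ~", round(seconds, 2), "s")  # 132 frames ~ 8.25 s
# GIMM-VFI then interpolates to 60 fps without changing the duration.
```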
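On the color-correction step: generated and upscaled frames can drift away from the reference's palette, and correcting against the reference image is essentially a statistics transfer. The per-channel mean/std sketch below is purely illustrative and not necessarily the method the workflow's nodes use:

```python
import numpy as np

def match_to_reference(frames: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Crude per-channel mean/std transfer: nudge the clip's color statistics
    toward the original reference image (frames: (F, H, W, C) uint8)."""
    f = frames.astype(np.float32)
    ref = reference.astype(np.float32)
    src_mean = f.mean(axis=(0, 1, 2), keepdims=True)
    src_std = f.std(axis=(0, 1, 2), keepdims=True) + 1e-6
    ref_mean = ref.mean(axis=(0, 1), keepdims=True)
    ref_std = ref.std(axis=(0, 1), keepdims=True)
    out = (f - src_mean) / src_std * ref_std + ref_mean
    return np.clip(out, 0, 255).astype(np.uint8)
```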
There are more notes in the workflows. Please drop a comment if you have questions. The workflows should work out of the box provided you have the required custom nodes, the latest ComfyUI, and PyTorch >= 2.7.1. Links to the models used are in the workflow notes.
I opted for KJ-based workflows because Native is slower for me. Pick model quants small enough to fit within your VRAM (or system RAM) while sampling; if memory isn't a constraint, choose Q8 for the best quality. Be wary of the ComfyUI-MultiGPU custom node: for me it's slower than Native, and both are slower than KJ with basic block swapping.