Seamlessly Extend, Join, and Auto-Fill Existing Videos While Maintaining Motion - Wan VACE (2.1 & 2.2)

Updated: Aug 23, 2025

Tags: tool, wan2.1, video extension, vace

Type: Workflows
Stats: 270 · 0 reviews
Published: Aug 20, 2025
Base Model: Wan Video 2.2 T2V-A14B
Hash: AutoV2 524746D277

pftq

Update 2025/08/19: Added a variation for Wan 2.2, which largely works if you use the wan2.2_t2v_low_noise_14B file in the Model Loader node and has a much more photorealistic look. Wan 2.1 still seems better for LoRAs and a more neutral look, though.


This is a workflow I posted earlier on Reddit/Github:
https://www.reddit.com/r/StableDiffusion/comments/1k83h9e/seamlessly_extending_and_joining_existing_videos/

It exposes a somewhat understated feature of WAN VACE: temporal extension. It is underwhelmingly described as "first clip extension", but it can actually auto-fill almost any missing footage in a video, whether that's full frames missing between existing clips or masked-out content (faces, objects).

It's better than Image-to-Video / Start-End Frame because it maintains the motion from the existing footage (and also connects it to the motion in later clips).

Watch this video to see how the source video (left) and mask video (right) look. The missing footage (gray) appears in multiple places (missing frames, a masked-out face, etc.), and all of it is filled in by VACE in one shot.

This is built on top of Kijai's WAN VACE workflow; I added the temporal extension part as a 4th grouping in the lower right (so credits to Kijai for the original workflow).

It takes in two videos: your source video with the missing frames/content in gray, and a black-and-white mask video (the missing gray content recolored to white). I usually make the mask video by setting brightness to -999 (or something to that effect) on the original while recoloring the gray to white.
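The mask step above can also be done per frame in code. As a minimal sketch (not part of the workflow itself, and the function name and tolerance are my own choices): flag every pixel close to VACE's placeholder gray as white, everything else as black.

```python
import numpy as np

# VACE's placeholder gray (#7F7F7F); a small tolerance absorbs compression noise
GRAY = np.array([0x7F, 0x7F, 0x7F], dtype=np.int16)

def mask_from_frame(frame: np.ndarray, tol: int = 8) -> np.ndarray:
    """Build one black/white mask frame from one source frame.

    frame: HxWx3 uint8 RGB array. Returns an HxWx3 uint8 array that is
    white (255) where the source is the gray placeholder, black elsewhere.
    """
    missing = np.all(np.abs(frame.astype(np.int16) - GRAY) <= tol, axis=-1)
    mask = np.where(missing, 255, 0).astype(np.uint8)
    return np.repeat(mask[..., None], 3, axis=-1)  # 3-channel so it saves as video
```

Run this over every frame and re-encode, and you get the same result as the brightness-to-black trick, without having to eyeball the recoloring.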

Make sure to keep the clip at about 5 seconds to match Wan's default output length (81 frames at 16 fps, or the equivalent if your FPS is different). You can download VACE's example clip here for the exact length and gray color (#7F7F7F) to use on the source video: https://huggingface.co/datasets/ali-vilab/VACE-Benchmark/blob/main/assets/examples/firstframe/src_video.mp4
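To translate "81 frames at 16 fps" to other frame rates, a small helper can compute the nearest valid length. This assumes (as with Wan 2.1's 81-frame default) that the frame count should be of the form 4n+1; the function name is my own.

```python
def wan_frame_count(fps: float, seconds: float = 5.0) -> int:
    """Nearest frame count to the target duration that fits the
    assumed 4n+1 constraint (81 frames at the default 16 fps)."""
    n = round(seconds * fps)
    return 4 * round((n - 1) / 4) + 1
```

For example, a 24 fps source would be trimmed/padded to 121 frames, and a 30 fps source to 149.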

In the workflow itself, I recommend setting Shift to 1 and CFG around 2-3 so that it primarily focuses on smoothly connecting the existing footage. I found that having higher numbers introduced artifacts sometimes.

Tips to maximize video quality and minimize loss of details or color-drifting:

  • Keep CFG at 2-3 and Shift at 1 to retain as much detail from the existing footage as possible.

  • Render at 1080p resolution to minimize color drift. CausVid reduces the render time by over 5x (8 steps instead of 50).

  • Use the Color Match node in ComfyUI on the MKL setting to reduce the drift (not always applicable if the scene changes a lot).

  • In your video editor, post-correct the hue by about 2-7 and desaturate a little to counteract the drift.

  • When possible, start the scene with regular I2V (no color drift) and mask new changes in with VACE, feathering the mask to blend the pieces and reuse as much of the drift-free I2V footage as possible. Alternatively, extend in FramePack (with Video Input) or SkyReels V2 to get a "skeleton" of the scene without color drift, then patch changes in with VACE.
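The feathering mentioned in the last tip just means softening the hard mask edge so the VACE patch fades into the surrounding footage. A minimal numpy-only sketch (my own helper, using repeated 3x3 box blurs rather than any particular ComfyUI node):

```python
import numpy as np

def feather(mask: np.ndarray, radius: int = 2) -> np.ndarray:
    """Soften a binary mask (values 0/1) into a 0..1 blend weight map
    by repeated 3x3 box blurring; larger radius = wider fade."""
    m = mask.astype(np.float32)
    h, w = m.shape
    for _ in range(radius):
        p = np.pad(m, 1, mode="edge")
        # average of the 3x3 neighborhood around each pixel
        m = sum(p[i:i + h, j:j + w] for i in (0, 1, 2) for j in (0, 1, 2)) / 9.0
    return m
```

Compositing with `out = weight * vace_patch + (1 - weight) * original` then blends the patch in instead of cutting it in with a hard edge.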

Models to download:

An additional video here shows what it looks like loading in the video inputs.