Seamlessly Extend, Join, and Auto-Fill Existing Videos While Maintaining Motion - Wan 2.1 VACE
Update: This workflow also works with the new 14B VACE models, and you can swap out the T2V WAN for SkyReels V2 T2V if you want some of its cinematic fine-tuning. I also updated the Reference Images input to accept multiple images as a batch, in case that was another hidden feature not obvious to most people. Lastly, for those interested in additional options for extending videos (while preserving motion), I recently added video input + end frame to FramePack as well. It's another tool in the belt, so to speak, if you can't quite get the result you want with VACE (it's also based on Hunyuan, so it can look a bit higher quality in some cases):
https://github.com/lllyasviel/FramePack/pull/491
Update 2: The new CausVid 14B T2V V2 lora works here too and is recommended: it gives very good results in only 8 steps (about a 5x reduction in render time). I include the link to the lora in the model downloads list below. I don't advise using FusionX, because the other loras embedded in it will change the look.
This is a workflow I posted earlier on Reddit/Github:
https://www.reddit.com/r/StableDiffusion/comments/1k83h9e/seamlessly_extending_and_joining_existing_videos/
It exposes a somewhat understated feature of WAN VACE: temporal extension. It is blandly described as "first clip extension", but it can actually auto-fill pretty much any missing footage in a video, whether that's full frames missing between existing clips or masked-out content (faces, objects).
It's better than Image-to-Video / Start-End Frame because it maintains the motion from the existing footage (and also connects it to the motion in later clips).
Watch this video to see how the source video (left) and mask video (right) look. The missing footage (gray) appears in multiple places: gaps between clips, a missing face, etc., and all of it is filled in by VACE in one shot.
This is built on top of Kijai's WAN VACE workflow; I added the temporal extension part as a fourth grouping in the lower right (so credit to Kijai for the original workflow).
It takes two videos: your source video, with the missing frames/content filled in gray, and a black-and-white mask video (the missing gray content recolored to white, everything else black). I usually make the mask video by dropping the brightness on the original to something like -999 while recoloring the gray to white.
Make sure to keep it at about 5 seconds to match Wan's default output length (81 frames at 16 fps, or the equivalent frame count if the FPS is different). You can download VACE's example clip here for the exact length and gray color (#7F7F7F) to use on the source video: https://huggingface.co/datasets/ali-vilab/VACE-Benchmark/blob/main/assets/examples/firstframe/src_video.mp4
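If you'd rather script the source/mask preparation than do it in a video editor, here's a minimal sketch of the simple "extend the end of a clip" case using OpenCV. This is not part of the workflow; the file names, FPS, and frame count are illustrative assumptions, and more complex cases (gaps between clips, masked faces) follow the same gray/white convention.

```python
# Sketch only: pad an existing clip to 81 frames with gray placeholder frames
# and write a matching black/white mask video for VACE.
import cv2
import numpy as np

FPS = 16
TOTAL_FRAMES = 81                    # Wan's default ~5-second output length
GRAY = (127, 127, 127)               # VACE's placeholder color #7F7F7F (BGR)

cap = cv2.VideoCapture("existing_clip.mp4")   # hypothetical input path
frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(frame)
cap.release()

h, w = frames[0].shape[:2]
fourcc = cv2.VideoWriter_fourcc(*"mp4v")
src_out = cv2.VideoWriter("src_video.mp4", fourcc, FPS, (w, h))
mask_out = cv2.VideoWriter("mask_video.mp4", fourcc, FPS, (w, h))

for i in range(TOTAL_FRAMES):
    if i < len(frames):
        # Existing footage: keep the frame; mask is black (keep as-is)
        src_out.write(frames[i])
        mask_out.write(np.zeros((h, w, 3), dtype=np.uint8))
    else:
        # Missing footage: gray placeholder; mask is white (VACE fills this)
        src_out.write(np.full((h, w, 3), GRAY, dtype=np.uint8))
        mask_out.write(np.full((h, w, 3), 255, dtype=np.uint8))

src_out.release()
mask_out.release()
```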
In the workflow itself, I recommend setting Shift to 1 and CFG to around 2-3 so that it focuses primarily on smoothly connecting the existing footage. I found that higher values sometimes introduced artifacts.
Tips to maximize video quality and minimize loss of details or color-drifting:
Keep CFG 2-3 and Shift=1 to retain as much detail from the existing footage as possible.
Render at 1080p resolution to minimize color drift. CausVid helps reduce the render time by over 5x (8 steps instead of 50).
Use the Color Match node in ComfyUI on the MKL setting to reduce color drift (not always applicable if the scene changes a lot).
In post, correct the hue in a video editor by about 2-7 and desaturate slightly to counteract the drift.
When possible, start the scene with regular I2V (no color drift) and mask new changes in with VACE, feathering the mask to blend the pieces in and reuse as much of the drift-free I2V footage as possible (see the sketch after this list). Alternatively, extend in FramePack with video input or in SkyReels V2 to get a drift-free "skeleton" of the scene and then patch changes in with VACE.
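For the feathered blending mentioned above, here's a rough sketch of the idea in Python/OpenCV (illustrative only, not a node from the workflow; the function and parameter names are made up). It fades the VACE-patched region back over the original drift-free footage so only the masked area actually changes:

```python
# Sketch only: blend a VACE-patched frame over the original frame with a
# feathered (blurred) mask so the seam is soft and untouched areas keep
# the original colors.
import cv2
import numpy as np

def feather_blend(original, patched, mask, feather_px=25):
    # original, patched: HxWx3 uint8 frames; mask: HxW uint8, 255 where VACE
    # regenerated content, 0 where the original footage should be kept.
    soft = cv2.GaussianBlur(mask.astype(np.float32) / 255.0, (0, 0), sigmaX=feather_px)
    soft = soft[..., None]                      # broadcast over color channels
    out = patched.astype(np.float32) * soft + original.astype(np.float32) * (1.0 - soft)
    return out.astype(np.uint8)
```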
Models to download:
models/diffusion_models: Wan 2.1 T2V (Pick 1, match VACE's 14B/1.3B choice below):
14B FP16: https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/blob/main/split_files/diffusion_models/wan2.1_t2v_14B_fp16.safetensors
14B FP8: https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Wan2_1-T2V-14B_fp8_e4m3fn.safetensors
1.3B FP16: https://huggingface.co/IntervitensInc/Wan2.1-T2V-1.3B-FP16/blob/main/diffusion_pytorch_model.safetensors
1.3B BF16: https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Wan2_1-T2V-1_3B_bf16.safetensors
1.3B FP8: https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Wan2_1-T2V-1_3B_fp8_e4m3fn.safetensors
models/diffusion_models: WAN VACE (Pick 1, match Wan's 14B/1.3B above):
14B BF16: https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Wan2_1-VACE_module_14B_bf16.safetensors
14B FP8: https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Wan2_1-VACE_module_14B_fp8_e4m3fn.safetensors
1.3B BF16: https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Wan2_1-VACE_module_1_3B_bf16.safetensors
models/text_encoders: umt5-xxl-enc (Pick 1):
BF16: https://huggingface.co/Kijai/WanVideo_comfy/blob/main/umt5-xxl-enc-bf16.safetensors
FP8: https://huggingface.co/Kijai/WanVideo_comfy/blob/main/umt5-xxl-enc-fp8_e4m3fn.safetensors
models/vae: WAN 2.1 VAE (any version): https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Wan2_1_VAE_bf16.safetensors
models/loras: WAN CausVid V2 14B T2V (for 14B only): https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Wan21_CausVid_14B_T2V_lora_rank32_v2.safetensors
An additional video here shows what loading in the video inputs looks like.