Wan 2.2 Video Subgraph: all your video creation modes in one simple, easy-to-use tool!
Image to Video (start image only)
Text to Video (no input images)
Image to Image Video (start and end images)
Video to Image (end image only)
Runs with Wan 2.2 14B GGUF models and runs easily on a 12 GB GPU.
BASIC USAGE
The subgraph is designed to make video generation easy. Connect an image to the start image input, the end image input, both, or neither. Adjust parameters as needed (described below), enter your prompt and an optional negative prompt, and run it.
INPUTS
start_image, end_image: Connect your loaded or generated image(s) here. Make sure the ones you want to use are enabled in Parameters.
PARAMETERS
start_image_enable, end_image_enable: For an input image to be used, its respective toggle must also be enabled (true). Any disconnected input image, or one that passes a null, is automatically treated as disabled regardless of the toggle. Disabled images are ignored entirely during generation. These switches let you easily toggle generation modes without having to disconnect the image loaders.
Mode     start_image   end_image
I2V      Enabled       Disabled
I2V2I    Enabled       Enabled
V2I      Disabled      Enabled
T2V      Disabled      Disabled

duration_seconds: Output video length in seconds. No need to manually calculate total frames. That's what we have computers for.
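The mode selection described above can be sketched as a small helper. This is illustrative only (the function name and the string mode labels are assumptions, not the subgraph's internals), but it mirrors the stated rule: an input counts only if it is both connected and enabled.

```python
def resolve_mode(start_image, end_image, start_enabled, end_enabled):
    """Pick the generation mode from the image inputs and their toggles.

    An input is active only if an image is connected (not None)
    AND its enable toggle is true, matching the subgraph's behavior.
    """
    start_active = start_image is not None and start_enabled
    end_active = end_image is not None and end_enabled
    if start_active and end_active:
        return "I2V2I"  # start and end images
    if start_active:
        return "I2V"    # start image only
    if end_active:
        return "V2I"    # end image only
    return "T2V"        # text only
```

So, for example, a connected start image with its toggle on and no end image resolves to I2V, while flipping both toggles off gives plain T2V without unplugging anything.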
fps: Frames Per Second. Wan 2.2 generally outputs at 16 fps, though sometimes 24 fps. Changing this does not change how the video is generated, only how fast it plays back. You can easily alter a saved video's fps later if needed.
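As a sketch of the frame math the subgraph handles for you: frame count is roughly duration × fps, and Wan-style video models typically expect a length of the form 4k + 1 (e.g. 81 frames for 5 s at 16 fps). The helper name and the 4k + 1 snapping are assumptions for illustration, not the subgraph's exact code.

```python
def frame_count(duration_seconds: float, fps: int = 16) -> int:
    """Convert a duration to a frame count, snapped to the 4k + 1
    lengths Wan-style models typically require (an assumption here)."""
    raw = round(duration_seconds * fps)
    # Snap to the nearest valid 4k + 1 length, minimum 1 frame.
    k = max(0, round((raw - 1) / 4))
    return 4 * k + 1

# e.g. frame_count(5, 16) -> 81
```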
video_size: The video will be size x size, adjusted for aspect_ratio, keeping the total pixel count as close to size ^ 2 as it can. A video_size of 512 is therefore about 1/4 megapixel (MP).
aspect_ratio: If you're using a start or end image, select its aspect ratio here so the output matches your input. You can choose a different aspect_ratio, but the image will be cropped to fit. For T2V you will likely want to select a specific aspect_ratio. A connected but disabled image can still be used for its aspect ratio.
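The video_size/aspect_ratio interaction can be sketched as follows: solve for width and height so that width/height matches the aspect ratio while width × height stays near size². Rounding to multiples of 16 is an assumption about the model's latent-size requirement, and the function name is illustrative.

```python
import math

def video_dimensions(size: int, aspect_ratio: float, multiple: int = 16):
    """Compute (width, height) with width/height ~= aspect_ratio and
    width * height ~= size**2, snapped to a multiple of `multiple`
    (16 is an assumed latent-size constraint, not from the subgraph)."""
    height = math.sqrt(size * size / aspect_ratio)
    width = height * aspect_ratio

    def snap(v):
        return max(multiple, round(v / multiple) * multiple)

    return snap(width), snap(height)

# e.g. video_dimensions(512, 16 / 9) -> (688, 384), ~1/4 MP total
```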
video_steps: The Lightning LoRAs are designed for 4 steps. That gives okay quality in many cases and is the fastest. I generally use 8 for better quality that's not too slow; 10 or 12 may help with more complex or busy scenes.
video_swap_%: The point where generation switches from the high-noise model (placement, motion) to the low-noise model (details). 50% is a good default.
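The handoff point is simply a fraction of your total steps. A minimal sketch (function name is hypothetical):

```python
def swap_step(total_steps: int, swap_percent: float) -> int:
    """Step index at which sampling hands off from the high-noise
    model to the low-noise model (illustrative helper)."""
    return round(total_steps * swap_percent / 100)

# e.g. 8 steps at 50% -> handoff after step 4
```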
cfg: The usual CFG scale. 1.0 is the fastest but may ignore your negative prompt; it still works well.
sigma_shift: 5.0 seems a safe setting when using the Lightning Loras.
seed, control_after_generate: Standard seed settings.
lightning_lora_enable: True to use the Lightning LoRAs, false to bypass them. If you disable them, you will likely need to raise video_steps to at least 20, cfg to 3.5, and sigma_shift to 8.0 as starting points. However, I haven't had much luck getting good results with them disabled.
clip_vision_enable: This provides the model with additional information about your image(s). It was more necessary in Wan 2.1 but is still usable here. Any disabled or missing image is automatically ignored for this (so it won't add noise), but you can force it off here if you wish. Disabling it may also save a little VRAM when running on tighter resources.
