Automated Image to Video with Wan 2.1 and Interpolation

Type: Workflows

Published: Jul 2, 2025
Base Model: Wan Video 14B i2v 480p
Hash (AutoV2): 3086A259DC

This workflow is based on https://github.com/kijai/ComfyUI-WanVideoWrapper

You can find all the required models on that GitHub page.

Additional LoRA for the Lightning version (see the download sketch below the model links):
https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Wan21_T2V_14B_lightx2v_cfg_step_distill_lora_rank32.safetensors

https://huggingface.co/alibaba-pai/Wan2.1-Fun-Reward-LoRAs/blob/main/Wan2.1-Fun-14B-InP-MPS.safetensors

https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Wan21_AccVid_I2V_480P_14B_lora_rank32_fp16.safetensors

A fun image-to-video model that's a bit different.

https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Wan2_1-Wan-I2V-MAGREF-14B_fp8_e4m3fn.safetensors
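If you would rather fetch these files from a script than through the browser, here is a minimal download sketch using huggingface_hub. It is not part of the workflow itself, and the destination folders are assumptions based on a standard ComfyUI layout, so adjust them to your install.

```python
# Minimal sketch for downloading the LoRAs and the MAGREF checkpoint linked above.
# pip install huggingface_hub
from huggingface_hub import hf_hub_download

LORA_DIR = "ComfyUI/models/loras"                  # assumed ComfyUI LoRA folder
DIFFUSION_DIR = "ComfyUI/models/diffusion_models"  # assumed folder for the MAGREF checkpoint

loras = [
    ("Kijai/WanVideo_comfy", "Wan21_T2V_14B_lightx2v_cfg_step_distill_lora_rank32.safetensors"),
    ("alibaba-pai/Wan2.1-Fun-Reward-LoRAs", "Wan2.1-Fun-14B-InP-MPS.safetensors"),
    ("Kijai/WanVideo_comfy", "Wan21_AccVid_I2V_480P_14B_lora_rank32_fp16.safetensors"),
]

for repo_id, filename in loras:
    # Files land directly in the LoRA folder so the LoRA loader nodes can find them.
    hf_hub_download(repo_id=repo_id, filename=filename, local_dir=LORA_DIR)

# The alternative MAGREF image-to-video model mentioned above.
hf_hub_download(
    repo_id="Kijai/WanVideo_comfy",
    filename="Wan2_1-Wan-I2V-MAGREF-14B_fp8_e4m3fn.safetensors",
    local_dir=DIFFUSION_DIR,
)
```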

There are notes in the workflow to guide you through how it works.

NOTE: There is a bug in the newest version of Plush that affects the prompt enhancer. Some people have had luck uninstalling and reinstalling the custom node; others had to downgrade the extension to an older version: https://github.com/glibsonoran/Plush-for-ComfyUI/tree/cb3c4777b54fc212770b2d91901e3a85d04e12d6
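If you need to do that downgrade, here is a minimal sketch (not from the workflow author) that pins the custom node to the commit linked above; the install path is an assumption based on a standard ComfyUI layout, so adjust it to your setup.

```python
# Pin Plush-for-ComfyUI to the older, known-good commit linked above.
# The path below is an assumed default install location; change it if yours differs.
import subprocess
from pathlib import Path

plush_dir = Path("ComfyUI/custom_nodes/Plush-for-ComfyUI")  # assumed install path
pinned_commit = "cb3c4777b54fc212770b2d91901e3a85d04e12d6"  # commit from the link above

# Make sure the commit exists locally, then check it out (a detached HEAD is fine here).
subprocess.run(["git", "-C", str(plush_dir), "fetch", "--all"], check=True)
subprocess.run(["git", "-C", str(plush_dir), "checkout", pinned_commit], check=True)
```

Restart ComfyUI afterwards so the older version of the node is actually loaded.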

Updated workflow video.

Otherwise, please check out my tutorial video for help using the workflow.

Depending on the resolution and frame count, the workflow will run on GPUs with 16GB of VRAM or less. You can also increase the block swap value to offload more of the model into system RAM instead of VRAM.