
FLF2V - Wan 2.2 video generation first frame to last frame with frame interpolation


Type

Workflows

Stats

111

0

Reviews

Published

Aug 31, 2025

Base Model

Wan Video 2.2 I2V-A14B

Hash

AutoV2
D0D1335EDE

A fork of this workflow:

https://civitai.com/models/1687498/wan-2221-i2vflf2v-2-workflows-merge-fusionx-lora-2-sampler-florence-caption-last-frame-color-match

Please use version 1.3; it makes more sense. It has a better workflow that users with less memory should appreciate.

Version 1.4 adds a feature that helps avoid reloading the models several times: it can generate several cache files first, then run the next part on all of them afterwards. However, make sure to keep track of what you are doing, otherwise you are going to get bugs. Faster, but it needs more discipline...

I made several typos in the instruction note included in the workflow. Sorry for the inconvenience, but it is a little too tedious to update a workflow here (I would need to submit the Nth "example dot png turns into a mighty Phoenix"), so here is a corrected version:


- Preparation:

Make sure you have all the needed files by enabling everything. If you are not using an SSD to store your files, you're likely wasting a lot of time.

If you run Part 1 and Part 2 separately, in this order, several times, the models will be loaded into memory over and over. It reduces the overall amount of RAM or VRAM needed, but the process is time-consuming. This method is useful when the alternative is a complete workflow failure, such as a crash near the end. I spend an additional 1 minute and 20 seconds per generation with this method, but I also save time by keeping everything in VRAM. For Wan 2.2 FLF2V, this is competitive; for Wan 2.2 T2V, it is much less so.

- First Part:

Select "Enable Part 1" and disable "Enable Part 2." Apply the usual settings. Run the workflow. It will only process the High CFG part. Once the first part of the workflow is finished, the .latent file will be saved to a cache directory. You will probably need to adapt the save path for this file.

- Second Part:

Select "Enable Part 2" and disable "Enable Part 1." Make sure the path to the latent file is correct. Run the workflow.
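Conceptually, the two-part split above comes down to serializing the intermediate latent after the High pass and reloading it for the Low pass. A minimal sketch of that idea (plain Python, not the actual ComfyUI nodes; the cache directory and file names are hypothetical examples):

```python
import os
import pickle

CACHE_DIR = "cache"  # adapt this path, just like the save path in the workflow

def save_latent(latent, name):
    """Part 1: persist the High-CFG result so Part 2 can resume from it."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    path = os.path.join(CACHE_DIR, name + ".latent")
    with open(path, "wb") as f:
        pickle.dump(latent, f)
    return path

def load_latent(path):
    """Part 2: reload the cached latent instead of rerunning Part 1."""
    with open(path, "rb") as f:
        return pickle.load(f)
```

This is why the path in Part 2 must match the save path from Part 1 exactly: the Low pass simply deserializes whatever file you point it at.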


If you are using a USB SSD drive, make sure it is plugged into your fastest USB port:

USB 4 > USB 3.2 > USB 3.1 > USB 3 > USB 2

It is advisable to test the behavior and the speed of the drive on the USB port you are using, since the time wasted is cumulative. If one of your USB ports tends to crash, you will also waste time (my USB 3.0 port is better than my USB 3.1).
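A crude way to rank your ports is to time a sequential write to a folder on the drive. A minimal sketch (the function name and sizes are my own; `os.fsync` forces the data to the device so the OS cache does not fake the result):

```python
import os
import tempfile
import time

def measure_write_speed(target_dir, size_mb=64):
    """Write size_mb of random data to target_dir and return MB/s."""
    data = os.urandom(1024 * 1024)  # 1 MB of random bytes
    path = os.path.join(target_dir, "speedtest.bin")
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(data)
        f.flush()
        os.fsync(f.fileno())        # flush to the device, not just the OS cache
    elapsed = time.perf_counter() - start
    os.remove(path)
    return size_mb / elapsed        # MB/s
```

Run it once per port with the drive plugged into that port, and keep the port with the highest stable number.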

Use the extra_model_paths.yaml file!
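This file tells ComfyUI where to find models stored outside its own folder, so you don't have to duplicate large checkpoints. A minimal sketch (the section name and paths below are examples only; see the extra_model_paths.yaml.example shipped with ComfyUI for the full format):

```yaml
# Example only; adapt base_path to wherever your models actually live.
my_models:
    base_path: D:/ai/models/
    checkpoints: checkpoints/
    diffusion_models: diffusion_models/
    loras: loras/
    vae: vae/
    clip_vision: clip_vision/
```

Pointing this at your SSD keeps model loading fast even when ComfyUI itself is installed elsewhere.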

Sampler: lcm (or euler) is faster.

dpmpp_2m is slower, but I usually like the result better.

(I'll publish a more detailed analysis on my website, if I find the time, but this is a good summary).

Can you run both parts at the same time?

Answer: no, but I am not sure why.

Please note that the "Video Combine" node generates huge files, around 250 MB for a couple of seconds. If you instead enable the "save output" option of the "save raw files" node, it will typically generate files of about 5 MB. The files from "Video Combine" are generated from RAM and then loaded back into RAM (they play in the interface), and I am sure there is a more efficient way to combine frames (there is a GPU-accelerated version of ffmpeg). You may find this feature useful (everything in one place), but if you have to tune your prompt, disable the "Video Combine" node (and the related nodes), since it is RAM- and time-consuming.
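If you save raw frames instead, you can encode them yourself with ffmpeg afterwards. A hedged sketch of assembling such a command (`h264_nvenc` is ffmpeg's NVIDIA GPU encoder, `libx264` the CPU fallback; the frame pattern and paths are hypothetical examples):

```python
def ffmpeg_command(frame_pattern, fps, out_path, gpu=True):
    """Build an ffmpeg command that encodes numbered frames to an MP4."""
    codec = "h264_nvenc" if gpu else "libx264"
    return ["ffmpeg", "-y",
            "-framerate", str(fps),
            "-i", frame_pattern,       # e.g. "output/frame_%05d.png"
            "-c:v", codec,
            "-pix_fmt", "yuv420p",     # widely compatible pixel format
            out_path]
```

You would run it with something like `subprocess.run(ffmpeg_command("output/frame_%05d.png", 16, "clip.mp4"), check=True)`, which keeps the heavy encoding out of the workflow's RAM.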

I use 4 High steps out of a total of 10 steps.

This is usually good; however, I find the videos are more creative with fewer High steps and more Low steps, like 2-14 or 3-12 (my perception).
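To make the split explicit: with two KSampler (Advanced) nodes, the high-noise model denoises the first few steps and the low-noise model finishes the rest. A tiny sketch of how the counts above map to step ranges (function name is mine):

```python
def step_ranges(total_steps, high_steps):
    """Return (start, end) step ranges for the High and Low samplers."""
    high = (0, high_steps)            # handled by the high-noise model
    low = (high_steps, total_steps)   # handled by the low-noise model
    return high, low
```

For the 4/10 split, `step_ranges(10, 4)` gives High steps 0-4 and Low steps 4-10; a 2-14 split would be `step_ranges(16, 2)`.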


Please refrain from posting NSFW content here (mildly suggestive content will be allowed); I would like to link this page from mainstream websites.