
Step-by-Step Guide Series:
ComfyUI - IMG to VIDEO Workflow

This article accompanies this workflow: link

Foreword:

English is not my mother tongue, so I apologize for any errors. Do not hesitate to send me messages if you find any.

This guide is intended to be as simple as possible, and certain terms will be simplified.

Workflow description:

The aim of this workflow is to generate a video from an existing image, all in a single, simple interface.

Prerequisites:

If you are on Windows, you can use my script to download and install all the prerequisites: link

  • ComfyUI

Models:

Custom Nodes:

Don't forget to close the workflow and open it again once the nodes have been installed.


Usage:

The workflow is composed of 4 main parts:

  • Configuration: where you define what you want,

  • Files: what is required for the workflow to operate,

  • LoRA: optional additional files that influence the final result,

  • Output: video display and saving.

And two optional parts:

  • Upscale: allows you to increase the video resolution,

  • Interpolation: allows you to generate intermediate frames for smoother motion.

Configuration:

Write what you want in the “Positive” node:

Write what you don't want in the “Negative” node:

Select the image format:

The larger it is, the better the quality, but the longer the generation time and the greater the VRAM required.

Choose a number of steps:

I recommend between 15 and 30. The higher the number, the better the quality, but the longer it takes to generate the video.

Choose a number of frames:

A video is made up of a series of images played one after another. Each image is called a frame. So the more frames you add, the longer the video.
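The relationship between frame count and clip length can be sketched as simple arithmetic (the frame rate is whatever you set later in the Output node; the helper name here is my own):

```python
def video_duration_seconds(num_frames: int, fps: float) -> float:
    """Clip length in seconds: total frames divided by the playback frame rate."""
    return num_frames / fps

# e.g. 48 frames played back at 16 fps give a 3-second clip
print(video_duration_seconds(48, 16))  # → 3.0
```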

Choose the guidance level:

I recommend starting at 6. The lower the number, the more freedom you leave the model. The higher the number, the more the result will resemble what you “strictly” asked for.

Choose the TeaCache coefficients:

This saves a lot of time on generation. The higher the coefficients, the faster it is, but the greater the risk of quality loss.

Recommended settings:

  • for 480P: 0.13 | 0.19 | 0.26

  • for 720P: 0.18 | 0.20 | 0.30
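The recommended settings above can be written as a small lookup table. This is only an illustrative sketch (the table and helper name are mine, not part of the workflow); the actual values go into the "Apply Tea Cache" node:

```python
# Recommended TeaCache coefficient triples from this guide,
# keyed by model resolution (illustrative helper, not part of the workflow).
TEACACHE_COEFFS = {
    "480P": (0.13, 0.19, 0.26),
    "720P": (0.18, 0.20, 0.30),
}

def teacache_coeffs(resolution: str) -> tuple:
    """Return the coefficient triple, ordered from slowest/safest to fastest."""
    return TEACACHE_COEFFS[resolution]
```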

Choose a shift level:

This allows you to slow down or speed up the overall animation. The default value is 8.

Choose whether to enable Sage Attention:

Installing this option is quite complex; you can read my dedicated guide here. If you don't know what it is, don't enable it.

Choose a sampler and a scheduler:

If you don't know what these are, don't touch them.

Define a seed or let Comfy generate one:

Import your base image:

Don't forget that it will be reduced or enlarged to the format you've chosen. An image with a very different resolution can lead to poor results.

Attention: if you're using a 720p model, you'll also need to change the "Apply Tea Cache" node settings.
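To check in advance how far your base image is from the target format, you can compare aspect ratios. A minimal sketch (the function and the example sizes are my own illustration, not part of the workflow):

```python
def aspect_mismatch(src_w: int, src_h: int, target_w: int, target_h: int) -> float:
    """Relative difference between the source and target aspect ratios.
    The larger the value, the more the resize will distort the image."""
    src_ratio = src_w / src_h
    target_ratio = target_w / target_h
    return abs(src_ratio - target_ratio) / target_ratio

# A square 1024x1024 photo squeezed into a wide 832x480 format
# is a large mismatch; an image already at the target ratio returns 0.
print(aspect_mismatch(1024, 1024, 832, 480))
print(aspect_mismatch(832, 480, 832, 480))  # → 0.0
```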

LoRA:

Select how many LoRAs you want to use, and define them:

If you don't know what a LoRA is, just don't activate any.

Files:

Choose your model:

Here, you can switch between Q8 and Q4 depending on how much VRAM you have. Higher values give better quality, but are slower.
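As a rough illustration of why the quantization level matters for VRAM (treating the Q number as approximate bits per weight and ignoring activations and other overhead; the 14B parameter count is a hypothetical example, not a statement about any specific model):

```python
def approx_weights_gib(params_billions: float, bits_per_weight: float) -> float:
    """Very rough size of quantized model weights in GiB.
    Counts weight bits only; real usage adds activations and overhead."""
    total_bytes = params_billions * 1e9 * bits_per_weight / 8
    return total_bytes / 2**30

# A hypothetical 14B-parameter model: Q4 weights take roughly half
# the memory of Q8 weights.
print(round(approx_weights_gib(14, 8), 1), round(approx_weights_gib(14, 4), 1))  # → 13.0 6.5
```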

For the VAE, don't change it:

For the CLIP, don't change it:

Select an upscaler: (optional)

I personally use 4x_foolhardy_Remacri.pth (from the utnah/LDSR repository).

For CLIP vision, don't change it:

Output:

Here you can change the name and path of the output file and the number of FPS. The higher the FPS, the smoother the video :

I've already set the parameters I recommend (24fps), change according to your preference.

Upscale: (optional)

Here you can enable upscaling:

Choose a ratio for upscaling:

Too large a setting results in a decrease in quality.
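The output resolution is simply the base resolution multiplied by the ratio. A quick sketch (832x480 is an example size, not a setting taken from the workflow):

```python
def upscaled_resolution(width: int, height: int, ratio: float) -> tuple:
    """Frame size after upscaling by the given ratio."""
    return round(width * ratio), round(height * ratio)

# e.g. an 832x480 video upscaled 2x becomes 1664x960
print(upscaled_resolution(832, 480, 2))  # → (1664, 960)
```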

Interpolation: (optional)

You can enable this setting to generate a smoother video.
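As a sketch of what interpolation does to the numbers, assuming a 2x convention (one new frame inserted between each original pair, clip duration unchanged; the multiplier is my assumption, so check the interpolation node's actual setting):

```python
def interpolated_counts(num_frames: int, fps: float, multiplier: int = 2):
    """Frame count and playback FPS after interpolation, assuming
    (multiplier - 1) new frames between each original pair and an
    unchanged clip duration."""
    new_frames = (num_frames - 1) * multiplier + 1
    new_fps = fps * multiplier
    return new_frames, new_fps

# 49 frames at 16 fps interpolated 2x: same 3-second clip, twice as smooth
print(interpolated_counts(49, 16))  # → (97, 32)
```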

Other:

A final option allows you to save the last frame, so you can potentially use it to generate a new video that follows on from this one.


Now you're ready to create your video.

Just click on the “Queue” button to start:

Once rendering is complete, the video appears in the “stage 2” node.

If you have enabled upscaling, the result is in the "Upscaler node".


This guide is now complete. If you have any questions or suggestions, don't hesitate to post a comment.


Comments

Nowaytorun2's Avatar

thanks alot

yongyezhao126382's Avatar

thank a lot

pv8232914's Avatar

Where can I download LoRAs?

UmeAiRT's Avatar

Many LoRAs are available on this site. Use the filters to display only those that are compatible with WAN2.1.

svooooosh's Avatar

Works like a charm! However... I get an error "!!! Exception during processing !!! [Errno 22] Invalid argument" ...
"ComfyUI\custom_nodes\ComfyUI-VideoHelperSuite\videohelpersuite\nodes.py", line 130, in ffmpeg_process" ...
"OSError: [Errno 22] Invalid argument"

Still I get an output video and it seems to have no effect on it?

UmeAiRT's Avatar

There are often updates, have you tried updating this node? (if everything works well anyway, don't touch it x) )

Devilday666's Avatar

This is straight to the point, LOVE IT

soulmikwe02338's Avatar

I got a 16GB GPU, 32GB RAM and a Ryzen 5 3D... it tells me I don't have enough VRAM... wtf...

UmeAiRT's Avatar

It depends on which model you chose, and the resolution too.

reptail44q's Avatar

Great job, how do you do a batch download?

UmeAiRT's Avatar

All files are saved in ComfyUI/output/WAN/YYYY-MM-DD/
The original video, the upscaled one, the interpolated one, and the last frame.

reptail44q's Avatar

I mean, how do you make it so that you load several images into your work one at a time?

UmeAiRT's Avatar

Personally, I run several tasks on an image, then change it and start again as many times as necessary. This fills the queue. The imported images remain in cache and I can let it run overnight, for example, with 20/30 different images.

reptail44q's Avatar

Thank you! From your experience, how can I improve eyes? The eyes are not detailed and there is some noise.

shinjefoom's Avatar

Super!! Thank you!

What are your thoughts on sage attention? As I understand it, it reduces quality but speeds things up? Apparently it works better on the RTX 5000 series.

UmeAiRT's Avatar

It even works on 3XXX models, but the installation is really not easy and has many steps. I'm thinking of writing a guide, but there are many different cases depending on Windows and the version of ComfyUI.

exx's Avatar

When I tried to run it, I got the error "Unexpected architecture type in GGUF file, expected one of flux, sd1, sdxl, t5encoder but got 'wan'". I installed all your dependencies from the bat file you posted on civitai. What would cause this issue?

UmeAiRT's Avatar

Yes, your ComfyUI is not up to date.

delta45424155's Avatar

I have a 10700k, 64gigs ram, RTX 5080; Do you recommend 480 or 720 Quant Model? I'm also getting weird face deformation; how do I solve that?

UmeAiRT's Avatar

I'm getting very good results with the 480, but since you've got a lot of power, the 720 should work just fine. Strangely enough, I've had a lot more problems with the 720, even with professional graphics cards (L40S). What settings did you use? (steps, frames, ...)

CaulShivers's Avatar

I increase frames to 48, but my videos are still 2 second length. How so ?

UmeAiRT's Avatar

How many "frame_rate" you have in the Output node?

CaulShivers's Avatar

Preview - 16, Output - 16, Interpolated output - 32, Upscaled output - 16

UmeAiRT's Avatar

48/16 = 3 seconds. Have you tried with 16 more frames?

EechiZero's Avatar

I'm using it, and it's a great job. It would be nice to include an 'ending frame' method in a future update to create looping videos.

UmeAiRT's Avatar

By "ending frame" do you mean giving the first and last frame? Because I already published a workflow like this

EechiZero's Avatar

I just saw it, too bad it doesn't work with GGUF yet, I will have to wait...

Another thing, lately, I've been getting an error. After generating the first video(720x480, Upscale Off), when I try to generate another one, the following appears:

SamplerCustomAdvanced

mat1 and mat2 shapes cannot be multiplied (769x4863 and 5120x5120).

I have to restart Comfy to generate again, but once I try to generate a second video or cancel the first one to start another, the error appears again.

UmeAiRT's Avatar

I have released a workflow update that should resolve this issue.

ombjerkvik's Avatar

On a MacBook Pro M2 I got the following fault: "MPS: Unsupported Border padding mode"

UmeAiRT's Avatar

Sorry, but I don't have a Mac, so I can't do any compatibility testing for that platform.

If you had been born in the late 1800s, you would've built the first airplane.

100% genius. Thank you.