
AnimateDiff Workflow: Animate with starting and ending image

Updated: Oct 12, 2023
Tool: ComfyUI
Type: Workflows
Stats: 5,718
Published: Oct 1, 2023
Base Model: Other
Hash (AutoV2): 42C1E6DBC7
a1lazydog

Basic demo showing how to animate from a starting image to an ending image. Chain multiple copies together for keyframed animation.

Node Explanation:

Latent Keyframe Interpolation:

  • We have one for the starting image and one for the ending image.

  • The starting image starts on frame 0 and ends roughly midway through the frame count. These are the batch_index_from and batch_index_to_excl fields.

  • The starting image's strength goes from 1.0 to 0.2. This tells the sampler to rely strongly on the starting image at first and to use it less and less as the frames go on. These are the strength_from and strength_to fields.

  • The interpolation field describes how quickly the strength approaches strength_to.

  • The ending image has these fields but in reverse: we want to start with a weak reference to the image and strengthen it all the way to the end.
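The two strength ramps described above can be sketched as simple per-frame schedules. Below is a minimal sketch assuming linear interpolation and a 16-frame animation; the function name and the dict-based schedule are illustrative, not the node's actual code.

```python
def keyframe_strengths(index_from, index_to_excl, strength_from, strength_to):
    """Linearly interpolate a per-frame strength schedule over
    [index_from, index_to_excl), mimicking the Latent Keyframe
    Interpolation node's fields (a sketch, not the node's code)."""
    strengths = {}
    span = max(index_to_excl - index_from - 1, 1)
    for i in range(index_from, index_to_excl):
        t = (i - index_from) / span
        strengths[i] = strength_from + t * (strength_to - strength_from)
    return strengths

frames = 16
# Starting image: full strength at frame 0, fading to 0.2 by midway.
start_sched = keyframe_strengths(0, frames // 2 + 1, 1.0, 0.2)
# Ending image: the same fields in reverse, ramping up toward the last frame.
end_sched = keyframe_strengths(frames // 2, frames, 0.2, 1.0)
```

Near the midpoint the two schedules overlap, which is what blends one image into the other instead of producing a hard cut.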

Load ControlNet Model (Advanced)

  • We're using the tile model here because we want the images themselves to be used as the reference.

  • Feeding the latent keyframe interpolation through the Timestep Keyframe node lets us control how strongly the ControlNet applies as the frames go on.

Apply ControlNet (Advanced)

  • We leave the strength at 1.0 because we're actually controlling it through the Load ControlNet Model (Advanced) node's keyframes. Likewise, we leave the start / end percent at the defaults of 0.0 and 1.0 respectively.

Animate Diff Module Loader

  • Be sure you have the right motion model for your checkpoint. I'm using an SD v1.5 checkpoint, so I'm using a motion model trained for SD v1.5. If you're using SDXL, you'll need a different motion model.

Animate Diff Sampler

  • frame_number - this tells the sampler how many frames to generate. For your first test, leave this at 16. Going over 16 switches it to continuous animation mode, and depending on your machine you may need to adjust the sliding_window_opts. See: ArtVentureX/comfyui-animatediff: AnimateDiff for ComfyUI (github.com)

  • denoise - leave this at 1.0. We're passing in an empty latent image; the reference images come in through ControlNet's tile model instead.

  • The rest of the settings can be whatever you normally use for generation.

Animate Diff Combine

  • frame_rate - I find 12 works best, but adjust it depending on whether you want something smoother or choppier.

  • format - video/h264-mp4 is what is accepted for civit.ai uploads
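As a quick sanity check when picking a frame rate, the clip length is just the frame count divided by frames per second. A hypothetical helper:

```python
def clip_duration_seconds(frame_count, frame_rate):
    """Clip length in seconds: total frames divided by frames per second."""
    return frame_count / frame_rate

# The default 16-frame animation at the suggested 12 fps
# runs for roughly 1.33 seconds.
duration = clip_duration_seconds(16, 12)
```

So longer clips mean raising frame_number (and dealing with the sliding-window options), not lowering frame_rate.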