A newer guide/workflow is available: https://civitai.com/articles/2379
AnimateDiff in ComfyUI makes things considerably easier. VRAM usage is more or less the same as a single 16-frame run! This is a basic updated workflow. To use it:
0/I am using these nodes for AnimateDiff/ControlNet:
THE WORKFLOW IS ATTACHED TO THIS POST (DOWNLOAD IT FROM THE TOP RIGHT CORNER)
1/Split the frames from your video (using an editing program or a site like ezgif.com) and reduce them to the desired FPS. [If you want to follow the tutorial video, I have uploaded the frames in a zip file]
2/Download your desired checkpoint and motion module(s). The original motion modules are here: https://huggingface.co/guoyww/animatediff/tree/main. The fine-tuned ones can be great, like https://huggingface.co/CiaraRowles/TemporalDiff/tree/main, https://huggingface.co/manshoety/AD_Stabilized_Motion/tree/main, or https://civitai.com/models/139237/motion-model-experiments
3/Load the workflow and install the nodes needed.
4/Make sure each of the models is loaded in its node (check the Load Checkpoint node, the VAE node, the AnimateDiff node, and the Load ControlNet Model node)
5/Put the directory of the split frames in the Load Images node and set the desired output resolution. If you want to run all the frames, keep the image load cap at 0. Otherwise set the image load cap (in the Load Images node) to 16 and it will only process the first 16 frames.
6/Change the prompt! Green is the positive prompt and red is the negative prompt. It is preset for my video with the blue-haired anime girl. Then hit Queue Prompt!
7/Wait..... (each step can take a long time if you have a lot of frames, but it is doing everything at once, so be patient)
8/Once done, you will have the frames and a GIF. (If you are getting an ffmpeg error, the GIF just will not be created; you will need to install https://ffmpeg.org/ and look on YouTube for how to add it to your PATH.) Please note the GIF is significantly lower quality than the original frames, so have a look at those.
9/Put the frames together however you choose!
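If you prefer the command line, steps 1 and 9 (splitting a video into frames and putting them back together) can both be done with ffmpeg. This is a minimal sketch that just builds the commands; the file names, frame pattern, and FPS are placeholders you would change, and it assumes ffmpeg is installed and on your PATH:

```python
import shutil


def ffmpeg_available() -> bool:
    # True if ffmpeg is on PATH (the GIF output also needs this)
    return shutil.which("ffmpeg") is not None


def split_cmd(video: str, fps: int, out_dir: str) -> list:
    # Step 1: extract frames at the desired FPS into numbered PNGs
    return ["ffmpeg", "-i", video, "-vf", f"fps={fps}",
            f"{out_dir}/frame_%05d.png"]


def join_cmd(frames_dir: str, fps: int, out_file: str) -> list:
    # Step 9: reassemble the processed frames into a video
    return ["ffmpeg", "-framerate", str(fps), "-i",
            f"{frames_dir}/frame_%05d.png",
            "-c:v", "libx264", "-pix_fmt", "yuv420p", out_file]
```

To actually run one of these, pass it to `subprocess.run(split_cmd("input.mp4", 8, "frames"), check=True)` after creating the output directory.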
Play around with the parameters!! The model and the denoise strength on the KSampler make a lot of difference. You can add/remove ControlNets or change their strength. You can add IPAdapter. Also consider changing the motion model you use for AnimateDiff - it can make a big difference. You can also add LoRAs (which is how I did the Jinx one).
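If you drive ComfyUI through its API rather than the browser, the KSampler settings mentioned above live in the exported API-format workflow JSON. This is a hypothetical fragment only, the node id "3" and all the values are placeholders, and a real export also links model, positive, negative, and latent_image to other nodes:

```python
import json

# Hypothetical fragment of an API-format ComfyUI workflow.
# Node id "3" and every value here are placeholders; take the real
# ids and links from your own exported workflow.
ksampler = {
    "3": {
        "class_type": "KSampler",
        "inputs": {
            "seed": 42,            # fix the seed for reproducible runs
            "steps": 20,
            "cfg": 7.0,
            "sampler_name": "euler",
            "scheduler": "normal",
            "denoise": 0.75,       # lower = stays closer to the input frames
        },
    }
}

print(json.dumps(ksampler, indent=2))
```

Lowering `denoise` here is the API-side equivalent of turning down the denoise strength in the KSampler node.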
I hope you enjoyed this tutorial. Feel free to ask questions and I will do my best to answer. If you did enjoy it, please consider subscribing to my channel (https://www.youtube.com/@Inner-Reflections-AI) or my Instagram/TikTok (https://linktr.ee/Inner_Reflections )
If you are a commercial entity and want some presets that might work for different style transformations feel free to contact me on Reddit or on my social accounts.
If you would like to collab on something or have questions, I am happy to connect on Reddit or on my social accounts.