Type | Workflows
Published | Aug 27, 2024
Hash | AutoV2 14FC419E91
--- the color correct node doesn't seem to work anymore, please leave a comment if you have a solution ---
A simplified image-to-video workflow for making a video from a single image, adapted from Ipiv's Morph img2vid workflow.
Since I've made some videos from single images (mainly of waves), I get a lot of questions in comments, on articles, and via private messages about how to do this or that, so now I can just send people here! =)
The original image is 896x1152, a recommended portrait resolution for SDXL. I don't want to use upscaling, because that takes forever, but I also don't want to end up with a tiny video, so I used half of that (576x448) as the latent image.
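If you want to start from a different source image, here's a quick sketch of the arithmetic (plain Python, not part of the workflow itself): halve each side and keep it a multiple of 8 so the latent dimensions stay valid.

```python
def half_latent_size(width: int, height: int, multiple: int = 8) -> tuple[int, int]:
    """Halve each side and snap to a multiple of 8, which latent sizes need to be."""
    def snap(v: int) -> int:
        return max(multiple, round(v / 2 / multiple) * multiple)
    return snap(width), snap(height)

print(half_latent_size(896, 1152))  # -> (448, 576), i.e. half of the 896x1152 source
```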
Download the json file, install the missing nodes via the ComfyUI Manager, and go!
Required files and their folders
animatediff_models -> AnimateLCM_sd15_t2v.ckpt
animatediff_motions -> AnimatedDiffusion-Blank.mp4 (link)
loras -> AnimateLCM_sd15_t2v_lora.safetensors
loras -> Hyper-SD15-8steps-lora.safetensors (you can also use the LoRA above twice instead)
checkpoints -> juggernaut_reborn.safetensors (or another SD1.5 model)
vae -> vaeFtMse840000EmaPruned_vae.safetensors
controlnet -> control_v1p_sd15_qrcode_monster.safetensors
IPAdapter PLUS (important for consistency!)
ipadapter -> ip-adapter-plus_sd15.safetensors
In the /ComfyUI/models/clip_vision folder:
Download and rename to "CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors"
https://huggingface.co/h94/IP-Adapter/resolve/main/models/image_encoder/model.safetensors
Download and rename to "CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors"
https://huggingface.co/h94/IP-Adapter/resolve/main/sdxl_models/image_encoder/model.safetensors
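If you'd rather script the two downloads and renames instead of doing them by hand, here's a minimal Python sketch using huggingface_hub; the clip_vision path below is an assumption, so adjust it to your own ComfyUI install.

```python
# Minimal sketch, assuming huggingface_hub is installed (pip install huggingface_hub)
# and that ComfyUI/models/clip_vision is the right folder for your setup.
import shutil
from pathlib import Path
from huggingface_hub import hf_hub_download

CLIP_VISION_DIR = Path("ComfyUI/models/clip_vision")  # assumption: adjust to your install
CLIP_VISION_DIR.mkdir(parents=True, exist_ok=True)

# (file inside the h94/IP-Adapter repo, name ComfyUI expects after renaming)
encoders = [
    ("models/image_encoder/model.safetensors",
     "CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors"),
    ("sdxl_models/image_encoder/model.safetensors",
     "CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors"),
]

for repo_file, renamed in encoders:
    cached = hf_hub_download(repo_id="h94/IP-Adapter", filename=repo_file)
    shutil.copy(cached, CLIP_VISION_DIR / renamed)
    print(f"saved {renamed}")
```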
The prompt
Despite what the note in the workflow says, the prompt does make a difference!
I hope this gets you on your way; there's more info in the notes from the original Ipiv workflow!