| Field | Value |
| --- | --- |
| Type | Workflows |
| Stats | 1,370 0 |
| Reviews | (38) |
| Published | Aug 8, 2024 |
| Base Model | |
| Hash | AutoV2 07C978C8F5 |
Hey Creators!
I developed a workflow for my projects that I’m excited to share with you. It uses the new IP Adapter to seamlessly animate between multiple images, with rendering times ranging from just 3 to 15 minutes, depending on your GPU and the video's length. This is perfect for creating videos for use with Live Portrait.
To achieve a zoom-in/out effect, you can use the included Photoshop script or the included Python script, which works as a small standalone app for cropping the images and removes the need for Photoshop entirely. A video walkthrough is available to guide you through both options, and the basic cropping idea is sketched below.
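For anyone curious how the zoom frames can be produced, here is a minimal Python sketch of the idea. It is not the bundled app, and the function name and parameters (make_zoom_crops, steps, max_zoom) are my own; it simply writes progressively tighter center crops of one image, which read as a zoom-in when used as input frames in order.

```python
# Minimal sketch of the zoom-crop idea (not the bundled script).
# Writes progressively tighter center crops of one image so the
# resulting frames read as a zoom-in when played in sequence.
from PIL import Image

def make_zoom_crops(src_path, out_dir, steps=5, max_zoom=1.6, size=(512, 512)):
    img = Image.open(src_path)
    w, h = img.size
    for i in range(steps):
        # Interpolate the zoom factor from 1.0 (full frame) to max_zoom.
        zoom = 1.0 + (max_zoom - 1.0) * i / max(steps - 1, 1)
        crop_w, crop_h = w / zoom, h / zoom
        left = (w - crop_w) / 2
        top = (h - crop_h) / 2
        box = (int(left), int(top), int(left + crop_w), int(top + crop_h))
        frame = img.crop(box).resize(size, Image.LANCZOS)
        frame.save(f"{out_dir}/zoom_{i:02d}.png")

# Example: five frames zooming in to 1.6x
# make_zoom_crops("face.png", "crops", steps=5, max_zoom=1.6)
```

Reverse the order of the saved frames if you want a zoom-out instead of a zoom-in.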
See the YouTube videos before using the workflow for important details.
Here are the resources I use in the workflow:

- ControlNet Checkpoint: Download here
- LoRA Models (Google Drive link): Download here
- AnimateDiff LCM Model: Download here (also in the Google Drive link)
Using these resources will help you achieve similar results. For optimal realism, I recommend using this checkpoint, and for a semi-realistic or toon effect, this model works exceptionally well.
Be sure to watch my YouTube video, where I guide you through the entire workflow and share important notes and settings.
Additional Notes:
Prompting can significantly impact your results. To maintain consistency across the video, try to align the prompt closely with the original images or use simple prompts like "Headshot photo, female, [Hair color], [background color]." Experiment with different prompts for varied effects. When morphing between distinct images, using terms like "HD" or "4K" often yields the best results.
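As a small illustration of that prompt pattern, here is a hedged Python sketch that fills the template once per source image; the template wording and the per-image fields are my own examples, not part of the workflow.

```python
# Hypothetical illustration of the consistent-prompt pattern described
# above; the template text and fields are examples, not workflow defaults.
template = "Headshot photo, female, {hair} hair, {background} background, HD"

source_images = [
    {"hair": "blonde", "background": "grey"},  # prompt for image 1
    {"hair": "red", "background": "grey"},     # prompt for image 2
]

for fields in source_images:
    print(template.format(**fields))
# Headshot photo, female, blonde hair, grey background, HD
# Headshot photo, female, red hair, grey background, HD
```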
Update:
The YouTube video shows me using "Alternate batches", but that is incorrect. Please make sure you use "Full batch" in the IPAdapter; "Alternate" will skip images.