Type | Workflows
Stats | 525
Reviews | 87
Published | Oct 18, 2024
Base Model |
Hash | AutoV2 47FB059C9B
This ComfyUI Workflow brings Audio Reactivity to AI animation in an EASY way
DOCS, WF, EXPLANATIONS: https://github.com/yvann-ba/ComfyUI_Yvann-Nodes
Please give a ⭐ on GitHub
it really helps push the node pack forward and share more workflows
-
This Workflow lets you sync multiple image inputs with your audio, making your animations come alive by switching between images on beats (bass, drums, vocals...) with smooooth transitions (or sharp cuts if you're a techno guy)
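For intuition on the "switch images on beats" idea, here is a small conceptual sketch (not the actual nodes from the pack): per-frame bass energy is extracted from the audio, smoothed, and turned into an image index that advances on each detected beat. The file name, frame rate, band split, and threshold are all made-up placeholders.

```python
# Conceptual sketch only: turn an audio file into a per-frame image-switching signal.
# Assumes librosa and numpy are installed; values below are illustrative.
import numpy as np
import librosa

AUDIO_PATH = "track.wav"   # hypothetical input file
FPS = 12                   # animation frame rate
NUM_IMAGES = 4             # number of image inputs to switch between
SMOOTHING = 0.8            # 0 = sharp cuts, closer to 1 = smoother transitions

y, sr = librosa.load(AUDIO_PATH, sr=None, mono=True)
hop = int(sr / FPS)        # one analysis hop per animation frame

# Low-frequency (bass/kick) energy per frame from a mel spectrogram
mel = librosa.feature.melspectrogram(y=y, sr=sr, hop_length=hop, n_mels=64)
bass = mel[:8].mean(axis=0)                              # rough "bass" band
bass = (bass - bass.min()) / (np.ptp(bass) + 1e-8)       # normalize to 0..1

# Exponential smoothing so the signal doesn't flicker frame to frame
smooth = np.zeros_like(bass)
for i in range(1, len(bass)):
    smooth[i] = SMOOTHING * smooth[i - 1] + (1 - SMOOTHING) * bass[i]

# Advance to the next image each time the smoothed bass crosses a threshold
image_index = np.zeros(len(smooth), dtype=int)
current = 0
for i in range(1, len(smooth)):
    if smooth[i] > 0.6 and smooth[i - 1] <= 0.6:         # rising edge = a "beat"
        current = (current + 1) % NUM_IMAGES
    image_index[i] = current

print(image_index[:50])    # per-frame image choice, one entry per animation frame
```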
The workflow uses an audio-reactive implementation of IPAdapter to smoothly blend styles from your audio-reactive images, and includes ControlNet to help shape the animation based on your input video
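As a rough stand-in for the blending step (only an illustration of the idea, not how IPAdapter works internally), an audio-derived weight in 0..1 can simply crossfade between two style images. The paths and the weight value here are hypothetical.

```python
# Minimal crossfade sketch: blend two style images by an audio-derived weight.
# Assumes Pillow and numpy are installed; file names are placeholders.
import numpy as np
from PIL import Image

def crossfade(img_a_path: str, img_b_path: str, weight: float) -> Image.Image:
    """Blend two images: weight=0 -> all A, weight=1 -> all B."""
    a_img = Image.open(img_a_path).convert("RGB")
    b_img = Image.open(img_b_path).convert("RGB").resize(a_img.size)
    a = np.asarray(a_img, dtype=np.float32)
    b = np.asarray(b_img, dtype=np.float32)
    out = (1.0 - weight) * a + weight * b
    return Image.fromarray(out.clip(0, 255).astype(np.uint8))

# e.g. drive `weight` per frame with the smoothed bass value from the sketch above
frame = crossfade("style_calm.png", "style_energetic.png", weight=0.35)
frame.save("blended_frame.png")
```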
The WF is based on Stable Diffusion 1.5 and Hyper-SD (8 steps); it's designed to create high-quality animations efficiently, even on low-VRAM GPU setups (around 6 GB VRAM)