Pose-driven animation with identity stability and motion precision.
Who it's for: creators who want this pipeline in ComfyUI without assembling nodes from scratch. Not for: one-click results with zero tuning — you still choose inputs, prompts, and settings.
Open preloaded workflow on RunComfy (browser)
Why RunComfy first
- Fewer missing-node surprises — run the graph in a managed environment before you mirror it locally.
- Quick GPU tryout — useful if your local VRAM or install time is the bottleneck.
- Matches the published JSON — the zip follows the same runnable workflow you can open on RunComfy.
When downloading for local ComfyUI makes sense — you want full control over models on disk, batch scripting (see the sketch after the steps below), or offline runs.
How to use (local ComfyUI)
1. Load inputs (images/video/audio) in the marked loader nodes.
2. Set prompts, resolution, and seeds; start with a short test run.
3. Export from the Save / Write nodes shown in the graph.
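For batch runs, the same graph can also be queued headlessly through ComfyUI's HTTP API. A minimal sketch, assuming ComfyUI is running locally on the default port 8188 and that you exported the graph with Save (API Format); workflow_api.json and the node id "3" below are placeholders, not names from this workflow:

```python
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188"   # default local ComfyUI address
WORKFLOW_FILE = "workflow_api.json"   # exported via Save (API Format)

with open(WORKFLOW_FILE, "r", encoding="utf-8") as f:
    workflow = json.load(f)

# API-format JSON maps node ids to their inputs, so per-run overrides are
# plain dictionary writes. "3" is a placeholder id; check your export.
# workflow["3"]["inputs"]["seed"] = 42

payload = json.dumps({"prompt": workflow}).encode("utf-8")
req = urllib.request.Request(
    f"{COMFY_URL}/prompt",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))  # contains a prompt_id on success
```

POSTing to /prompt returns a prompt_id you can poll through the /history endpoint; looping this with different seeds or input paths gives you simple batch scripting.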
Expectations — First run may pull large weights; cloud runs may require a free RunComfy account.
Overview
This pose-driven model animates still characters from a reference image and human poses extracted from a driving video. It transfers movement, keeps the subject consistent, and controls structure across video frames. Aimed at animators and motion designers, it supports both image-to-video and video-to-video workflows and maintains coherent motion and visual stability even through complex transformations, making it well suited to stylized character animation and motion studies that demand structural precision.
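Outside ComfyUI, you can pre-extract a pose sequence from a driving video yourself, for example to inspect the skeletons before animating. A minimal sketch, assuming the opencv-python and controlnet_aux packages with an OpenPose-style extractor; the workflow's marked nodes may use a different pose estimator, and driving_video.mp4 and poses/ are placeholder paths:

```python
import os
import cv2
from PIL import Image
from controlnet_aux import OpenposeDetector

detector = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
os.makedirs("poses", exist_ok=True)

cap = cv2.VideoCapture("driving_video.mp4")  # placeholder driving video
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # OpenCV decodes frames as BGR; the detector expects RGB input.
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    pose = detector(Image.fromarray(rgb))    # skeleton image, PIL format
    pose.save(f"poses/pose_{frame_idx:05d}.png")
    frame_idx += 1
cap.release()
```

If the graph exposes a pose-image loader, frames like these can be reused across several reference characters without re-running extraction.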
Notes
Workflow 1323 — see RunComfy page for the latest node requirements.

