Turn images into lifelike, moving characters with natural body and face motion.
Who it's for: creators who want this pipeline in ComfyUI without assembling nodes from scratch. Not for: one-click results with zero tuning — you still choose inputs, prompts, and settings.
Open preloaded workflow on RunComfy
Why RunComfy first
- Fewer missing-node surprises — run the graph in a managed environment before you mirror it locally.
- Quick GPU tryout — useful if your local VRAM or install time is the bottleneck.
- Matches the published JSON — the zip follows the same runnable workflow you can open on RunComfy.
When downloading for local ComfyUI makes sense — you want full control over models on disk, batch scripting, or offline runs.
How to use (local ComfyUI)
1. Load inputs (images/video/audio) in the marked loader nodes.
2. Set prompts, resolution, and seeds; start with a short test run.
3. Export from the Save / Write nodes shown in the graph.
Expectations — First run may pull large weights; cloud runs may require a free RunComfy account.
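If you chose the local route for batch scripting, the exported workflow can also be queued headlessly over ComfyUI's HTTP API. This is a minimal sketch, assuming the default server at 127.0.0.1:8188 and a workflow saved in API format (the host and file path here are placeholders for your own setup):

```python
import json
import urllib.request

COMFY_HOST = "http://127.0.0.1:8188"  # default local ComfyUI address

def build_payload(graph: dict) -> bytes:
    """Wrap an API-format workflow graph in the body /prompt expects."""
    return json.dumps({"prompt": graph}).encode("utf-8")

def queue_workflow(path: str) -> dict:
    """Load a workflow exported in API format and queue it for execution."""
    with open(path) as f:
        graph = json.load(f)
    req = urllib.request.Request(
        f"{COMFY_HOST}/prompt",
        data=build_payload(graph),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # includes a prompt_id on success
```

Note that the graph must be exported in API format (not the regular UI save) for the /prompt endpoint to accept it; loop over input files and re-queue to batch.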
Overview
This workflow animates static images into full motion videos while preserving character identity. By combining body pose transfer with facial mocap, it turns a driving video and a reference image into lifelike character animation with natural movement and expressive detail. It is well suited to avatar generation, performance re-creation, and storytelling projects: the pipeline keeps the reference identity synchronized with the driven motion, so facial expressions and smooth body actions stay true to the source.
Key nodes in the ComfyUI Wan2.2 Animate workflow
VHS_LoadVideo (#63)
Role: Loads the driving video, outputs frames, extracts audio, and reports the frame count for downstream consistency.
Tip: Keep the reported frame total aligned with the sampler's generation length to prevent early cutoff or black frames.
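A quick way to reason about that alignment: Wan-family samplers typically expect generation lengths of the form 4k + 1 frames (e.g. 81). The helper below is a sketch under that assumption; verify the exact constraint against your sampler node's tooltip.

```python
def aligned_frame_count(loaded_frames: int) -> int:
    """Largest valid generation length not exceeding the loaded frame total.

    Assumes the sampler accepts lengths of the form 4*k + 1 (common for
    Wan-family video models); adjust if your wrapper documents otherwise.
    """
    if loaded_frames < 1:
        raise ValueError("need at least one frame")
    # Snap down to the nearest 4k + 1 value.
    return 4 * ((loaded_frames - 1) // 4) + 1
```

For example, a driving video that decodes to 84 frames would be trimmed to an 81-frame generation length rather than padded with black frames.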
Sam2Segmentation (#104) + PointsEditor (#107)
Role: Interactive subject masking that helps Wan2.2 Animate focus on the performer and avoid background entanglement.
Tip: A few well-placed positive points plus a modest GrowMask usually stabilizes complex backgrounds without haloing. See SAM 2 for video-aware segmentation guidance.
DWPreprocessor (#177) + FaceMaskFromPoseKeypoints (#120)
Role: Derives robust face masks and aligned crops from detected keypoints to improve lip, eye, and jaw fidelity.
Tip: If expressions look muted, verify the face mask covers the full jawline and cheeks; re-run the crop after adjusting points.
WanVideoModelLoader (#22) and WanVideoSetLoRAs (#48)
Role: Load Wan2.2 Animate and apply optional LoRAs for relighting or I2V bias.
Tip: Activate one LoRA at a time when diagnosing lighting or motion artifacts; stack sparingly to avoid over-constraint.
WanVideoAnimateEmbeds (#62) and WanVideoSampler (#27)
Role: Fuse image, face, pose, and text conditioning into video latents and sample the sequence with Wan2.2 Animate.
Tip: For very long clips, switch to context-window mode and keep its length synchronized with the intended frame count to preserve temporal coherence.
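To build intuition for the context-window tip above, the sketch below shows how overlapping windows could tile a long clip. The window and overlap sizes are illustrative defaults, and the actual scheduling lives inside the sampler node; this helper only exists to sanity-check that your chosen settings cover the full frame count.

```python
def context_windows(total_frames: int, window: int = 81, overlap: int = 16):
    """Return (start, end) frame ranges tiling a clip with overlapping windows.

    Hypothetical helper for reasoning about window/overlap settings; the
    sampler performs its own scheduling internally.
    """
    if window <= overlap:
        raise ValueError("window must exceed overlap")
    stride = window - overlap
    starts = list(range(0, max(total_frames - window, 0) + 1, stride))
    # Make sure the final window reaches the last frame.
    if starts[-1] + window < total_frames:
        starts.append(total_frames - window)
    return [(s, min(s + window, total_frames)) for s in starts]
```

If the windows do not reach the final frame of the driving video, the tail of the clip is where cutoff or incoherent frames typically appear, which is why the tip stresses keeping the window length synchronized with the intended frame count.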
…
Notes
Wan2.2 Animate in ComfyUI | Full Motion Video from Images — see RunComfy page for the latest node requirements.

