
ComfyUI Fantasy Portrait Workflow | Identity-Rich Motion from Portraits

Updated: Apr 2, 2026


Type: Workflows
Stats: 56 · 0 reviews
Published: Apr 2, 2026
Base Model: Other
Hash (AutoV2): 314ADA15F9

Creator: RunComfy

Photo → expressive cinematic face animation, fast and identity-accurate.

Who it's for: creators who want this pipeline in ComfyUI without assembling nodes from scratch. Not for: one-click results with zero tuning — you still choose inputs, prompts, and settings.

Open preloaded workflow on RunComfy

Why RunComfy first
- Fewer missing-node surprises — run the graph in a managed environment before you mirror it locally.
- Quick GPU tryout — useful if your local VRAM or install time is the bottleneck.
- Matches the published JSON — the zip follows the same runnable workflow you can open on RunComfy.

When downloading for local ComfyUI makes sense — you want full control over models on disk, batch scripting, or offline runs.

How to use (local ComfyUI)
1. Load inputs (images/video/audio) in the marked loader nodes.
2. Set prompts, resolution, and seeds; start with a short test run.
3. Export from the Save / Write nodes shown in the graph.
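If you chose the local route for batch scripting, runs can be queued programmatically through ComfyUI's HTTP API by POSTing an API-format workflow JSON (exported via "Save (API Format)" in ComfyUI) to the `/prompt` endpoint. A minimal sketch; the node id and `seed` field below are hypothetical and depend on your exported graph:

```python
import json
import urllib.request

def build_prompt_payload(workflow, seed=None):
    """Wrap an API-format workflow dict into a /prompt payload.

    If seed is given, overwrite every node input named "seed" so each
    queued run samples differently. Node ids and input names are
    examples; they depend on the workflow you exported.
    """
    if seed is not None:
        for node in workflow.values():
            inputs = node.get("inputs", {})
            if "seed" in inputs:
                inputs["seed"] = seed
    return {"prompt": workflow}

def queue_prompt(workflow, host="127.0.0.1:8188"):
    """POST the payload to a locally running ComfyUI server."""
    data = json.dumps(build_prompt_payload(workflow)).encode("utf-8")
    req = urllib.request.Request(
        f"http://{host}/prompt",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req).read()
```

Load the exported JSON with `json.load`, then loop `queue_prompt(wf)` over a list of seeds or input images to batch runs unattended.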

Expectations — First run may pull large weights; cloud runs may require a free RunComfy account.


Overview

Turn any single photo into an expressive portrait animation with identity-accurate motion, adjustable framing, and cinematic expression quality.

Key nodes in the ComfyUI Fantasy Portrait workflow:

FantasyPortraitModelLoader (#138)

Loads the FantasyPortrait weights. Swap here if you are testing a newer Fantasy-AMAP release. No tuning is required, but keep the precision consistent with your Wan model and VAE.

FantasyPortraitFaceDetector (#142)

Extracts portrait embeddings from the resized image. Good results come from well-lit, front-facing photos with minimal occlusion. If motion looks off, verify the input crop and try a cleaner source image.

WanVideoImageToVideoEncode (#151)

Builds Wan’s I2V conditioning from CLIP image features, your start image, and duration. Adjust width, height, and num_frames to control the render footprint and length. Longer sequences need more VRAM and time.
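To see why longer sequences cost more, the latent footprint can be estimated. Assuming the Wan 2.1 VAE's 8x spatial and 4x temporal compression with 16 latent channels (verify these figures against your checkpoint), the video latent scales multiplicatively with width, height, and frame count:

```python
def latent_bytes_fp16(width, height, num_frames, channels=16):
    """Rough size of the Wan video latent in fp16 bytes.

    Assumes 8x spatial and 4x temporal VAE compression with 16 latent
    channels (Wan 2.1 figures; an assumption, check your model).
    Sampling activations take far more VRAM than the latent itself,
    so treat this as a lower bound for scaling intuition only.
    """
    latent_frames = (num_frames - 1) // 4 + 1  # first frame + 4x-packed rest
    elements = channels * latent_frames * (height // 8) * (width // 8)
    return elements * 2  # 2 bytes per fp16 element
```

For example, 832x480 at 81 frames gives a latent of about 4 MB, while doubling both dimensions quadruples it; halving resolution for short test runs is the cheapest way to iterate.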

WanVideoAddFantasyPortrait (#150)

Fuses Fantasy Portrait identity/expressions into the I2V conditioner. Use this to keep the subject recognizably the same across frames while enabling nuanced expression changes. No parameters typically require adjustment.

WanVideoSampler (#149)

Generates the video latents. For sharper detail, increase steps modestly. If motion drifts, simplify the prompt or try a different LoRA. Keep guidance prompts concise and coherent rather than verbose.

WanVideoTextEncodeCached (#155)

Encodes positive/negative prompts with UMT5-XXL. Use short, descriptive phrases. Overly strong negative prompts (for example, heavy “bad quality” stacks) can suppress expression.


Notes

See the RunComfy page for the latest node requirements.