Create stable and consistent images from subject and object references.
Who it's for: creators who want this pipeline in ComfyUI without assembling nodes from scratch. Not for: one-click results with zero tuning — you still choose inputs, prompts, and settings.
Open preloaded workflow on RunComfy (browser)
Why RunComfy first
- Fewer missing-node surprises — run the graph in a managed environment before you mirror it locally.
- Quick GPU tryout — useful if your local VRAM or install time is the bottleneck.
- Matches the published JSON — the downloadable zip contains the same runnable workflow you can open on RunComfy.
When downloading for local ComfyUI makes sense — you want full control over models on disk, batch scripting, or offline runs.
How to use (local ComfyUI)
1. Load inputs (images/video/audio) in the marked loader nodes.
2. Set prompts, resolution, and seeds; start with a short test run.
3. Export from the Save / Write nodes shown in the graph.
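If you run the workflow locally, the batch-scripting use case mentioned above can be driven through ComfyUI's HTTP API: export the graph as API-format JSON (the "Save (API Format)" option, available with dev mode enabled), then queue it with different seeds from a script. This is a minimal sketch, assuming a default ComfyUI server at `127.0.0.1:8188`; the helper names and the commented file path are illustrative, not part of the published workflow.

```python
import copy
import json
from urllib import request

COMFY_URL = "http://127.0.0.1:8188"  # default local ComfyUI address (assumption)

def set_seed(workflow: dict, seed: int) -> dict:
    """Return a copy of an API-format workflow with every node's seed input replaced."""
    wf = copy.deepcopy(workflow)
    for node in wf.values():
        inputs = node.get("inputs", {})
        if "seed" in inputs:
            inputs["seed"] = seed
    return wf

def queue_prompt(workflow: dict) -> None:
    """POST the workflow to ComfyUI's /prompt endpoint (server must be running)."""
    data = json.dumps({"prompt": workflow}).encode("utf-8")
    req = request.Request(f"{COMFY_URL}/prompt", data=data,
                          headers={"Content-Type": "application/json"})
    request.urlopen(req)

# Batch example (hypothetical filename): queue the same graph with three seeds.
# workflow = json.load(open("uno_workflow_api.json"))
# for s in (1, 2, 3):
#     queue_prompt(set_seed(workflow, s))
```

The seed override walks every node rather than hardcoding a node ID, so the same script works even if the sampler's ID differs in your export.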
Expectations — the first run may download large model weights; cloud runs may require a free RunComfy account.
Overview
Updated 6/16/2025: ComfyUI version updated to v0.3.40 for improved stability and compatibility.

The UNO workflow for ComfyUI brings UNO's reference-driven image generation to RunComfy. The model excels at keeping subjects consistent across generations when given reference images: it supports single-subject generation (the same character in different scenes) and subject-object composition (combining a subject with a specific object in one image) with high fidelity. It runs effectively even on smaller cloud machines, which makes ComfyUI UNO image customization accessible without high-end hardware. Typical uses include product visualizations, character illustrations, and creative compositions.
Important nodes:
- Save Image
Notes
UNO for ComfyUI | Consistent Subject Generation — see RunComfy page for the latest node requirements.

