SAM 3D ComfyUI Workflow | Object Motion & Body Animation

Updated: Apr 2, 2026

Verified: Other
Type: Workflows
Stats: 26 · 0 reviews
Published: Apr 2, 2026
Base Model: Other
Hash (AutoV2): 987F20D26A
Creator: RunComfy

Create realistic 3D motion and animation from static images instantly.

Who it's for: creators who want this pipeline in ComfyUI without assembling nodes from scratch. Not for: one-click results with zero tuning — you still choose inputs, prompts, and settings.

Open the preloaded workflow on RunComfy (browser)

Why RunComfy first
- Fewer missing-node surprises — run the graph in a managed environment before you mirror it locally.
- Quick GPU tryout — useful if your local VRAM or install time is the bottleneck.
- Matches the published JSON — the zip follows the same runnable workflow you can open on RunComfy.

When downloading for local ComfyUI makes sense — you want full control over models on disk, batch scripting, or offline runs.

How to use (local ComfyUI)
1. Load inputs (images/video/audio) in the marked loader nodes.
2. Set prompts, resolution, and seeds; start with a short test run.
3. Export from the Save / Write nodes shown in the graph.
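Under the hood, a local ComfyUI run boils down to posting the workflow's JSON graph to the server's `/prompt` endpoint. The sketch below is a minimal, hypothetical graph (the node ids and class names are placeholders, not the actual nodes in this workflow) showing the payload shape ComfyUI expects:

```python
import json
import urllib.request

def build_prompt():
    # Hypothetical two-node graph: a loader feeding a save node.
    # Inputs reference upstream nodes as [node_id, output_index].
    return {
        "prompt": {
            "1": {"class_type": "LoadImage",
                  "inputs": {"image": "input.png"}},
            "2": {"class_type": "SaveImage",
                  "inputs": {"images": ["1", 0],
                             "filename_prefix": "test_run"}},
        }
    }

def submit(payload, host="127.0.0.1:8188"):
    # ComfyUI's local server accepts workflow JSON on /prompt.
    req = urllib.request.Request(
        f"http://{host}/prompt",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)

payload = build_prompt()
print(sorted(payload["prompt"].keys()))  # node ids in the graph
```

For this workflow you would load its published JSON instead of building a dict by hand; the submission step is the same.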

Expectations — First run may pull large weights; cloud runs may require a free RunComfy account.


Overview

Note: This workflow may take up to ~15 minutes to launch the machine. We’re actively working on optimizations to reduce startup time.

This workflow generates spatially coherent videos with accurate object and human motion. It uses structure-guided segmentation and depth reasoning to control 3D movement from a single image. You can switch between body and object modes for greater motion precision, and build realistic animations without model fine-tuning. It is well suited to controllable motion design, spatially consistent effects, and AI-driven video generation with natural dynamics.

Key nodes in the SAM 3D ComfyUI workflow:

LoadSAM3DModel (#44)
Loads all object-mode weights in one place, including depth, sparse structure generator, SLAT generator and decoders, plus texture embedders. If the weights are hosted on Hugging Face, enter your token and keep the provider set accordingly. Use automatic precision unless you have a reason to force a specific dtype. Once loaded, the same handles feed the entire object pipeline.

SAM3D_DepthEstimate (#59)
Estimates monocular depth, camera intrinsics, a point map, and a depth-informed mask from your input image. Good framing matters: keep the subject reasonably centered and avoid extreme crops for more stable intrinsics. Use the built-in point cloud preview to sanity-check geometry before committing to long bakes. The intrinsics and point map produced here are reused later for pose optimization.
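The link between the estimated depth, intrinsics, and point map is standard pinhole back-projection; this is generic camera math, not this node's internal code, and the function name below is ours:

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Lift a depth map (H, W) to a point map (H, W, 3) with the
    pinhole model: X = (u - cx) * z / fx, Y = (v - cy) * z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)

# A flat plane 1 unit away, principal point at pixel (2, 2):
depth = np.ones((4, 4))
pts = backproject(depth, fx=100.0, fy=100.0, cx=2.0, cy=2.0)
print(pts[2, 2])  # the pixel under the principal point -> [0. 0. 1.]
```

This is why stable intrinsics matter downstream: errors in fx/fy or the principal point scale directly into the point map used for pose optimization.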

SAM3DSparseGen (#52)
Builds a sparse structure and an initial pose by combining the image, the foreground mask, and depth outputs. If your mask is too loose, expect floaters and weaker structure; tighten edges for crisper results. The node also emits a pose object that you can preview to ensure orientation looks right. This sparse structure directly conditions the SLAT generator.
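"Tightening edges" can be done before this node with a simple binary erosion pass on the mask. The sketch below hand-rolls a 4-neighbour erosion so it needs only NumPy; the function is our illustration, not a workflow node:

```python
import numpy as np

def erode(mask, iterations=1):
    """Shrink a binary mask by one pixel per iteration: a pixel
    survives only if it and all 4 neighbours are inside the mask."""
    m = mask.astype(bool)
    for _ in range(iterations):
        up    = np.pad(m, ((1, 0), (0, 0)))[:-1, :]
        down  = np.pad(m, ((0, 1), (0, 0)))[1:, :]
        left  = np.pad(m, ((0, 0), (1, 0)))[:, :-1]
        right = np.pad(m, ((0, 0), (0, 1)))[:, 1:]
        m = m & up & down & left & right
    return m

mask = np.ones((5, 5), dtype=bool)
print(erode(mask).sum())  # 5x5 solid block shrinks to a 3x3 core -> 9
```

One or two iterations is usually enough to pull a loose matte off the background without eating into the subject.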

SAM3DSLATGen (#35)
Converts the sparse structure into a SLAT representation that is compact yet geometry-aware. A cleaner SLAT typically follows from a precise mask and good depth. If you plan to rely on mesh output over Gaussian, favor settings that preserve detail rather than extreme sparsity. The emitted SLAT path feeds both decoders.

SAM3DMeshDecode (#45)
Decodes SLAT into a watertight 3D mesh suitable for texturing and export. Choose mesh when you need topology that works in DCC tools and game engines. If you see over-smoothing or holes, revisit the mask and sparse structure density upstream. This path produces a GLB that will be baked and optionally pose-aligned later.
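A quick sanity check on "watertight" after export: in a closed triangle mesh every edge is shared by exactly two faces, and for genus 0 the Euler characteristic V − E + F is 2. The pure-Python check below is our illustration (you would read V and the faces out of the exported GLB):

```python
from collections import Counter

def watertight_check(num_vertices, faces):
    """Count undirected edges of a triangle mesh; report whether
    every edge is used by exactly two faces, plus V - E + F."""
    edges = Counter()
    for a, b, c in faces:
        for e in ((a, b), (b, c), (c, a)):
            edges[tuple(sorted(e))] += 1
    closed = all(n == 2 for n in edges.values())
    euler = num_vertices - len(edges) + len(faces)
    return closed, euler

# A triangulated cube: 8 vertices, 12 triangles, 18 edges.
cube_faces = [
    (0, 1, 2), (0, 2, 3), (4, 6, 5), (4, 7, 6),
    (0, 4, 5), (0, 5, 1), (1, 5, 6), (1, 6, 2),
    (2, 6, 7), (2, 7, 3), (3, 7, 4), (3, 4, 0),
]
print(watertight_check(8, cube_faces))  # (True, 2)
```

Holes from a weak mask show up here as edges with a count of one, which is a faster signal than eyeballing the mesh in a DCC tool.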

Notes

SAM 3D ComfyUI Workflow | Object Motion & Body Animation — see RunComfy page for the latest node requirements.