
ZiT Studio

Updated: Jan 28, 2026

Type: Workflows (tool)
Stats: 1,041
Reviews: 0
Published: Dec 31, 2025
Base Model: ZImageTurbo
Hash: AutoV2 7F3E9B5631

EDIT: I thought it would show up in the ComfyUI Manager, but I guess I was wrong. To get the capitanZiT scheduler, do the following:

Open a command prompt in your custom_nodes folder and run:

git clone https://github.com/capitan01R/ComfyUI-CapitanZiT-Scheduler.git
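For reference, the full install sequence might look like the sketch below. It assumes git is on your PATH and that any Python dependencies get installed into the same environment that runs ComfyUI; the requirements.txt step is an assumption (only run it if the repo actually ships one).

```shell
# Run from inside ComfyUI/custom_nodes (assumes git is on your PATH).
git clone https://github.com/capitan01R/ComfyUI-CapitanZiT-Scheduler.git

# If the repo ships a requirements.txt, install its dependencies into the
# same Python environment that runs ComfyUI (assumption -- check the repo):
# pip install -r ComfyUI-CapitanZiT-Scheduler/requirements.txt

# Restart ComfyUI afterwards so the new scheduler nodes are registered.
```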

My ultimate goal with this workflow is to make something versatile but not overly complex, to maximize the quality of my outputs, and to address some of the model's technical limitations by implementing things discovered by the r/StableDiffusion and r/ComfyUI communities.

Features:

  • Generate images

  • Inpaint (Using Alibaba-PAI's ControlnetUnion-2.1)

  • Easily switch between creating new images, inpainting, and image2image in a way meant to be similar to A1111/Forge

  • Latent Upscale

  • Tile Upscale (Using Alibaba-PAI's Tile Controlnet)

  • Upscale using SeedVR2

  • Use of NAG (Normalized Attention Guidance) for the ability to use negative prompts

  • Res4Lyf sampler + scheduler for best results

  • SeedVariance nodes to increase variety between seeds

  • Use multiple LoRAs with ModelMergeSimple nodes to prevent breaking Z Image

  • Generate image, inpaint, and upscale methods are all separated by groups and can be toggled on/off individually

  • (Optional) LLM Prompt Enhancer using Qwen3-4b-Thinking-2507

  • (Optional) Optimizations using Triton and SageAttention

Notes:

  • Features labeled (Optional) are turned off by default.

  • You will need the UltraFlux-VAE, which can be downloaded here.

  • For NAG, you will need to clone from this repository, since it adds support for Z-Image, which the version in the ComfyUI Manager (ChenDarYen's) doesn't have yet: https://github.com/scottmudge/ComfyUI-NAG

    • EDIT: Inpainting currently doesn't work with this repo; however, I submitted a pull request that fixes it. You can either clone my repo, or overwrite the samplers.py in your existing NAG install with the one from my repo: https://github.com/pxllvr/ComfyUI-NAG

    • Open a command prompt in your custom_nodes folder and "git clone" from either scottmudge's repo or mine. If anything changes, I will update this note.
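The two options above might look like the sketch below. The raw-file URL and the branch name main are assumptions, so check the repo for its actual default branch before using option B.

```shell
# Run from inside ComfyUI/custom_nodes.

# Option A: clone pxllvr's fork directly (includes the inpainting fix).
git clone https://github.com/pxllvr/ComfyUI-NAG.git

# Option B: if you already cloned scottmudge's repo, overwrite just
# samplers.py with the fixed version (raw URL and branch are assumptions):
# curl -L -o ComfyUI-NAG/samplers.py \
#   https://raw.githubusercontent.com/pxllvr/ComfyUI-NAG/main/samplers.py
```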

  • I recommend using tiled upscale only if you have already done a latent upscale on your image and want to bring out new details. If you want a faithful 4K upscale, use SeedVR2.

    • Tiled upscale takes much longer than latent upscale (10-15 minutes to 4k on my 3090).

  • For some reason, depending on the aspect ratio, latent upscale will leave weird artifacts towards the bottom of the image. Possible workarounds are lowering the denoise or trying tiled upscale.

    • EDIT: I've looked into this, and it seems to be a hard limitation of the model itself when upscaling to 2048px. I've tried using DyPE, but wasn't satisfied with the results.

Any and all feedback, along with images posted using this workflow, is greatly appreciated!