Updated: Dec 31, 2025
This is my personal workflow that I've been working on and improving pretty much every day since Z-Image Turbo was released nearly a month ago. I'm finally at the point where I feel comfortable sharing it!
My ultimate goal with this workflow is to make something versatile but not too complex, maximize the quality of my outputs, and address some of the technical limitations by implementing things discovered by the r/StableDiffusion and r/ComfyUI communities.
Features:
Generate images
Inpaint (Using Alibaba-PAI's ControlnetUnion-2.1)
Easily switch between creating new images and inpainting, in a way meant to feel similar to A1111/Forge
Latent Upscale
Tile Upscale (Using Alibaba-PAI's Tile Controlnet)
Upscale using SeedVR2
Use of NAG (Normalized Attention Guidance) to enable negative prompts
Res4Lyf sampler + scheduler for best results
SeedVariance nodes to increase variety between seeds
Use multiple LoRAs with ModelMergeSimple nodes to prevent breaking Z-Image (see the sketch after this list)
Image generation, inpainting, and upscaling are separated into groups that can be toggled on/off individually
(Optional) LMStudio LLM Prompt Enhancer
(Optional) Optimizations using Triton and Sageattention
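On the LoRA point above: instead of stacking several LoRAs at full strength (which tends to break Z-Image), the workflow blends a LoRA-patched copy of the model back into the untouched base with ModelMergeSimple. As a rough illustration of the idea only, and not ComfyUI's actual node code, the sketch below blends two PyTorch state dicts at a fixed ratio; the function and variable names are mine.

```python
import torch

def blend_state_dicts(base_sd: dict[str, torch.Tensor],
                      lora_patched_sd: dict[str, torch.Tensor],
                      ratio: float = 0.3) -> dict[str, torch.Tensor]:
    """Illustrative sketch of a ModelMergeSimple-style weight blend.
    ratio is the influence of the LoRA-patched weights (0.0 = pure base model)."""
    merged = {}
    for name, base_w in base_sd.items():
        patched_w = lora_patched_sd.get(name)
        if patched_w is not None and patched_w.shape == base_w.shape:
            merged[name] = (1.0 - ratio) * base_w + ratio * patched_w
        else:
            # Keep weights the LoRA didn't touch (or that don't line up).
            merged[name] = base_w
    return merged

# Hypothetical usage: tone a LoRA down to ~30% influence before sampling.
# merged_sd = blend_state_dicts(base.state_dict(), lora_model.state_dict(), ratio=0.3)
```

In the workflow itself this happens purely in the node graph (no Python): the LoRA-patched model and the untouched base both feed into ModelMergeSimple, with the ratio acting roughly like a gentler LoRA strength.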
Notes:
Features labeled (Optional) are turned off by default.
You will need the UltraFlux-VAE, which can be downloaded here.
Some of the people who tested this workflow reported that NAG failed to import. If that happens to you, try cloning it into your custom_nodes folder from this repository: https://github.com/scottmudge/ComfyUI-NAG
I recommend the tiled upscale if you've already done a latent upscale and want to bring out new details. If you want a faithful 4K upscale, use SeedVR2.
For some reason, depending on the aspect ratio, the latent upscale will leave weird artifacts toward the bottom of the image. Possible workarounds are lowering the denoise or trying the tiled upscale instead.
Any and all feedback is appreciated. Happy New Year! 🎉
