thatguyjames_uk MultiGPU T2I workflow. Wildcard and upscaler (ZIT and Qwen)

Updated: Feb 16, 2026

Tags: character, comfyui, workflow

Type: Workflows
Stats: 147
Published: Feb 15, 2026
Base Model: ZImageTurbo
Hash (AutoV2): A0FB7BD823
This ComfyUI workflow is designed for efficient text-to-image (T2I) generation with the Z-Image Turbo model, with built-in support for multi-GPU setups (optimized for two GPUs via dedicated loaders for CLIP and VAE). It leverages wildcard prompts for dynamic, varied outputs, LoRA integration for style enhancements, a two-stage sampling process (quick initial generation followed by latent refinement), and a post-processing upscale for higher-quality results. Optional image-to-image (I2I) support is included via an image loader and VAE encode node: simply connect them to the second KSampler to refine existing images.
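
The MultiGPU loader nodes handle the device split through their node settings, but for intuition, here is a minimal PyTorch sketch of the same idea. The modules are dummy stand-ins (not the real models), and the assignment mirrors how this workflow spreads components across cards:

```python
import torch
import torch.nn as nn

# Dummy stand-ins for the three components the workflow loads; in ComfyUI,
# CLIPLoaderMultiGPU / VAELoaderMultiGPU pick each component's device per node.
unet = nn.Linear(64, 64)  # stand-in for the Z-Image Turbo UNET
clip = nn.Linear(64, 64)  # stand-in for the text encoder
vae = nn.Linear(64, 64)   # stand-in for the VAE

# Keep the heavy denoising model on GPU 0 and push the encoder/decoder to
# GPU 1, so sampling never competes with text encoding or decoding for VRAM.
dev0 = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
dev1 = torch.device("cuda:1") if torch.cuda.device_count() > 1 else dev0

unet.to(dev0)
clip.to(dev1)
vae.to(dev1)

# Tensors follow their module's device; outputs cross GPUs at handoff points.
cond = clip(torch.randn(1, 64, device=dev1)).to(dev0)  # encode on GPU 1
latent = unet(cond)                                    # denoise on GPU 0
image = vae(latent.to(dev1))                           # decode on GPU 1
```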

Key Features:

  • Multi-GPU Optimization: Uses CLIPLoaderMultiGPU and VAELoaderMultiGPU to distribute workloads across devices (e.g., CUDA:0 and CUDA:1), speeding up encoding/decoding and inference on systems with 2+ GPUs.

  • Text-to-Image Generation: Starts with EmptySD3LatentImage for creating latents from scratch. Supports high-res outputs (e.g., 1080x1080 up to 1920x1088) with notes on Instagram-friendly sizes.

  • Wildcard Prompts & LoRAs: ImpactWildcardEncode handles dynamic prompts with wildcards (e.g., for randomized elements) and easy LoRA addition; an example prompt follows this list. Powered by Power Lora Loader (rgthree) for stacking LoRAs; start strengths at 0.5 and adjust up to 0.8.

  • Two-Stage Sampling (the refinement idea is sketched after this list):

    • First KSampler: Fast generation (5-9 steps, CFG 1-3) using combos like DPM++ SDE + Beta or Euler Ancestral + Beta.

    • Second KSampler: Low-denoise refinement (e.g., 0.22 denoise) on the latent output for polished results without full regeneration.

  • Image-to-Image Option: Load an image via LoadImage, encode it to latent with VAEEncode, and feed it into the second sampler for refinement/upscaling. (Not connected by default; enable it by editing the workflow.)

  • Upscaling: Applies ImageUpscaleWithModel (e.g., RealESRGAN x2) to the final image for crisp, high-res outputs.

  • Utilities:

    • FancyTimerNode: Tracks and displays total execution time.

    • PlaySound: Plays a notification sound (e.g., "finished.mp3") when the workflow completes.

    • SaveText: Exports populated prompts to a text file (e.g., for batch processing or archiving).

    • Built-in notes for optimal settings (resolutions, CFG tweaks, sampler combos).

  • Output Handling: Saves images to a dated folder (e.g., Multigpu\dd-MM-yyyy\...) and supports batch generation.
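
To make the wildcard syntax concrete, here is the kind of prompt ImpactWildcardEncode expands: `{a|b|c}` picks one option at random each run, and `__name__` pulls a random line from `name.txt` in your wildcards folder. The wildcard file names and LoRA name below are made-up examples; the inline `<lora:name:strength>` tag is one way Impact Pack can pull in a LoRA, alongside stacking via Power Lora Loader:

```
cinematic portrait of a {young|middle-aged|elderly} woman, __hairstyles__,
wearing __clothing__, {studio lighting|golden hour|neon rim light},
<lora:exampleStyleLora:0.5>
```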

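For readers new to low-denoise refinement (the second KSampler above): rather than regenerating from scratch, the sampler re-noises the stage-1 latent partway back up the schedule and runs only that tail, so composition is preserved while fine detail is re-rendered. A rough, self-contained sketch of the idea, with a toy denoiser standing in for the real sampler and model:

```python
import torch

def denoise_step(latent: torch.Tensor, sigma: float) -> torch.Tensor:
    """Toy stand-in for one sampler step; a real workflow calls the UNET here."""
    return latent * (1.0 - 0.1 * sigma)

def ksampler(latent: torch.Tensor, sigmas) -> torch.Tensor:
    for s in sigmas:
        latent = denoise_step(latent, float(s))
    return latent

# Stage 1: fast full generation, e.g. 8 steps from pure noise (denoise = 1.0).
full_schedule = torch.linspace(1.0, 0.0, 9)[:-1]
stage1 = ksampler(torch.randn(1, 4, 128, 128), full_schedule)

# Stage 2: denoise = 0.22 means "re-noise to ~22% of the schedule and run
# only that tail"; detail gets refined while overall composition is kept.
denoise = 0.22
tail = full_schedule[int(len(full_schedule) * (1 - denoise)):]
noised = stage1 + torch.randn_like(stage1) * float(tail[0])
stage2 = ksampler(noised, tail)
```
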
Requirements:

  • ComfyUI with extensions: ComfyUI-Impact-Pack, rgthree-comfy, ComfyUI-MultiGPU, pysssss (for sound/text nodes), CRT-Nodes (for timer).

  • Models: Z-Image Turbo BF16 (UNET), Qwen3-4B (CLIP text encoder), AE (VAE), RealESRGAN_x2plus (upscaler).

  • Tested on Python 3.12+ with CUDA support.
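
Before running on a dual-GPU rig, a quick sanity check that PyTorch sees both cards (and which index is which) can save debugging time:

```python
import torch

# The MultiGPU loaders target devices by index (cuda:0, cuda:1), so both
# cards must be visible here for the workflow's device assignments to work.
print("torch", torch.__version__, "| CUDA", torch.version.cuda)
for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    print(f"cuda:{i} -> {props.name}, {props.total_memory / 2**30:.1f} GiB")
```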

This workflow is ideal for users with multi-GPU rigs who want fast, iterative T2I creation with refinement. Splitting VRAM load across cards helps avoid OOM errors, and extras like the timer and sound nodes make for a streamlined experience. For I2I, tweak the connections as needed. Feedback welcome; happy generating! 🚀