Combined workflow (ComfyUI txt2img, Wildcards, Ollama, Pony, SDXL, Illustrious, Flux, Qwen, Z-Image Turbo)

Updated: Jan 31, 2026

Type: Workflows

Stats: 71; Reviews: 0

Published: Jan 31, 2026

Base Model: Other

Hash: AutoV2 7E13F25898

Creator: geekier
"Combined Workflow" v6 (20260131)

This "Combined Workflow" (over 900KB and many nodes) is a ComfyUI txt2img workflow that performs SDXL, Pony, Illustrious, Flux1D, Qwen and ZImageTurbo generations with an optional prompt extension using Ollama and Wildcards processing.

It generates an upscaled 16MP image as the final result, staying as close as possible to the original generation, and produces CivitAI-compatible metadata for each stage of the image generation.
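As a rough sanity check on those targets, a 4MP intermediate and a 16MP final correspond to about 2x and 4x linear upscales of a ~1MP base. The 1024x1024 base used below is an assumption for illustration; the workflow's actual sizes come from its resolution selector.

```python
# Linear scale factor needed to reach a megapixel target from a base size.
def scale_for(target_mp, base_w, base_h):
    """Return the linear (per-axis) upscale factor to hit target_mp megapixels."""
    base_mp = base_w * base_h / 1_000_000
    return (target_mp / base_mp) ** 0.5

w, h = 1024, 1024                      # ~1MP starting point (assumed)
print(round(scale_for(4, w, h), 2))    # HiResFix stage: ~2x linear -> 1.95
print(round(scale_for(16, w, h), 2))   # SeedVR2 stage: ~4x linear -> 3.91
```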

- Stage 1: Generate the regular image, pass it to a selector (can be bypassed for batch generation)

- Stage 2: Upscale to 4MP using HiResFix

- Stage 3: Use Ultimate SD Upscaler (No Upscale) to refine the components of the 4MP image, using either the original model and LoRAs' specific characteristics or alternate models for speed and lower VRAM usage. Face and Eyes Detailers are then applied to the resulting image.

- Stage 4: That result is sent to SeedVR2 to generate the final 16MP image, and a color-matching step is performed to bring it as close as possible to the initial upscaled image.
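The color-matching idea in Stage 4 can be sketched as a simple per-channel mean/std transfer. This is only an illustration of the technique; the workflow uses a dedicated ComfyUI node, and this is not its actual implementation.

```python
import numpy as np

def match_color(img, ref):
    """Shift and scale each channel of img to match ref's mean and std."""
    img = img.astype(np.float64)
    ref = ref.astype(np.float64)
    out = np.empty_like(img)
    for c in range(img.shape[-1]):
        i_mu, i_sd = img[..., c].mean(), img[..., c].std() + 1e-8
        r_mu, r_sd = ref[..., c].mean(), ref[..., c].std()
        # Normalize the channel, then re-express it in ref's statistics.
        out[..., c] = (img[..., c] - i_mu) / i_sd * r_sd + r_mu
    return np.clip(out, 0, 255).round().astype(np.uint8)
```

A real color-match node typically works in a perceptual color space and may match full histograms rather than just the first two moments, but the principle is the same.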

A few notes on requirements:

1. The workflow loads all base models from the Load Checkpoint with name node, because the model_name field must be available to save CivitAI-compatible metadata. One way to enable this is to create an extra_model_paths.yaml file for ComfyUI. Details on a similar process can be found at https://github.com/mmartial/ComfyUI-Nvidia-Docker/wiki/Stability-Matrix-integration; the process is the same with a different target (adapt /ComfyUI_models_folder and Path_to to match your setup):

comfy_extend:
  base_path: /ComfyUI_models_folder
  checkpoints: |
    diffusion_models

Make sure to add --extra-model-paths-config=Path_to/extra_model_paths.yaml to your ComfyUI command line arguments.

2. Detailers rely on Ultralytics models. Manual configuration is needed, as detailed at https://github.com/ltdrdata/ComfyUI-Impact-Subpack
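A minimal setup sketch for that second requirement, assuming the usual Impact-Subpack layout of detector weights under models/ultralytics/bbox and models/ultralytics/segm (verify the paths and model file names against the Impact-Subpack README; MODELS_ROOT is a placeholder for your installation):

```python
import os

# Assumed ComfyUI model layout for Ultralytics detectors; adjust MODELS_ROOT.
MODELS_ROOT = "ComfyUI/models"
for sub in ("ultralytics/bbox", "ultralytics/segm"):
    path = os.path.join(MODELS_ROOT, sub)
    os.makedirs(path, exist_ok=True)
    print("ready:", path)
# Then place detector weights there, e.g. a face detector (.pt) in bbox/
# and a segmentation detector (.pt) in segm/.
```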


The workflow contains a READ ME FIRST section that details some information about how it came to be, what it does and how to use it. Please refer to it for more details.

FYSA: list of custom nodes used:

 "cg-image-filter",
 "comfy-core",
 "comfy-image-saver",
 "ComfyLiterals",
 "ComfyMath",
 "ComfyUI_ADV_CLIP_emb",
 "ComfyUI_Comfyroll_CustomNodes",
 "comfyui_resolutionselectorplus",
 "comfyui_ultimatesdupscale",
 "comfyui-custom-scripts",
 "comfyui-easy-use",
 "comfyui-fbcnn",
 "comfyui-image-saver",
 "comfyui-impact-pack",
 "comfyui-impact-subpack",
 "comfyui-inspire-pack",
 "comfyui-kjnodes",
 "comfyui-lora-manager",
 "comfyui-ollama",
 "RES4LYF",
 "rgthree-comfy",
 "seedvr2_videoupscaler",

Note: the nightly version of [LoraManager](https://github.com/willmiao/ComfyUI-Lora-Manager) is currently required (once 0.9.14 is out, this note will be obsolete).

Previous releases:

  • v5 (20260124): Added "Advanced" Ollama prompt + Added a new "LoRA randomizer" group + Implemented SEGS for Detailers using "small" Face/Eyes/Hands selection logic

  • v4.1 (20260118): included setup requirements (diffusion models as checkpoints + Ultralytics required setup) in "READ ME FIRST" section + Changed to a common resolution selector

  • v4 (20260111): Addition of alternate samplers for Qwen and Z Image Turbo + removal of node failing to install on new Comfy installation + extended documentation: muted nodes-chain need to be manually selected

  • v3.1 (20251231): Hotfix for face/hand size

  • v3 (20251230): Additional detailers tweaks + alternative models for refiner/detailer steps

  • v2 (20251228): Trigger word selection + Detailers tweak + Usage clarifications

  • v1 (20251226): Initial release

Work-in-Progress release:

This workflow (Pony, SDXL, Illustrious only for now) is a work-in-progress combination, testing, and tweaking of various elements from other workflows to generate an upscaled 16MP image as the final result while staying as close as possible to the original generation, and to generate CivitAI metadata for each stage of the image generation.

- Stage 1: Generate the regular image, pass it to a selector (can be bypassed for batch generation)

- Stage 2: Upscale to 4MP using HiResFix

- Stage 3: Use Ultimate SD Upscaler (No Upscale) to refine the components of the 4MP image using the original model and LoRAs' specific characteristics. Face and Eyes Detailers are then applied to the resulting image.

- Stage 4: That result is sent to SeedVR2 to generate the final 16MP image.

There are many nodes involved in this workflow. Because of that I made use of multiple subgraphs to keep the workflow organized and easy to navigate.

Groups exist as an organizational structure for the entire process and follow the Stage numbers.

It "works for me" but it might not be the best way to do it. Feedback is welcome.

PS: Despite my best efforts, I still do not know how to get the "Nodes" used to show on each image's page on CivitAI -- if someone knows how, please let me know.

Older releases:

The workflows I use with my Wildcards (see my account for those).

Within the zip is a README.md that explains the various use cases: