[ComfyUI Workflow] All-in-one Text-to-Image Workflow (ControlNet, IP-Adapter, ADetailer, ELLA,...)
What is this workflow?
This is my personal ComfyUI workflow, built on what I've learned and used with SD1.5 and expanded to cover every use case I have. Even on a 4GB VRAM + 16GB RAM system with everything enabled, it still runs and produces strong results (if you're willing to wait a while).
This workflow includes so many things that it's best to explore it yourself. The following is a small subset of its supported features:
v-prediction support + RescaleCFG (disabled by default)
Controllable CLIP Skip
ELLA + Ollama prompt upscaling (SD1.5-exclusive feature)
Scalable ControlNet group (disabled by default)
Scalable IP-Adapter group (disabled by default)
Dynamic Thresholding (disabled by default)
2-pass txt2img
Perlin noise latent blend
Watermark removal using CLIPSeg + Lama cleaner
Scalable ADetailer group
Notifications and sounds
Preview chooser for batch images
Full Civitai metadata support
Prompt and LoRA scheduling
Multi-checkpoint setup (still in testing)
Wildcard (and wildcard file) support
...
Although built first and foremost for SD1.5, this workflow can also easily use SDXL models or any all-in-one checkpoint supported by prompt-reader-node. In those cases, ELLA is completely useless, so do not enable it.
How to use this workflow?
Step 0: Get ComfyUI and ComfyUI-Manager
Step 1: Download the workflow file
Step 2: Import it into ComfyUI
Step 3: Install all missing custom nodes via ComfyUI-Manager
Step 4: Grab all missing models (more details below)
Step 5: Have fun generating!
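A small aside between Steps 2 and 3: the workflow file is plain JSON, so you can inspect which node types it uses before opening ComfyUI-Manager. A minimal sketch, assuming the file was saved under the hypothetical name all-in-one-workflow.json:

```python
# Minimal sketch (not part of the workflow itself): list the node types used
# by the downloaded workflow JSON, handy for cross-checking Step 3 before you
# open ComfyUI-Manager. Assumes the UI export format, where each entry in
# "nodes" carries its class name in "type".
import json
from collections import Counter

with open("all-in-one-workflow.json") as f:  # whatever you named the file in Step 1
    graph = json.load(f)

types = Counter(node["type"] for node in graph["nodes"])
for name, count in sorted(types.items()):
    print(f"{count:3d}  {name}")
```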
Model requirements
Basic
A checkpoint file (SD1.5, SDXL,...)
A VAE file (optional if your checkpoint has a baked-in VAE)
4x-AnimeSharp upscale model (or any other 4x upscale model of choice)
2x-AniScale2 upscale model (or any other upscale model, or combination of models, for the final upscale step)
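If you want to verify the basics are in place before generating, here is a minimal sketch that checks ComfyUI's standard model folders. Every file name below is an example, not a requirement; substitute your own checkpoint, VAE, and upscaler picks:

```python
# Sketch: check that the basic model files sit in ComfyUI's standard folders.
from pathlib import Path

MODELS = Path("ComfyUI/models")
required = [
    MODELS / "checkpoints" / "your_checkpoint.safetensors",  # any SD1.5/SDXL checkpoint
    MODELS / "vae" / "your_vae.safetensors",                 # skip if the VAE is baked in
    MODELS / "upscale_models" / "4x-AnimeSharp.pth",
    MODELS / "upscale_models" / "2x-AniScale2.pth",          # extension may differ
]
for f in required:
    print(("OK      " if f.exists() else "MISSING ") + str(f))
```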
ControlNet
All models are available in ComfyUI-Manager under Model Manager. If you're using PonyXL or IllustriousXL, you'll need to adapt your own ControlNet solution for it.
Depth ControlNet (using Depth-Anything preprocessor)
Lineart ControlNet (using AnyLine Lineart preprocessor)
OpenPose ControlNet (using DWPose preprocessor)
Anything further depends on which other ControlNet models you add
Other features
IP-Adapter: an IP-Adapter model and its matching CLIP Vision (CLIP-G) model. Both should be available under Model Manager
ELLA: See https://github.com/TencentQQGYLab/ComfyUI-ELLA?tab=readme-ov-file#orange_book-models
ADetailer/FaceDetailer: Most models should be available under Model Manager, except Anzhc's face YOLO, which can be acquired from https://huggingface.co/Anzhc/Anzhcs_YOLOs. Place the downloaded file inside ComfyUI's models/ultralytics/segm folder (a download sketch follows below)
Ollama: See https://ollama.com/download and follow their instructions. After installing Ollama on your system, pull https://ollama.com/huihui_ai/llama3.2-abliterate:3b (a quick-start sketch follows below)
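For the Anzhc face YOLO mentioned under ADetailer/FaceDetailer, a hedged download sketch using huggingface_hub (pip install huggingface_hub). The filename is hypothetical; browse the repo and substitute the model you actually want:

```python
# Sketch: download Anzhc's face YOLO into the folder that segmentation
# detection models are read from. The filename below is a hypothetical
# placeholder; check https://huggingface.co/Anzhc/Anzhcs_YOLOs for the real one.
from pathlib import Path
from huggingface_hub import hf_hub_download

dest = Path("ComfyUI/models/ultralytics/segm")
dest.mkdir(parents=True, exist_ok=True)

hf_hub_download(
    repo_id="Anzhc/Anzhcs_YOLOs",
    filename="face_seg_example.pt",  # hypothetical; use the actual repo filename
    local_dir=dest,
)
```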
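And a quick-start sketch for the Ollama side, using the official ollama Python client (pip install ollama) to pull the recommended model and run a one-off smoke test; the workflow itself reaches Ollama through its own nodes:

```python
# Sketch: pull the recommended model and verify the Ollama server responds.
import ollama

MODEL = "huihui_ai/llama3.2-abliterate:3b"

ollama.pull(MODEL)  # no-op if the model is already downloaded
reply = ollama.chat(
    model=MODEL,
    messages=[{"role": "user", "content": "Say hi in five words."}],
)
print(reply["message"]["content"])
```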