Type | Workflows
Stats | 2,886
Reviews | 76
Published | Mar 1, 2025
Base Model |
Hash | AutoV2 24A9BD8AF6
ALL simple workflow for FLUX
After several requests, here's a complete version of all my workflows, with a few additions.
What's included:
TXT to IMG
IMG to IMG
INPAINT
OUTPAINT
PuLID
ControlNet (OPENPOSE/HED/CANNY/DEPTH)
Upscaler
LoRA tester
IMG to TXT
All workflows come in versions for 1080p and 2K screens, using FP8 or GGUF models.
The easiest way to install all the files the workflows need is to use one of my installation scripts or my manager.
Put this folder in "ComfyUI\user\default\workflows"; after reloading ComfyUI, all the workflows will appear in the workflow list.
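As a shell sketch, the copy step looks like this (both paths are assumptions; adjust `COMFYUI_DIR` and the name of the unzipped workflow folder to match your setup):

```shell
#!/bin/sh
# Assumed locations -- adjust to your install.
COMFYUI_DIR="./ComfyUI"          # your ComfyUI root
WORKFLOWS_SRC="./flux-workflows" # hypothetical name of the unzipped folder

# Create the target workflows folder if it does not exist yet.
mkdir -p "$COMFYUI_DIR/user/default/workflows"

# Copy the workflows only if the source folder is present.
if [ -d "$WORKFLOWS_SRC" ]; then
  cp -r "$WORKFLOWS_SRC/." "$COMFYUI_DIR/user/default/workflows/"
fi
```

After the copy, reload ComfyUI so the new workflows show up.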
For manual installation:
Flux.1
Kijai/flux-fp8 at main (huggingface.co)
"flux1-dev-fp8" in ComfyUI\models\unet
black-forest-labs/FLUX.1-dev at main (huggingface.co)
"ae" in \ComfyUI\models\vae
comfyanonymous/flux_text_encoders at main (huggingface.co)
"t5xxl_fp8_e4m3fn" in \ComfyUI\models\clip
"clip_l" in \ComfyUI\models\clip
For GGUF, I recommend:
24 GB VRAM: Q8_0 + T5_Q8 or FP8
16 GB VRAM: Q5_K_S + T5_Q5_K_M or T5_Q3_K_L
<12 GB VRAM: Q4_K_S + T5_Q3_K_L
GGUF_Model
city96/FLUX.1-dev-gguf at main (huggingface.co)
"flux1-dev-Q8_0.gguf" in ComfyUI\models\unet
GGUF_clip
city96/t5-v1_1-xxl-encoder-gguf at main (huggingface.co)
"t5-v1_1-xxl-encoder-Q8_0.gguf" in \ComfyUI\models\clip
Better FLUX text encoder
ViT-L-14-TEXT-detail-improved-hiT-GmP-TE-only-HF.safetensors · zer0int/CLIP-GmP-ViT-L-14 at main (huggingface.co)
"ViT-L-14-GmP-ft-TE-only-HF-format.safetensors" in \ComfyUI\models\clip
Upscaler
ESRGAN/4x_NMKD-Siax_200k.pth · uwg/upscaler at main (huggingface.co)
"4x_NMKD-Siax_200k.pth" in \ComfyUI\models\upscale_models
ControlNet
Canny: flux-canny-controlnet-v3.safetensors
Depth: flux-depth-controlnet-v3.safetensors
Hed: flux-hed-controlnet-v3.safetensors
https://huggingface.co/XLabs-AI/flux-controlnet-hed-v3/blob/main/flux-hed-controlnet-v3.safetensors
in ComfyUI\models\xlabs\controlnets
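The three ControlNets can be fetched in one loop; this sketch assumes the Canny and Depth files live in repos following the same naming pattern as the HED link above (note it uses `resolve` instead of `blob` for direct downloads):

```shell
# Create the XLabs controlnet folder if needed.
mkdir -p ComfyUI/models/xlabs/controlnets

# Assumed repo pattern: XLabs-AI/flux-controlnet-<type>-v3
for type in canny depth hed; do
  wget -P ComfyUI/models/xlabs/controlnets \
    "https://huggingface.co/XLabs-AI/flux-controlnet-${type}-v3/resolve/main/flux-${type}-controlnet-v3.safetensors"
done
```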