Stats | 1,540 |
Reviews | 177 |
Published | Apr 22, 2025 |
Hash | AutoV2 393FC2A298 |
FLUX.1-dev-ControlNet-Union-Pro-2.0-fp8
A Good Reference for Parameters
Canny: controlnet_conditioning_scale=0.7, control_guidance_end=0.8
Depth: use depth-anything, controlnet_conditioning_scale=0.8, control_guidance_end=0.8
Pose: use DWPose, controlnet_conditioning_scale=0.9, control_guidance_end=0.65
Gray: use a grayscale-converted control image, controlnet_conditioning_scale=0.9, control_guidance_end=0.8
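For readers running these controls outside ComfyUI, the sketch below shows how the parameters above map onto the diffusers FluxControlNetPipeline. It is a minimal illustration, not the exact pipeline behind this release: the repo ids, the precomputed Canny image, and the step/guidance settings are assumptions you should adapt to your setup.

```python
import torch
from diffusers import FluxControlNetModel, FluxControlNetPipeline
from diffusers.utils import load_image

# Assumed repo ids -- swap in the checkpoints you actually use.
controlnet = FluxControlNetModel.from_pretrained(
    "Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro-2.0", torch_dtype=torch.bfloat16
)
pipe = FluxControlNetPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", controlnet=controlnet, torch_dtype=torch.bfloat16
).to("cuda")

# A precomputed Canny edge map of your reference image (hypothetical file name).
control_image = load_image("reference_canny.png")

image = pipe(
    prompt="a futuristic city street at dusk, cinematic lighting",
    control_image=control_image,
    controlnet_conditioning_scale=0.7,   # Canny setting from the table above
    control_guidance_end=0.8,            # stop applying ControlNet at 80% of the steps
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("output.png")
```

In ComfyUI, these two parameters correspond roughly to the strength and end_percent inputs of the Apply ControlNet node.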
Folder Structure
Organize your models as follows for FLUX dev and ControlNet workflows:
📁 ComfyUI/
├── 📁 models/
│   ├── 📁 diffusion_models/
│   │   └── 📄 flux-dev.safetensors  # (or gguf)
│   ├── 📁 text_encoders/
│   │   ├── 📄 clip_l.safetensors
│   │   └── 📄 t5xxl_fp8_e4m3fn.safetensors  # (or t5xxl_fp16 or t5xxl_fp8_e4m3fn_scaled)
│   ├── 📁 vae/
│   │   └── 📄 ae.safetensors
│   └── 📁 controlnet/
│       └── 📄 FLUX.1-dev-ControlNet-Union-Pro-2.0-fp8.safetensors
Note: Only one T5XXL text encoder is needed; choose based on your hardware and quality/speed needs.
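If you prefer to script the downloads, a small huggingface_hub sketch like the one below can drop files into the folders above. The repo ids are assumptions (FLUX.1-dev is gated, so you need to accept its license and be logged in); point them at whichever sources you actually use.

```python
from huggingface_hub import hf_hub_download

# Repo ids below are assumptions -- substitute your preferred sources or mirrors.
hf_hub_download(
    repo_id="comfyanonymous/flux_text_encoders",
    filename="clip_l.safetensors",
    local_dir="ComfyUI/models/text_encoders",
)
hf_hub_download(
    repo_id="comfyanonymous/flux_text_encoders",
    filename="t5xxl_fp8_e4m3fn.safetensors",
    local_dir="ComfyUI/models/text_encoders",
)
hf_hub_download(
    repo_id="black-forest-labs/FLUX.1-dev",   # gated: accept the license and log in first
    filename="ae.safetensors",
    local_dir="ComfyUI/models/vae",
)
```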
My FP8 Quantization Solution
With modest coding experience, I researched quantization and implemented FP8 compression for the model. The quantized version works perfectly for my needs, enabling all ControlNet workflows with much lower memory requirements and no noticeable quality loss.
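For readers curious what FP8 compression of a safetensors checkpoint looks like in practice, here is a minimal sketch. It is not the exact script behind this release; the file names and the policy of casting only 2-D+ floating-point tensors (leaving biases and norm weights untouched) are assumptions.

```python
import torch
from safetensors.torch import load_file, save_file

src = "FLUX.1-dev-ControlNet-Union-Pro-2.0.safetensors"      # original bf16/fp16 weights (hypothetical path)
dst = "FLUX.1-dev-ControlNet-Union-Pro-2.0-fp8.safetensors"  # fp8 output

state = load_file(src)
out = {}
for name, tensor in state.items():
    # Cast large floating-point matrices to FP8 (e4m3); keep small/sensitive tensors as-is.
    if tensor.is_floating_point() and tensor.ndim >= 2:
        out[name] = tensor.to(torch.float8_e4m3fn)
    else:
        out[name] = tensor
save_file(out, dst)
```

ComfyUI can load float8_e4m3fn safetensors weights directly, which is what makes this kind of drop-in compression practical.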
Using The Quantized Model
It supports all of the original control types: pose, depth, canny edge, etc.
Drop in any reference image, select a control type, and generate results with lower memory usage.
Enhanced Prompting with OllamaGemini
I use my custom OllamaGemini node for ComfyUI to generate optimal prompts. This, combined with the quantized model, creates a powerful, memory-efficient pipeline for creative image manipulation.
Alternatives for High-End Hardware
If you have a powerful GPU, the original unquantized model from Shakker-Labs offers higher fidelity at the cost of increased memory usage.
Looking Forward
I welcome community feedback! If you find these workflows helpful, please show your support with a ⭐ on the project. I'm open to opportunities and appreciate encouragement as I develop these resources.
Feel free to experiment with the model for your creative projects, whether using the memory-efficient quantized version or the original full-precision implementation!
👨‍💻 Developer Information
This guide was created by Abdallah Al-Swaiti.
For additional tools and updates, check out my other repositories.