Published | Aug 19, 2024
Hash | AutoV2 F27EEC723F
I chose the best from each kind:
the best of the small models: Q2_K
the middle ground: Q4_K_M
the closest to the original model: Q8
It's up to you which one to pick (a rough VRAM-based picker is sketched below).
I will be happy to make any requested quantization of this merged version.
DONE
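If you are not sure which file fits your GPU, here is a minimal, hypothetical sketch of a VRAM-based picker. The file names and thresholds are placeholders only, not the actual release names or measured requirements.

```python
# Minimal sketch: suggest a quant of this merge based on total VRAM.
# File names and VRAM thresholds below are made up for illustration;
# use the real names from the download list and your own judgment.
import torch

QUANTS = [
    # (minimum VRAM in GiB, GGUF file)
    (16, "flux1-dev-schnell-merge-Q8_0.gguf"),   # closest to the original
    (10, "flux1-dev-schnell-merge-Q4_K_M.gguf"), # middle ground
    (0,  "flux1-dev-schnell-merge-Q2_K.gguf"),   # smallest
]

def pick_quant() -> str:
    """Return a suggested GGUF file name for the current GPU."""
    if not torch.cuda.is_available():
        return QUANTS[-1][1]  # no CUDA device visible: suggest the smallest
    vram_gib = torch.cuda.get_device_properties(0).total_memory / 1024**3
    for min_vram, filename in QUANTS:
        if vram_gib >= min_vram:
            return filename
    return QUANTS[-1][1]

if __name__ == "__main__":
    print("Suggested quant:", pick_quant())
```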
For optimal results, we recommend trying this advanced workflow:
https://civitai.com/models/658101/flux-advance
Basic workflow:
https://civitai.com/models/652981/gguf-workflow-simple
Just download it and install any missing nodes via the Manager.
For the T5 GGUF text encoder:
https://civitai.com/models/668417/t5gguf
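Before opening either workflow, it can help to confirm the files are where the GGUF loader nodes typically look (models/unet for the merged model, models/clip for the T5 encoder, following the usual ComfyUI-GGUF layout). A small sketch, assuming a default ComfyUI install path:

```python
# Sanity check: list the .gguf files in the folders the GGUF loader nodes
# usually read from. The ComfyUI root below is an assumption; point it at
# your own install.
from pathlib import Path

COMFY_ROOT = Path("~/ComfyUI").expanduser()

EXPECTED = {
    "merged Flux GGUF": COMFY_ROOT / "models" / "unet",
    "T5 GGUF encoder": COMFY_ROOT / "models" / "clip",
}

for label, folder in EXPECTED.items():
    ggufs = sorted(folder.glob("*.gguf")) if folder.is_dir() else []
    found = ", ".join(p.name for p in ggufs) if ggufs else "nothing found"
    print(f"{label}: {folder} -> {found}")
```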
Which of the GGUF quantizations is the best?
Key Features:
Merges the strengths of Flux1-dev and Flux1-schnell
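The page does not state the merge recipe, so purely as an illustration of what a checkpoint merge looks like, here is a generic weighted-average sketch; the 0.5 ratio and the file names are made up.

```python
# Generic linear merge of two checkpoints (illustrative only; not the
# recipe actually used for this model). Ratio and file names are assumptions.
import torch
from safetensors.torch import load_file, save_file

RATIO = 0.5  # assumed weighting toward flux1-dev

dev = load_file("flux1-dev.safetensors")
schnell = load_file("flux1-schnell.safetensors")

merged = {}
for name, w_dev in dev.items():
    w_schnell = schnell.get(name)
    if w_schnell is None or w_schnell.shape != w_dev.shape:
        merged[name] = w_dev  # keep dev weights where the keys don't line up
    else:
        merged[name] = (RATIO * w_dev.float()
                        + (1.0 - RATIO) * w_schnell.float()).to(w_dev.dtype)

save_file(merged, "flux1-dev-schnell-merge.safetensors")
```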
Big thanks to https://huggingface.co/city96, who started the GGUF journey.
If you hit this error when loading the GGUF loader:
"newbyteorder was removed from the ndarray class in NumPy 2.0."
downgrade NumPy:
pip install numpy==1.26.4
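To confirm whether the downgrade is needed, or that it took effect, run a quick check in the same Python environment that ComfyUI uses:

```python
# Check the NumPy version in the environment that runs the GGUF loader.
# ndarray.newbyteorder was removed in NumPy 2.0, which triggers the error above.
import numpy as np

major = int(np.__version__.split(".")[0])
if major >= 2:
    print(f"NumPy {np.__version__}: newbyteorder is gone; run 'pip install numpy==1.26.4'")
else:
    print(f"NumPy {np.__version__}: ndarray.newbyteorder is still available")
```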
Works on lower-end GPUs (tested on a 12GB GPU with the fp16 T5 encoder)
High-quality output comparable to more resource-intensive models