Published | Aug 19, 2024 |
Hash | AutoV2 F27EEC723F |
I sincerely apologize to everyone, but after deep reflection, I have decided to stop my activity on this platform after more than two years of engagement. This decision comes from my growing frustration with the lack of fairness, the preferential treatment of certain members over others, and, most painfully, the recent sense of discrimination against Arabs and Muslims. Additionally, the increasing presence of inappropriate content, coupled with the absence of an effective mechanism to regulate it, has made me deeply concerned about my reputation simply by being associated with this space.
Perhaps one day, I will return—if those in charge are changed and true fairness is restored.
I chose the best from each kind:
the best of the small models: Q2_K
the middle ground: Q4_K_M
the closest to the original model: Q8
It's up to you.
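If you want to confirm which quantization a downloaded file actually uses, here is a minimal sketch using the gguf Python package from the llama.cpp project (install it with pip install gguf; the file name below is just a placeholder, not the actual model file name):

```python
# Minimal sketch: inspect a GGUF file and report the quantization types it contains.
# The file path is a placeholder; point it at whichever .gguf file you downloaded.
from collections import Counter
from gguf import GGUFReader

reader = GGUFReader("flux1-merged-Q4_K_M.gguf")

# Count how many tensors use each quantization type (e.g. Q4_K, Q8_0, F16).
types = Counter(t.tensor_type.name for t in reader.tensors)
for name, count in types.most_common():
    print(f"{name}: {count} tensors")
```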
I will be happy to fulfill any quantization request for this merged version. (DONE)
For optimal results, we recommend trying this advanced workflow:
https://civitai.com/models/658101/flux-advance
Basic workflow:
https://civitai.com/models/652981/gguf-workflow-simple
Just download it and install the missing nodes from the ComfyUI Manager.
For the T5 GGUF:
https://civitai.com/models/668417/t5gguf
Which of the four GGUF quantizations is best?
Key Features:
Merges the strengths of Flux1-dev and Flux1-schnell
Big thanks to https://huggingface.co/city96, who started the GGUF journey.
If you hit this error while the GGUF loader is loading:
"newbyteorder was removed from the ndarray class in NumPy 2.0."
downgrade NumPy:
pip install numpy==1.26.4
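If you are not sure whether your environment is affected, a quick check of the installed NumPy version (a minimal sketch, not specific to this model) looks like this:

```python
# Minimal sketch: warn if the installed NumPy is 2.x, which breaks older GGUF loader code.
import numpy as np

major = int(np.__version__.split(".")[0])
if major >= 2:
    print(f"NumPy {np.__version__} detected; downgrade with: pip install numpy==1.26.4")
else:
    print(f"NumPy {np.__version__} should work with the GGUF loader.")
```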
Works on lower-end GPUs (tested on a 12GB GPU with the T5 encoder in fp16)
High-quality output comparable to more resource-intensive models
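To see how much VRAM your own card has before choosing a quantization, here is a minimal sketch assuming PyTorch with CUDA (it only reports memory; it does not state official requirements for this model):

```python
# Minimal sketch: report total VRAM so you can pick a quantization that fits your GPU.
import torch

if torch.cuda.is_available():
    total_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
    print(f"GPU 0: {total_gb:.1f} GB VRAM")
else:
    print("No CUDA GPU detected.")
```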