
GGUF_K: HyperFlux 8-Steps K_M Quants

Type: Checkpoint Merge
Published: Sep 3, 2024
Base Model: Flux.1 D
Hash (AutoV2): 252635D4F3
Verified: Diffusers
Tags: base model, hypersd, gguf
The FLUX.1 [dev] Model is licensed by Black Forest Labs, Inc. under the FLUX.1 [dev] Non-Commercial License. Copyright Black Forest Labs, Inc.
IN NO EVENT SHALL BLACK FOREST LABS, INC. BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH USE OF THIS MODEL.

Warning: Although these quants work perfectly with ComfyUI, I couldn't get them to work with Forge UI yet. Let me know if this changes. The original non-K quants, which are verified working with Forge UI, can be found HERE.

[Note: Unzip the download to get the GGUF file. Civit doesn't support the GGUF format natively, hence this workaround.]
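
If you'd rather script the unzip step, here is a minimal Python sketch; the archive name and the ComfyUI folder layout are illustrative assumptions, so adjust them to your actual download and install:

```python
# Minimal sketch: extract the downloaded archive and place the GGUF file
# where the ComfyUI-GGUF loader looks for UNet models.
import zipfile
from pathlib import Path

archive = Path("hyperflux-8step-Q4_K_M.gguf.zip")  # hypothetical download name
dest = Path("ComfyUI/models/unet")                 # typical ComfyUI-GGUF location
dest.mkdir(parents=True, exist_ok=True)

with zipfile.ZipFile(archive) as zf:
    for name in zf.namelist():
        if name.endswith(".gguf"):
            zf.extract(name, dest)
            print(f"extracted {name} -> {dest / name}")
```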

These are the K(_M) quants for HyperFlux 8-steps. The K quants are slightly more precise and performant than the non-K quants. HyperFlux is a merge of Flux.1 D with the 8-step Hyper-SD LoRA from ByteDance, turned into GGUF. As a result, you get an ultra-memory-efficient and fast DEV (CFG-sensitive) model that generates fully denoised images in just 8 steps while consuming ~6.2 GB of VRAM (for the Q4_0 quant).
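
For reference, a merge like this folds the LoRA's low-rank update into the base weights before quantization. A minimal PyTorch sketch of the idea; the tensor shapes, rank, and scale below are illustrative assumptions, not the exact recipe used for HyperFlux:

```python
import torch

def merge_lora(base: torch.Tensor, lora_down: torch.Tensor,
               lora_up: torch.Tensor, scale: float = 1.0) -> torch.Tensor:
    """Fold a low-rank LoRA update into a base weight: W' = W + scale * (up @ down)."""
    return base + scale * (lora_up @ lora_down)

# Illustrative shapes: one (out, in) weight matrix with a rank-16 LoRA pair.
W = torch.randn(3072, 3072)
down = torch.randn(16, 3072)   # lora_down / lora_A
up = torch.randn(3072, 16)     # lora_up / lora_B

W_merged = merge_lora(W, down, up, scale=0.125)
print(W_merged.shape)  # torch.Size([3072, 3072])
```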

It can be used in ComfyUI with this custom node, but I couldn't get these quants to work with Forge UI. See https://github.com/lllyasviel/stable-diffusion-webui-forge/discussions/1050 for where to download the VAE, clip_l, and t5xxl models.
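
If you want to sanity-check a downloaded file, the gguf Python package (published alongside llama.cpp) can read the header and per-tensor quantization types. A small sketch, assuming a hypothetical filename:

```python
# pip install gguf
from gguf import GGUFReader

reader = GGUFReader("hyperflux-8step-Q4_K_M.gguf")  # hypothetical filename

# Print the first few tensors' names, shapes, and quantization types
# (you should see K-quant types like Q4_K here).
for t in reader.tensors[:10]:
    print(t.name, list(t.shape), t.tensor_type.name)
```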

Advantages Over FastFlux and Other Dev-Schnell Merges

  • Much better quality: noticeably better quality and expressiveness at 8 steps compared to Schnell-based models like FastFlux.

  • CFG/Guidance Sensitivity: Since this is a DEV model, unlike the hybrid models, you get full (distilled) CFG sensitivity, i.e., you can trade prompt adherence against creativity and softness against saturation (see the sketch after this list).

  • Dev LoRA compatibility: fully compatible with Dev LoRAs, beyond what Schnell models offer.

  • The only disadvantage: it needs 8 steps for best quality. But then, you'd probably run at least 8 steps with Schnell anyway for best results.
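
To illustrate the CFG sensitivity and the 8-step schedule outside ComfyUI: recent diffusers releases can load GGUF checkpoints directly. A hedged sketch, assuming a recent diffusers version with the gguf package installed; the local filename is a placeholder:

```python
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel, GGUFQuantizationConfig

# Placeholder filename; point this at the unzipped quant you downloaded.
transformer = FluxTransformer2DModel.from_single_file(
    "hyperflux-8step-Q4_K_M.gguf",
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",   # supplies the VAE, CLIP-L, and T5-XXL
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # helps on low-VRAM cards

# 8 steps as advertised; guidance_scale is the (distilled) CFG knob this
# DEV merge stays sensitive to, unlike Schnell-style models.
image = pipe(
    "a watercolor fox in a misty forest",
    num_inference_steps=8,
    guidance_scale=3.5,
).images[0]
image.save("hyperflux_8step.png")
```

Dev LoRAs can then be attached on top with the usual pipe.load_lora_weights(...) call.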

Which model should I download?

[Current situation: using the updated ComfyUI GGUF node, I can run Q6_K on my 11 GB 1080 Ti.]

Download the largest one that fits in your VRAM. The additional inference cost is quite small as long as the model fits on the GPU. Size order is Q2 < Q3 < Q4 < Q5 < Q6. I wouldn't recommend Q2 or Q3 unless you absolutely cannot fit anything larger in memory.
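
As a rough guide, a quant's file size is about parameter count times bits-per-weight divided by eight. A back-of-the-envelope sketch; the ~12B parameter count for the Flux transformer and the bits-per-weight figures below are approximations, not measured values for these files:

```python
# Rough size estimate: params * bits_per_weight / 8, ignoring overhead
# such as the text encoders and the VAE. All figures are approximate.
PARAMS = 12e9  # ~12B parameters in the Flux transformer (approximate)

BITS_PER_WEIGHT = {   # typical llama.cpp-style effective bpw (approximate)
    "Q2_K": 2.6,
    "Q3_K_M": 3.9,
    "Q4_0": 4.5,
    "Q4_K_M": 4.8,
    "Q5_K_M": 5.7,
    "Q6_K": 6.6,
}

for quant, bpw in BITS_PER_WEIGHT.items():
    gib = PARAMS * bpw / 8 / 2**30
    print(f"{quant:7s} ~{gib:.1f} GiB")

# Q4_0 comes out near 6.3 GiB, consistent with the ~6.2 GB figure quoted above.
```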

All the license terms associated with Flux.1 Dev apply.

PS: Credit goes to ByteDance for the Hyper-SD Flux 8-steps LoRA, which can be found at https://huggingface.co/ByteDance/Hyper-SD/tree/main