
Flux.1-Heavy-17B (GGUF)

Type: Checkpoint Trained
Published: Jan 8, 2025
Updated: Jan 9, 2025
Base Model: Flux.1 D
Hash (AutoV2): 1AEB388908
The FLUX.1 [dev] Model is licensed by Black Forest Labs, Inc. under the FLUX.1 [dev] Non-Commercial License. Copyright Black Forest Labs, Inc.
IN NO EVENT SHALL BLACK FOREST LABS, INC. BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH USE OF THIS MODEL.

Flux.1-Heavy-17B GGUF Quantizations

🚀 Major Resource Optimization

Training

Original Requirements:

  • 40GB VRAM

  • 80GB System RAM

Now Accessible With:

  • As low as 8GB VRAM

  • 32GB System RAM

Description

These are memory-optimized GGUF quantizations of the original Flux.1-Heavy-17B model by city96, making it accessible to systems with far less VRAM. The original model is a 17B-parameter self-merge of the 12B Flux.1-dev model, notable as one of the first open-source 17B image models capable of generating coherent images.

Available Versions

Q6_K Version

  • VRAM Requirement: 16GB

  • Best balance of quality and performance

  • Recommended for users with RTX 3080/3090 or similar GPUs

Q5_K_M Version

  • VRAM Requirement: 12GB

  • Good quality with reduced memory footprint

  • Ideal for RTX 3060 Ti/3070/2080 Ti users

Q3_K_L Version

  • VRAM Requirement: 8GB

  • Most memory-efficient version

  • Enables running on mid-range GPUs like RTX 3060/2060 Super

Key Features

  • Maintains the core capabilities of the original Flux.1-Heavy-17B model

  • Optimized for different VRAM configurations

  • Enables broader hardware compatibility without requiring high-end GPUs

  • Smooth operation at specified VRAM levels

  • Dramatically reduced resource requirements compared to the original model

Installation

  1. Download the preferred quantization version

  2. Place the GGUF file in your models directory

  3. Update your configuration to point to the new model file
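A minimal sketch of steps 1-3 in Python, assuming a ComfyUI-style directory layout; the paths and file name below are placeholders, not part of this release:

```python
from pathlib import Path
import shutil

# Assumed ComfyUI-style layout; substitute your own models directory.
models_dir = Path.home() / "ComfyUI" / "models" / "unet"
models_dir.mkdir(parents=True, exist_ok=True)

# Hypothetical file name for a downloaded Q6_K quantization.
downloaded = Path.home() / "Downloads" / "Flux.1-Heavy-17B-Q6_K.gguf"
if downloaded.exists():
    # Step 2: place the GGUF file in the models directory.
    shutil.move(str(downloaded), models_dir / downloaded.name)
```

Step 3 depends on your frontend: typically you select the newly placed GGUF file by name in the model loader rather than editing a config file by hand.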

Credits

  • Original model: city96 (Flux.1-Heavy-17B)

  • Base architecture: Flux.1-dev (12B parameter model)

Notes

  • Performance may vary depending on your specific hardware configuration

  • Choose the quantization level based on your available VRAM and quality requirements

  • Lower quantization levels may show slight quality degradation compared to the original model