Type | SafeTensor |
Stats | 142 |
Reviews | (13) |
Published | Oct 29, 2024 |
Training | Steps: 100,000 Epochs: 100 |
Hash | AutoV2 B200A8A97D |
The FLUX.1 [dev] Model is licensed by Black Forest Labs, Inc. under the FLUX.1 [dev] Non-Commercial License. Copyright Black Forest Labs, Inc.
IN NO EVENT SHALL BLACK FOREST LABS, INC. BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH USE OF THIS MODEL.
Full checkpoint with improved TE; do not load an additional CLIP/TE.
FLUX.1 (Base UNET) + Google FLAN
NF4 is my recommended model for quality/speed balance.
This model took the 42 GB FP32 Google FLAN T5xxl, quantized it, and paired it with an improved CLIP-L for Flux. To my knowledge, no one else has posted or attempted this.

Quantized from FP32 T5xxl (42 GB, 11B parameters)

Base UNET, with no baked LoRAs or other changes
Full FP16 version is available.
The NF4 full checkpoint is ready to use in ComfyUI with the NF4 loader, or natively in Forge. (Forge has LoRA support, and ComfyUI is taking 10x longer than Forge per iteration; I prefer ComfyUI, but its NF4 support is garbage.)
The FP8 version is recommended for ComfyUI; just use the standard checkpoint loader. (NF4 is recommended for Forge, as it loses less in quantization.)
Again: do not load a separate VAE, CLIP, or TE; the quantized-from-FP32 versions are baked in.
Per the Apache 2.0 license, FLAN is attributed to Google.