
FLUX Dev/Schnell (Base UNET) + Google FLAN FP16/NF4-FP32/FP8

Updated: Nov 1, 2024
File: flux1.snf4fp32 (SafeTensor)
Type: Checkpoint Merge
Reviews: 142
Published: Oct 29, 2024
Base Model: Flux.1 D
Training: Steps: 100,000 / Epochs: 100
Hash (AutoV2): B200A8A97D
Felldude
The FLUX.1 [dev] Model is licensed by Black Forest Labs, Inc. under the FLUX.1 [dev] Non-Commercial License. Copyright Black Forest Labs, Inc.
IN NO EVENT SHALL BLACK FOREST LABS, INC. BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH USE OF THIS MODEL.

Full checkpoint with an improved TE; do not load an additional CLIP/TE.

FLUX.1 (Base UNET) + Google FLAN

NF4 is my recommended model for quality/speed balance.

This model took the 42 GB FP32 Google FLAN T5-XXL and quantized it, paired with an improved CLIP-L, for Flux. To my knowledge no one else has posted or attempted this.

  • Quantized from FP32 T5-XXL (42 GB, 11B parameters)

  • Base UNET, no baked-in LoRAs or other changes

  • Full FP16 version is available.

  • NF4 full checkpoint is ready to use in ComfyUI with an NF4 loader, or natively in Forge. (Forge has LoRA support, and ComfyUI is taking 10x longer than Forge per iteration; I prefer ComfyUI, but its NF4 support is garbage.)

  • FP8 version recommended for ComfyUI; just use the standard checkpoint loader. (NF4 is recommended for Forge, as it loses less in quantization.)

Again: do not load a separate VAE, CLIP, or TE; the FP32-quantized versions are baked in.
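For a rough sense of why the NF4 and FP8 variants matter, weight size scales with bits per parameter. A quick back-of-the-envelope sketch (illustrative only; real checkpoint files carry extra overhead such as quantization constants and layers kept in higher precision, which is why the actual FP32 file is ~42 GB rather than exactly 44 GB):

```python
PARAMS = 11e9  # ~11B parameters for T5-XXL, per the description above

def approx_size_gb(params: float, bits_per_param: float) -> float:
    """Approximate raw weight size in gigabytes (1 GB = 1e9 bytes)."""
    return params * bits_per_param / 8 / 1e9

# Precisions this checkpoint is offered in
for name, bits in [("FP32", 32), ("FP16", 16), ("FP8", 8), ("NF4", 4)]:
    print(f"{name}: ~{approx_size_gb(PARAMS, bits):.1f} GB")
```

NF4 cuts the text encoder's footprint roughly 8x versus FP32 (~5.5 GB vs ~44 GB of raw weights), which is what makes fitting the full quantized TE inside one checkpoint practical.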

Per the Apache 2.0 license, FLAN is attributed to Google.