Stats | 2,286 |
Reviews | 175 |
Published | Sep 15, 2024 |
Hash | AutoV2 46DACF0E3B |
This model is Flux dev fp8 converted to bnb NF4, merged with the VAE, the T5-XXL fp8 text encoder, and CLIP-L. All included!
For those who don't care about size, there is also a blend of the full dev fp16 with the VAE, the T5-XXL fp16 text encoder, and CLIP-L, converted to NF4. All included!
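For intuition about what the NF4 conversion does, here is a minimal sketch of 4-bit blockwise quantization in the spirit of bnb NF4: weights are scaled by the block's absmax and snapped to the nearest of 16 fixed levels, so each weight is stored as a 4-bit index plus one scale per block. The level table below is illustrative (evenly spaced); the real NF4 codebook uses the non-uniform levels defined in bitsandbytes.

```python
# Illustrative sketch of 4-bit absmax quantization, NOT the exact bnb NF4
# implementation: the codebook here is evenly spaced for simplicity.
import numpy as np

LEVELS = np.linspace(-1.0, 1.0, 16)  # placeholder 4-bit codebook

def quantize_block(w):
    """Quantize one block of weights to (4-bit indices, absmax scale)."""
    scale = max(np.abs(w).max(), 1e-12)
    # nearest codebook level for each normalized weight
    idx = np.abs(w[:, None] / scale - LEVELS[None, :]).argmin(axis=1)
    return idx.astype(np.uint8), scale

def dequantize_block(idx, scale):
    """Reconstruct approximate weights from indices and scale."""
    return LEVELS[idx] * scale

w = np.array([0.8, -0.3, 0.05, -0.9])
idx, scale = quantize_block(w)
w_hat = dequantize_block(idx, scale)
```

The reconstruction `w_hat` is close to `w` but not exact; that small, bounded error is the trade-off NF4 makes for roughly a 4x size reduction versus fp16.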
From many observations, fp8 is a bit more accurate at understanding the prompt, and after assembling and testing this build I came to the same conclusion.
This model is a bit more accurate than the familiar NF4 v2; I ran many generations with complex prompts to confirm this, and I am completely satisfied with how it understands me. With simple prompts I didn't notice any difference.
The model works well with LoRAs in Forge, which is what I create in. Don't forget to set Diffusion in Low Bits to Automatic (fp16 LoRA).
In Forge I choose the Euler or Flux Realistic sampler, Schedule type: Simple, CFG: 1, and 20-30 steps.
Generated outputs can be used for personal, scientific, and commercial purposes as described in the flux-1-dev-non-commercial-license.