Stats | 848 0 |
Reviews | (47) |
Published | Dec 13, 2024 |
Hash | AutoV2 358FFF9355 |
Update:
I've added some other fp8 versions of FLUX.1 [dev] that aren't hosted on Civitai anymore, specifically fp8_e4m3fn and fp8_e5m2, in addition to the scaled fp8 FLUX.1 [dev] version I had originally posted.
The fp8_e4m3fn and fp8_e5m2 models were originally uploaded by Kijai here on Hugging Face, where they note that E5M2 and E4M3 do give slightly different results, but it's hard/impossible to say which is better.
Here's some info from this Reddit post regarding fp8_e4m3fn and fp8_e5m2:
FP stands for Floating Point. Any signed floating point number is stored as 3 parts:
Sign bit
Mantissa
Exponent
So, roughly, number = sign * mantissa * 2^exponent (glossing over the exponent bias and the implicit leading mantissa bit)
E5M2 means that 2 bits represent the mantissa and 5 bits represent the exponent. E4M3 means that 3 bits represent the mantissa and 4 bits represent the exponent. Add the sign bit and both come to 8 bits.
E5M2 can represent a wider range of numbers than E4M3, at the cost of lower precision. But the number of distinct values that can be represented is the same: 256 (2^8 bit patterns). So if we need more precision around 0 we use E4M3, and if we need more precision closer to the min/max values we use E5M2.
The best way to choose a format is to analyze the distribution of weight values in the model: if they tend to sit close to zero, use E4M3; otherwise, use E5M2.
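To make the range/precision trade-off concrete, here is a minimal sketch using PyTorch's fp8 dtypes (torch.float8_e4m3fn and torch.float8_e5m2, available in PyTorch 2.1+). It only illustrates the number formats themselves; it is not part of the model or any ComfyUI workflow:

```python
import torch

# Compare the numeric range of the two fp8 formats.
for dtype in (torch.float8_e4m3fn, torch.float8_e5m2):
    info = torch.finfo(dtype)
    print(dtype, "max:", info.max, "smallest normal:", info.tiny)
# e4m3fn tops out around 448, e5m2 around 57344, but e4m3fn has an extra
# mantissa bit, so it spaces values more finely within its smaller range.

# Round-trip a few values through each format to see the quantization error.
x = torch.tensor([0.1234, 1.5, 3.1416, 300.0], dtype=torch.float32)
for dtype in (torch.float8_e4m3fn, torch.float8_e5m2):
    print(dtype, x.to(dtype).to(torch.float32).tolist())
```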
Original:
I haven't seen this uploaded here.
This is the scaled fp8 FLUX.1 [dev] model uploaded to Hugging Face by comfyanonymous. It should give results much closer to fp16 than the regular fp8 model, while running much faster than Q quants. Works with the TorchCompileModel node. Note: for whatever reason, this model does not work with Redux or with some ControlNet models.
The fp8 scaled checkpoint is a slightly experimental one that is specifically tuned to try to get the highest quality while using fp8 matrix multiplication on the 40 series/Ada/H100/etc., so it will very likely be lower quality than Q8_0, but it will run inference faster if your hardware supports fp8 ops.
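As a rough way to check whether your GPU has the hardware fp8 path that quote refers to, you can look at the CUDA compute capability (fp8 tensor-core matmul starts with Ada, SM 8.9, and Hopper, SM 9.0). This is just a hedged sketch, not something the model requires you to run:

```python
import torch

# fp8 tensor-core matmul is available on Ada (SM 8.9, e.g. RTX 40 series)
# and Hopper (SM 9.0, e.g. H100) and newer; older GPUs can still load
# fp8 weights but fall back to higher-precision math.
if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability()
    has_fp8_matmul = (major, minor) >= (8, 9)
    print(f"Compute capability {major}.{minor} -> fp8 matmul support: {has_fp8_matmul}")
else:
    print("No CUDA device detected.")
```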
From Hugging Face:
Test scaled fp8 flux dev model; use with the newest version of ComfyUI with weight_dtype set to default. Put it in your ComfyUI/models/diffusion_models/ folder and load it with the "Load Diffusion Model" node.
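For reference, the loading step in ComfyUI's API (JSON) workflow format might look like the fragment below. The node ID and file name are placeholders, and I'm assuming the "Load Diffusion Model" node maps to the UNETLoader class type as in stock ComfyUI:

```python
# Hypothetical fragment of a ComfyUI API-format prompt (node ID and file name are placeholders).
prompt_fragment = {
    "1": {
        "class_type": "UNETLoader",  # the "Load Diffusion Model" node
        "inputs": {
            # File placed in ComfyUI/models/diffusion_models/
            "unet_name": "flux1-dev-fp8-scaled.safetensors",
            # Per the Hugging Face note: leave weight_dtype at default for the scaled fp8 model.
            "weight_dtype": "default",
        },
    },
    # ...CLIP/T5 loaders, sampler, and VAE decode nodes would follow here.
}
```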