A pruned 8B-parameter finetune of Flux Schnell with real CFG support, by Ostris: https://huggingface.co/ostris/Flex.1-alpha
Quantized to GGUF by hum-ma: https://huggingface.co/hum-ma/Flex.1-alpha-GGUF
This model works either with CFG = 1 or, with the FluxDisableGuidance node to bypass the embedded guidance, with real CFG > 1; 20+ steps are recommended.
This is just the transformer, so it also requires the T5-XXL and CLIP-L text encoders and the Flux VAE.
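Outside ComfyUI, the setup above can be sketched with Hugging Face diffusers, which supports loading GGUF-quantized Flux transformers. This is a sketch under assumptions: it presumes a recent diffusers build with GGUF support and that Flex.1-alpha loads via the standard FluxPipeline; the local GGUF filename, prompt, and guidance value are illustrative, not prescribed by this card.

```python
def load_flex_pipeline(gguf_path: str):
    # Heavy ML imports are deferred into the function so the sketch
    # stays importable without torch/diffusers installed.
    import torch
    from diffusers import FluxPipeline, FluxTransformer2DModel, GGUFQuantizationConfig

    # Load only the quantized transformer from the GGUF file; the base repo
    # supplies the T5-XXL and CLIP-L text encoders and the Flux VAE, as noted above.
    transformer = FluxTransformer2DModel.from_single_file(
        gguf_path,
        quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
        torch_dtype=torch.bfloat16,
    )
    return FluxPipeline.from_pretrained(
        "ostris/Flex.1-alpha",
        transformer=transformer,
        torch_dtype=torch.bfloat16,
    )


if __name__ == "__main__":
    # Hypothetical local filename for one of the GGUF quantizations.
    pipe = load_flex_pipeline("Flex.1-alpha-Q8_0.gguf")
    pipe.enable_model_cpu_offload()
    image = pipe(
        "a photo of a red fox in the snow",  # illustrative prompt
        num_inference_steps=20,              # 20+ steps recommended
        guidance_scale=3.5,                  # real CFG > 1; illustrative value
    ).images[0]
    image.save("out.png")
```

The `__main__` guard keeps the model download out of import time; swap the GGUF path for whichever quantization level you downloaded.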