Stats | 634 |
Reviews | (63) |
Published | Oct 3, 2024 |
Hash | AutoV2 B46D682C22 |
V6 hands issue - some users have noticed that V6 is often rendering hands badly. I think I know what caused it, and how to fix it, but I might not have time to upload the new release until later in the week.
Sorry about that.
I have trained Flux Dev on my SDXL dataset and merged LoRAs, correcting anatomy censorship and excessive bokeh/blurred backgrounds.
Jib Mix Flux Version 5 - It's Alive:
Improved photorealism. (Less likely to default to painting styles)
Fixed issues with wonky text.
More detailed backgrounds.
Reconfigured NSFW slightly.
fp8 V4 Canvas Galore:
Better fine details, much better artistic styles, and improved NSFW capabilities.
fp8 V3.0 V3.1 - Clarity Key
I initially uploaded the wrong model file on 21/10/2024; it was very similar, but the replacement file uploaded on 22/10/2024 has slightly better contrast and was the one used for the sample images.
This version improves detail levels and has a more cinematic feel, like the original Flux Dev.
Reduced the "Flux Chin".
Settings - I use a Flux Guidance of 2.9
Sampler = dpmpp_2m.
Scheduler = Beta or Custom Sigmas.
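If you are driving a workflow from a script, the settings above can be kept in a plain dictionary like this sketch. The key names here are my own illustration, not tied to any particular UI's API:

```python
# Illustrative only: key names are made up for this sketch.
# Values are the settings recommended above for the fp8 V3.x release.
V3_SETTINGS = {
    "flux_guidance": 2.9,   # Flux distilled guidance, not classic CFG
    "sampler": "dpmpp_2m",
    "scheduler": "beta",    # or custom sigmas
}

def describe(settings: dict) -> str:
    """Render the settings as a one-line summary for logging."""
    return ", ".join(f"{k}={v}" for k, v in settings.items())
```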
FP8 V2 - Electric Boogaloo: Better NSFW and skin/image quality.
Settings:
I find the best settings are a Guidance of 2.5 and a CFG of 2.8 (although CFG does slow down generation, since each step adds a second, negative-prompt pass).
When using LoRAs these values will likely change.
Version: mx5 GGUF 7GB v1
This is a quantized version of my Flux model to run on lower-end graphics cards.
Thanks to https://civitai.com/user/chrisgoringe243 for quantizing this; it is really good quality for such a small model.
There are larger GGUF versions for mid-range graphics cards available here: https://huggingface.co/ChrisGoringe/MixedQuantFlux/tree/main
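As a rough rule of thumb for choosing between the variants, you could pick by free VRAM. The thresholds below are my own assumptions for illustration, not the author's recommendations; real requirements also depend on resolution and what else is loaded:

```python
def pick_variant(free_vram_gb: float) -> str:
    """Suggest a model variant by available VRAM.

    Thresholds are rough assumptions for illustration only;
    actual requirements depend on resolution, text-encoder
    placement, and other models loaded alongside.
    """
    if free_vram_gb >= 24:
        return "fp16"         # full-quality fp16 merge (high-VRAM cards)
    if free_vram_gb >= 12:
        return "fp8"          # the fp8 releases above
    if free_vram_gb >= 8:
        return "gguf-7gb"     # the quantized 7 GB GGUF
    return "gguf-smaller"     # smaller quants from the HF repo
```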
Version 2 - fp16:
For those with high-VRAM cards who want maximum quality, I have created this merge with the full fp16 Flux model. If you "only" have 24GB of VRAM, you will need to force the T5 text encoder onto the CPU/system RAM with the force node from this pack:
https://github.com/city96/ComfyUI_ExtraModels
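A back-of-the-envelope check of why 24GB is tight: Flux Dev has roughly 12B parameters and the T5-XXL text encoder roughly 4.7B (both figures approximate), so at fp16 (2 bytes per parameter) the two together already exceed 24GB before activations, which is why moving T5 to system RAM helps:

```python
BYTES_PER_PARAM_FP16 = 2

def model_size_gb(params_billions: float) -> float:
    """Approximate fp16 weight size in GB (1 GB = 1e9 bytes)."""
    return params_billions * 1e9 * BYTES_PER_PARAM_FP16 / 1e9

flux_gb = model_size_gb(12.0)  # ~24 GB for the Flux transformer alone
t5_gb = model_size_gb(4.7)     # ~9.4 GB for the T5-XXL text encoder

# Keeping T5 in system RAM leaves the 24 GB card for the transformer.
```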
For those waiting for a smaller quantized model: I am still looking into it.
Version 2:
Merged in 8 Step Hyper Lora and some others.
Settings:
I like a Guidance of 2 and 8-14 steps.
Resolution: I like around 1280x1344.
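Flux-style latent models generally want width and height divisible by 16 (the suggested 1280x1344 already is). A small general-purpose helper for snapping arbitrary sizes; this is my own utility sketch, not something from the model page:

```python
def snap(dim: int, multiple: int = 16) -> int:
    """Round a dimension to the nearest multiple.

    16 is a common alignment constraint for Flux-style models;
    this is a generic illustration, not an official requirement list.
    """
    return max(multiple, round(dim / multiple) * multiple)

# The suggested 1280x1344 is already aligned:
# snap(1280) -> 1280, snap(1344) -> 1344
```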
Version 1: Brings some of the benefits and look of SDXL together with the massive prompt-adherence benefits of Flux.
Settings:
I like a Guidance of 2 and 20-40 steps.