
Jib Mix Flux

Updated: Nov 12, 2024
Tags: base model, nudes
Verified: SafeTensor
Type: Checkpoint Merge
Reviews: 1,670
Published: Nov 4, 2024
Base Model: Flux.1 D
Hash (AutoV2): 496EE27550
J1B
The FLUX.1 [dev] Model is licensed by Black Forest Labs. Inc. under the FLUX.1 [dev] Non-Commercial License. Copyright Black Forest Labs. Inc.
IN NO EVENT SHALL BLACK FOREST LABS, INC. BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH USE OF THIS MODEL.

I have trained Flux Dev on my SDXL dataset and merged in LoRAs, correcting the anatomy censorship and the excessive bokeh/blurred backgrounds.
Jib Mix Flux Version 5 - It's Alive:

  • Improved photorealism (less likely to default to painting styles).

  • Fixed issues with wonky text.

  • More detailed backgrounds.

  • Slightly reconfigured NSFW.


fp8 V4 - Canvas Galore:
Better fine details, much better artistic styles, and improved NSFW capabilities.


fp8 V3.0 / V3.1 - Clarity Key
I initially uploaded the wrong model file on 21/10/2024; it was very similar, but the new file (up since 22/10/2024) has slightly better contrast and was used for the sample images.

This version improves detail levels and has a more cinematic feel, like the original Flux Dev.

Reduced the "Flux chin".

Settings - I use a Flux Guidance of 2.9.
Sampler = dpmpp_2m

Scheduler = Beta or custom sigmas
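In ComfyUI's API (prompt) format, these settings map onto a FluxGuidance node and a KSampler node roughly as sketched below. The node IDs and the wiring references are illustrative placeholders, not taken from any workflow shipped with this model:

```python
# Sketch of a ComfyUI API-format prompt fragment using the settings above.
# Node IDs ("10", "11") and node references like ["4", 0] are placeholders.
prompt = {
    "10": {
        "class_type": "FluxGuidance",
        "inputs": {
            "guidance": 2.9,            # Flux Guidance value recommended above
            "conditioning": ["6", 0],   # placeholder: positive conditioning node
        },
    },
    "11": {
        "class_type": "KSampler",
        "inputs": {
            "sampler_name": "dpmpp_2m", # recommended sampler
            "scheduler": "beta",        # or feed custom sigmas via a sigma node
            "steps": 20,
            "cfg": 1.0,                 # plain CFG; Flux relies on distilled guidance
            "seed": 0,
            "denoise": 1.0,
            "model": ["4", 0],          # placeholder node references
            "positive": ["10", 0],
            "negative": ["7", 0],
            "latent_image": ["5", 0],
        },
    },
}
```

This JSON-like dict is what ComfyUI's `/prompt` HTTP endpoint accepts; in the graphical editor the same settings live on the FluxGuidance and KSampler nodes.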


FP8 V2 - Electric Boogaloo: Better NSFW and skin/image quality.

Settings:
I find the best settings are a guidance of 2.5 and a CFG of 2.8 (although CFG does slow down generation).
When using LoRAs, these values may change.

Version: mx5 GGUF 7GB v1

This is a quantized version of my Flux model to run on lower-end graphics cards.

Thanks to @https://civitai.com/user/chrisgoringe243 for quantizing this; it is really good quality for such a small model.

There are larger GGUF versions, suitable for mid-range graphics cards, available here: https://huggingface.co/ChrisGoringe/MixedQuantFlux/tree/main
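As a rough sanity check on these file sizes (my arithmetic, not the author's): Flux Dev has about 12 billion parameters, so file size scales with bits per weight, which is why a ~4-5 bit GGUF quant lands near 7 GB while fp16 is around 24 GB:

```python
# Back-of-envelope model file sizes for a ~12B-parameter model (Flux Dev).
# Bits-per-weight values are approximate; real GGUF files add some overhead.
PARAMS = 12e9

def size_gb(bits_per_weight: float) -> float:
    """Approximate weight-file size in gigabytes (1 GB = 1e9 bytes)."""
    return PARAMS * bits_per_weight / 8 / 1e9

fp16_gb = size_gb(16)    # 24.0 GB -- the full fp16 merge below
q4ish_gb = size_gb(4.5)  # 6.75 GB -- close to the ~7 GB GGUF here
```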

Version 2 - fp16:

For those with high-VRAM cards who want maximum quality, I have created this merge with the full fp16 Flux model. If you "only have 24GB of VRAM", you will need to force the T5 text encoder onto the CPU/system RAM with the force node from this pack:
https://github.com/city96/ComfyUI_ExtraModels
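The reason the T5 offload is needed (my estimate, assuming fp16 weights throughout): the ~12B-parameter Flux transformer alone occupies about 24 GB, and the ~4.7B-parameter T5-XXL text encoder adds roughly another 9.4 GB, so both cannot sit on a 24 GB card at once:

```python
# Rough fp16 VRAM budget, weights only (ignores activations, VAE, CLIP-L).
# Parameter counts: Flux Dev ~12B, T5-XXL ~4.7B; fp16 = 2 bytes per parameter.
BYTES_PER_PARAM_FP16 = 2

flux_gb = 12e9 * BYTES_PER_PARAM_FP16 / 1e9   # ~24 GB transformer
t5_gb = 4.7e9 * BYTES_PER_PARAM_FP16 / 1e9    # ~9.4 GB text encoder

# Both together exceed a 24 GB card, hence pushing T5 to CPU/system RAM.
fits_on_24gb = (flux_gb + t5_gb) <= 24
```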

For those waiting for a smaller quantized model: I am still looking into it.

Version 2:

Merged in an 8-step Hyper LoRA and some others.

Settings:
I like a Guidance of 2 and 8-14 steps.

Resolution: I like around 1280x1344.

Version 1: brings some of the look and feel of SDXL together with the massive prompt-adherence advantage of Flux.
Settings:
I like a Guidance of 2 and 20-40 steps.