Type: Checkpoint Merge
Reviews: 142
Published: Oct 7, 2024
Base Model: Flux.1 D
Training: 10,000 steps, 25 epochs
Hash (AutoV2): 04FA65C739
The FLUX.1 [dev] Model is licensed by Black Forest Labs, Inc. under the FLUX.1 [dev] Non-Commercial License. Copyright Black Forest Labs, Inc.
IN NO EVENT SHALL BLACK FOREST LABS, INC. BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH USE OF THIS MODEL.

Please post images to the model page; all Buzz earned goes towards training LoRAs for the community.

Struggling with Flux prompts? Try my Flux Prompt Craft GPT bot.

Do you use the GGUF versions? Try these workflows: GGUF T5 CLIP only | GGUF UNET + GGUF T5
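
The linked workflows target ComfyUI. Purely as a reference point, here is a minimal sketch of loading a GGUF-quantized FLUX transformer with diffusers instead; it assumes diffusers' GGUF support (roughly v0.32+, plus the gguf package), and the file name below is a placeholder rather than an actual release file name.

```python
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel, GGUFQuantizationConfig

# Placeholder file name; point this at whichever Q8/Q4KS GGUF you downloaded.
transformer = FluxTransformer2DModel.from_single_file(
    "nepotismFUX_Q4KS.gguf",
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)

# VAE, T5-XXL, and CLIP L come from the base FLUX.1 [dev] repo here;
# swap in the custom CLIP L as described further down the page.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # keeps VRAM use manageable on 12-16 GB cards
```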

Looking for a Latent node that's optimized for FLUX, SD3 & SDXL? Bob's FLUX Latent Optimizer Node

For optimal results, I recommend 20-32 steps. (It works with 8-12 steps as well, but the output quality drops noticeably.)
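
For illustration, here is a minimal generation sketch in that step range. It assumes the AIO safetensors file can be read by diffusers' single-file loader; the file name is a placeholder, and if single-file loading fails you can load the DiT as in the GGUF sketch above and reuse the base FLUX.1 [dev] components.

```python
import torch
from diffusers import FluxPipeline

# Placeholder file name for the AIO checkpoint.
pipe = FluxPipeline.from_single_file(
    "nepotismFUX_v8_AIO.safetensors",
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()

image = pipe(
    prompt="cinematic photo of a lighthouse at dusk, volumetric fog",
    num_inference_steps=24,   # recommended range is 20-32; 8-12 works but quality drops
    guidance_scale=3.5,
    height=1024,
    width=1024,
).images[0]
image.save("lighthouse.png")
```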

Huge TY to @jurdn for helping me with the V4, V5, V6, and now V7 Q8 and Q4KS GGUFs!

Remember to also download the custom CLIP L! The current version of CLIP L is V5.

What's New in V8?

V8 is here, and it marks a major shift in the evolution of Nepotism Fux. The biggest change? A complete removal of NSFW content. This decision wasn’t taken lightly, but it's a necessary step to push creative boundaries responsibly and align with modern ethical standards in AI. Read the full update article here.

Here’s the breakdown:

  • NSFW Content Removed: The model is now entirely focused on safe-for-work content. Users can still apply NSFW LoRAs externally, but the core model will remain SFW, prioritizing ethical AI usage.

  • Mild Photorealism Bias: V8 leans towards producing photorealistic images, though it CAN still reach anime fairly easily (examples in the gallery). While this might not suit every user, the realism and fine detail it achieves are top-tier. If you're after anime or stylized outputs and run into trouble, an appropriate LoRA can shift the model's default style, though I was able to hit a very wide range of artistic and anime styles in my testing.

  • Increased Sensitivity to LoRAs: V8 is more responsive to LoRA weights, allowing you to achieve great results with smaller adjustments. Lowering the LoRA weight to 0.2 or 0.3 can often yield the desired effect without overshooting the output (see the sketch after this list).

  • Performance:

    • Cold Load (No LoRAs): 1.03-1.08s/it

    • Cold Load (With LoRAs): ~2.00-3.05s/it, dropping to 1.03-1.30s/it post-load.

(Tested on a 4080 GPU)
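
To make the LoRA sensitivity point concrete, here is a minimal sketch of loading a LoRA at a low weight with diffusers. The LoRA file name and adapter name are placeholders, the base model is loaded from the FLUX.1 [dev] repo only to keep the example self-contained, and set_adapters requires peft to be installed.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()

# Placeholder LoRA file and adapter name; start low and raise the weight only if needed.
pipe.load_lora_weights("my_style_lora.safetensors", adapter_name="style")
pipe.set_adapters(["style"], adapter_weights=[0.3])  # 0.2-0.3 often suffices on V8

image = pipe(
    "portrait in the loaded style, soft window light",
    num_inference_steps=24,
    guidance_scale=3.5,
).images[0]
image.save("styled_portrait.png")
```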

Why Nepotism Fux Stands Out:

- Balanced Precision: This merge uses FP8, producing images that closely resemble FP16 quality in a fraction of the time. Perfect for users with mid-range PCs who want Flux1Dev-level results without the resource drain.

- Efficiency: At 20 steps, generate high-quality images in just 16-22 seconds on a 4080 GPU, compared to the 80-150 seconds typical with Flux1Dev FP16.
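
If you want to reproduce s/it numbers like these on your own hardware, a rough way to measure them with diffusers is a step-end callback timer. This is only a sketch: the prompt and checkpoint are arbitrary, and cold caches or CPU offloading will skew the first few steps.

```python
import time
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")  # full-GPU load needs ~24 GB; offloading works but distorts per-step times

step_times, last = [], None

def timer(pipeline, step, timestep, callback_kwargs):
    """Record wall-clock time per denoising step (roughly the s/it your UI reports)."""
    global last
    now = time.perf_counter()
    if last is not None:
        step_times.append(now - last)
    last = now
    return callback_kwargs

pipe(
    "a red bicycle leaning against a brick wall",
    num_inference_steps=20,
    guidance_scale=3.5,
    callback_on_step_end=timer,
)
print(f"avg s/it: {sum(step_times) / len(step_times):.2f}")
```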

V7 Recap:

  • LORA Compatibility Improvements: Enhanced efficiency and versatility with LoRAs.

  • NSFW Fine-Tuning: Further refinement of NSFW outputs without compromising on quality.

  • Improved Photorealism & Detailing: Finer details and sharper outputs.

  • Performance:

    • Cold Load (No LoRAs): 1.05-1.11s/it

    • Cold Load (With LoRAs): ~2.10-3.45s/it, improving to 1.05-1.35s/it post-load.

V6 Recap:

  • LORA Compatibility Fixed: Addressed LORA-related issues, achieving full compatibility.

  • NSFW Improvements: Continued efforts to enhance NSFW content without compromising SFW content.

  • Detail and Accuracy: Further sharpened details, significantly better handling of hands, and enhanced image quality.

  • Speed and Performance: Improved speed and stability across various workflows, especially with LORAs.

  • Performance:

    • Cold Load (No LORAs): 1.00-1.25s/it.

    • Cold Load (With LORAs): ~3.15-5.30s/it, improving to 1.10-1.45s/it.

V5 Recap:

  • Sharper fine details and an overall boost in quality.

  • Hands improved significantly; far fewer weird fingers!

  • Performance:

    • Cold Load (No LORAs): 1.07-1.25s/it.

    • Cold Load (With LORAs): ~3.25-5.45s/it, improving to 1.13-1.56s/it.

V4 Recap:

  • Improved accuracy and stylization for better prompt-to-output consistency.

  • NSFW content refined (still no XXX).

  • LORA support improved.

  • Performance highlights:

    • Cold Load (No LORAs): 1.00-1.05s/it.

    • Cold Load (With LORAs): ~3.25-5.45s/it.

V3 Recap:

  • Custom LORAs and CLIP L from NepotismXL.

  • Recommended steps: 20-32 for optimal results.

  • Full Version: Consistent, high-quality outputs.

  • Pruned Version: More NSFW but also more artifacts and deformities.

Key Advantages:

  • Balanced Precision: FP8 offers images close to FP16 quality with reduced time.

  • Efficiency: High-quality images in 25-45 seconds on a 4080 GPU, versus 80-150 seconds using Flux1Dev FP16.

V2 Recap:

- A merge of 71% Flux.1 Dev and 29% Flux.1 Schnell, further refined with an enhanced CLIP.

- AIO Version: Combines the NepotismFUX V2 DiT, the ae VAE, the T5-XXL text encoder, and the NepotismXL V2 CLIP L into a single, all-inclusive safetensors file.

V1 Recap:

- The original NepotismXL CLIP L for SDXL, which paved the way for the enhanced capabilities in V2 and V3.

How to Get FUX'D Up:

- AIO Version:

- STEP 1: [CLICK HERE FOR THE FUX-ING WORKFLOW]

- STEP 2: PROFIT

- DiT & CLIP Separately:

- STEP 1: [CLICK HERE FOR THE FUX-ING WORKFLOW]

- STEP 2: PROFIT

Why 71% Dev & 29% Schnell?

- Enhanced Speed with Quality: The 71% Dev/29% Schnell merge delivers high-quality images more quickly, making it perfect for rapid prototyping and iterative design (a rough merge sketch follows this list).

- Resource Efficiency: FP8 precision allows users with lower-end hardware to generate top-tier images without sacrificing quality.

- Scalability: This model adapts well to various hardware configurations, broadening its accessibility.
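
For readers curious what a 71/29 blend looks like mechanically, here is a rough, generic sketch of a weighted state-dict merge. It is not the author's exact recipe; the file names are placeholders, and real Dev/Schnell checkpoints have a few keys that exist on only one side (handled here by falling back to Dev).

```python
import torch
from safetensors.torch import load_file, save_file

dev = load_file("flux1-dev.safetensors")          # placeholder paths
schnell = load_file("flux1-schnell.safetensors")

merged = {}
for key, w_dev in dev.items():
    w_s = schnell.get(key)
    if w_s is not None and w_s.shape == w_dev.shape:
        # Blend in float32 to limit precision loss, then cast back to the original dtype.
        merged[key] = (0.71 * w_dev.float() + 0.29 * w_s.float()).to(w_dev.dtype)
    else:
        merged[key] = w_dev  # keep Dev weights where the two models don't line up

save_file(merged, "merged_71dev_29schnell.safetensors")
```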

Considerations:

- Image Quality: While FP8 may reduce some image quality compared to FP16, the merge maintains a significant portion of Dev's high-quality output.

- Prompt Adherence: Strong prompt adherence is retained from the Dev model while benefiting from Schnell's speed and efficiency.

Solution:

- Image Quality & Prompt Adherence: By integrating a highly precise CLIP L from SDXL, the model achieves compositionally similar results to Dev at FP16, especially when using CLIP L with Dev FP16.

Tested & Running on:

- Nvidia 4090 (24 GB VRAM), 64 GB RAM

- Nvidia 4080 (16 GB VRAM), 32 GB RAM

- Nvidia 3080 Ti (12 GB VRAM), 32 GB RAM

Note: This model isn't designed for XXX content (at least not reliably), though it can produce R- and X-rated images.