
Reakaaka's enhancer [Lumina]


Updated: Sep 24, 2025
Tags: style
Verified: SafeTensor
Type: LoRA
Stats: 66 · Reviews: 0
Published: Sep 21, 2025
Base Model: Lumina
Usage Tips: Strength: 1
Hash (AutoV2): AFE6B30A4E

Enhancement bundle for Lumina

Hello Lumina Image 2.0. Goodbye SDXL.


I know only gigachad users will actually see and read this description for now, so I'm going to write it in a straightforward way.

I may rewrite this page later.


Background info

If you've just heard about Lumina Image 2.0 and are wondering how it differs from SDXL:

  • Newer: released in Feb 2025 (vs. July 2023 for SDXL).

  • Better VAE, taken from Flux: 16 latent channels vs. SDXL's 4, which is where the "mathematically 4x detail" comes from.

  • Better model architecture. Slightly larger (1.25x) in size, but much stronger in performance (5 GiB vs. 4 GiB, bf16).

  • Slower than SDXL: needs roughly 3x the compute time.

  • Quantized models work very well. There are no CNN layers, so you can run a Q8 model, which is only ~3 GiB, without a noticeable drop in quality. Even Q4, if you want. (See the quick size estimate below.)

  • Much, much better prompt understanding (Gemma-2-2B as the text encoder vs. CLIP-L/G, plus the new architecture).

In short: apart from the 3x compute time, Lumina 2 is just way better. SDXL is simply too old, and its architecture is not as efficient as the newer ones.
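
A quick back-of-envelope check of the size numbers above. This is only a sketch based on my own assumptions (2 bytes per bf16 weight, ~8.5 bits per weight for a Q8_0-style quantization), not official figures:

```python
GIB = 1024 ** 3

# Quoted above: the bf16 Lumina 2 transformer is about 5 GiB.
bf16_size_gib = 5.0
bytes_per_param_bf16 = 2                                # bf16 = 16 bits per weight
params = bf16_size_gib * GIB / bytes_per_param_bf16     # roughly 2.7e9 parameters

# Q8_0-style quantization stores about 8.5 bits per weight
# (8-bit values plus a per-block scale) -- an assumption here.
bits_per_param_q8 = 8.5
q8_size_gib = params * bits_per_param_q8 / 8 / GIB

print(f"params ~ {params / 1e9:.1f}B, Q8 ~ {q8_size_gib:.1f} GiB")
# -> params ~ 2.7B, Q8 ~ 2.7 GiB, i.e. roughly the ~3 GiB mentioned above.
```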

This model

Same idea as my previous "stabilizer" model (but with a less misleading name, lol).

  • Fine-tuned from NetaYume Lumina v3, but released as a LoRA.

  • Trained on a mixed dataset that includes a little bit of everything.

  • Tries to alleviate the problem of anime models being biased toward oversimplified anime data.

  • Does not have a default style (no default bias). You need to add style trigger words (artist names, etc.).

Effect:

  • It should give you better backgrounds, more natural textures, better hands, less noise, and possibly fewer deformed images. Still testing.

  • But don't expect it to make as huge a difference as my stabilizer LoRA did. The Lumina 2 base model does not have the big overfitting problem that Illustrious/NoobAI have.

How to use

For those who want to give it a try:

  • You need ComfyUI.

  • Download NetaYume Lumina v3 and this LoRA.

  • Use the workflow and guide from NetaYume Lumina.

  • Apply this LoRA with strength 1. (A minimal sketch of the LoRA node in API format follows this list.)
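
For reference, the LoRA step itself is just a stock LoRA-loader node sitting between the model loader and the sampler. The sketch below shows that single node in ComfyUI's API ("prompt") JSON format; the node IDs and file name are placeholders, and the rest of the graph (model loader, Gemma text encoder, VAE, sampler) should come from the NetaYume Lumina workflow, not from here.

```python
# Minimal sketch of the LoRA node in ComfyUI API ("prompt") format.
# "13" / "4" are arbitrary example node IDs; the file name is a placeholder.
lora_node = {
    "13": {
        "class_type": "LoraLoaderModelOnly",   # stock ComfyUI model-only LoRA loader
        "inputs": {
            "lora_name": "reakaaka_enhancer_lumina.safetensors",  # placeholder file name
            "strength_model": 1.0,              # strength 1, as recommended above
            "model": ["4", 0],                  # output 0 of the node loading NetaYume Lumina v3
        },
    },
}
```

Whichever node consumes the model downstream (the sampler) then takes its model input from this node's output instead of straight from the loader.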

Tips for using Lumina 2, from my experience:

  • Unlike SDXL, it requires a very detailed prompt; a short prompt tends to give deformed images. I use an LLM to automatically enhance my prompts. (A sketch of that step follows below.)
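
A sketch of what that prompt-enhancement step can look like, expanding a short tag-style prompt into a detailed description via an OpenAI-compatible chat endpoint. The model name and the instruction text are placeholders, not the exact setup used for this LoRA:

```python
# Sketch: expand a short tag-style prompt into the detailed natural-language
# prompt that Lumina 2 prefers, via an OpenAI-compatible chat endpoint.
# The model name and system instruction are placeholders.
from openai import OpenAI

client = OpenAI()  # or point base_url at a local server

def enhance_prompt(short_prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable chat model works
        messages=[
            {
                "role": "system",
                "content": (
                    "Rewrite the user's short image prompt as one detailed paragraph: "
                    "subject, pose, clothing, background, lighting, camera angle, style. "
                    "Return only the rewritten prompt."
                ),
            },
            {"role": "user", "content": short_prompt},
        ],
    )
    return resp.choices[0].message.content.strip()

print(enhance_prompt("1girl, rainy street, neon lights"))
```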


Misc.

Training:

This LoRA was trained using kohya-ss/sd-scripts.

  • The Lumina training script is not officially released yet; right now (9/21/2025) it lives in the sd3 branch. You can use it out of the box.

  • If you want to train a LoRA as well, just follow the official documentation. (A rough sketch of an invocation follows this list.)

  • It works, but outside of what the official documentation covers there are still some small problems; some settings don't work or are buggy. (They need time to be fixed.)

  • Some improvements can also be made, e.g. +50% training speed. (I've already implemented and tested this; it's ready for a pull request.)

  • I did not add a system prompt. I don't think it's a good idea, because the original Lumina Image 2.0 was trained without one.
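
For orientation only, here is a rough sketch of what a LoRA training invocation with sd-scripts can look like, launched from Python. The script name and the Lumina-specific details are assumptions based on the other *_train_network.py scripts; the official documentation in the sd3 branch has the real script name, the correct network module, and the extra text-encoder/VAE arguments that are omitted here.

```python
# Rough sketch of a kohya-ss/sd-scripts (sd3 branch) LoRA training run.
# Script name, paths, and Lumina-specific flags are assumptions/placeholders;
# follow the official documentation for the actual arguments.
import subprocess

cmd = [
    "accelerate", "launch", "lumina_train_network.py",    # assumed script name
    "--pretrained_model_name_or_path", "netayume-lumina-v3.safetensors",  # placeholder path
    "--dataset_config", "dataset.toml",                   # placeholder dataset config
    "--network_module", "networks.lora",                  # may differ for Lumina; see docs
    "--network_dim", "32",
    "--learning_rate", "1e-4",
    "--mixed_precision", "bf16",
    "--save_model_as", "safetensors",
    "--output_dir", "output",
    "--output_name", "lumina_enhancer_lora",
]
subprocess.run(cmd, check=True)
```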


Dataset

I'm still testing. For now I'm only using ~1k images, not my full dataset.

License 

This model is released under Apache License 2.0.

Additional terms: selling or monetizing models that merge this LoRA is prohibited.