Stabilizer IL/NAI

Updated: Jun 15, 2025
Tags: style, anime styles
Verified:
SafeTensor
Type
LoRA
Published
Jun 5, 2025
Base Model
Illustrious
Usage Tips
Strength: 0.3
Hash
AutoV2
79DB2B34A5

Cover images come directly from the vanilla (original) base model in a1111, at 1 MP resolution. No upscaling, no face/hand inpaint fixes, not even a negative prompt. The images carry generation metadata, so you can drop them into a1111 and reproduce them.

Sharing merges that use this LoRA, or re-uploading it to other platforms, is prohibited. This model is published only on Civitai and TensorArt. If you see "me" and this sentence on any other platform, it is fake and the platform you are using is a thief.


Stabilizer

It's an all-in-one finetuned-base-model LoRA: apply it to NoobAI e-pred v1.1 and you get my personal "finetuned" base model.

  • It focuses on natural lighting and detail, stable prompt understanding, and greater creativity.

  • It is not an overfitted style LoRA (trained on only dozens of images). The training dataset is large and very diverse, so you will not get the same things (faces, backgrounds, etc.) over and over again. It does not reduce the base model's creativity; it adds to it.

  • You will get a clean, stable character/style as it should be, whether 2D or 3D. And most of the time you can use higher tag/LoRA strengths without breaking the model.

  • The training dataset contains only high-resolution images (average > 3 MP, ~1800x1800) and zero AI images. So you get real texture and detail beyond the pixel level, instead of the fake edges and texture-less smooth surfaces of models trained on AI images.

Why all-in-one? Because if you train 10 LoRAs on 10 different datasets for different aspects and stack them up, your base model will blow up. If you train those datasets in one go, there are no conflicts.

Why not finetune the full base model? I'm not a gigachad and I don't have millions of training images, so finetuning the entire base model is not necessary. Fun fact: most (95%) of the base models out there are just merges of merges of merges of tons of LoRAs... Only a very few base models are truly full finetunes trained by truly gigachad creators.

(6/4/2025): Future plan

The dataset is getting bigger and bigger, and it is difficult and expensive to train. illus01 v1.164 was trained from scratch and took almost 35 hours. So:

  • The NoobAI version will not be updated. I decided to put my main time into improving the Illustrious v0.1 versions, which support all NoobAI and later Illustrious versions (v1, v2, ...).

  • I opened a donation page. If you like my model and want to support training on bigger cloud GPUs, you can support me directly here: https://app.unifans.io/c/fc05f3e2c72cb3f5

Sharing merges that use this LoRA is prohibited. FYI, there are hidden trigger words that print an invisible watermark, and it works even at a merge strength of 0.05. I coded the watermark and the detector myself. I don't want to use it, but I can.

Remember to leave feedback in the comment section so everyone can see it. Don't write feedback in the Civitai review system; it is so poorly designed that practically nobody can find or read the reviews.

Have fun.


How to use

Just apply it. No trigger words needed. It also does not patch the text encoder, so you don't have to set a text-encoder patch strength (in ComfyUI, etc.).
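As a rough mental model, loading a LoRA at a given strength merges a low-rank delta into the base model's weight matrices. A minimal numpy sketch (all names and shapes here are illustrative assumptions, not this model's actual tensors):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: one UNet attention weight matrix and a rank-8 LoRA pair.
d_out, d_in, rank = 64, 64, 8
W = rng.normal(size=(d_out, d_in))
A = rng.normal(size=(rank, d_in))
B = rng.normal(size=(d_out, rank))

def apply_lora(W, B, A, strength):
    """Merge a LoRA delta into a weight matrix: W' = W + strength * (B @ A)."""
    return W + strength * (B @ A)

# The recommended strength for this LoRA is 0.3 (see "Usage Tips" above).
W_patched = apply_lora(W, B, A, strength=0.3)

# Strength 0 is an exact no-op -- the same as not loading the LoRA at all.
assert np.allclose(apply_lora(W, B, A, 0.0), W)
```

Since this LoRA ships no text-encoder weights, only UNet matrices would receive such a delta; the text encoder stays identical to the base model's.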

Version prefix:

  • illus01 = Trained on Illustrious v0.1. (Recommended, even for NoobAI)

  • nbep11 = Trained on NoobAI e-pred v1.1. (Discontinued)

Recommended usage:

NOT recommended models:

  • Base models with an AI style (WAI's models, etc.). AI styles are super overfitted and will instantly overwrite any texture.

  • Heavily merged base models (merges of merges of merges...). They may have 20+ LoRAs inside and are going to blow up.

It is also recommended to:

  • remove LoRAs/embeddings for hand improvement; you don't need them.

  • remove custom VAEs; they may cause color shift.

Old versions:

New version == new stuff and new attempts.

One big advantage of LoRA is that you can mix different versions in seconds.

Here are the most distinctive old versions:

  • nbep11 v0.160 / illus01 v1.93: Stronger color enhancement; high contrast and saturation. Note: later versions removed this enhancement for better compatibility, since many base models have already merged similar LoRAs.

  • nbep11 v0.138 / illus01 v1.23: Weaker effect, with color enhancement, but maximum compatibility.

  • nbep11 v0.58 / illus01 v1.3: Pure 2D images. Useful if you are using a 2.5D model and find the characters a little too realistic.

You can find more in "Update log".
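Mixing versions works because LoRA deltas simply add: stacking two versions at different strengths is one weighted sum over the same weight matrix, with no retraining involved. A hedged numpy sketch (shapes, names, and the example strengths are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
d, r = 32, 4
W = rng.normal(size=(d, d))  # a base-model weight matrix (illustrative)

# Two hypothetical Stabilizer versions, each a low-rank (B, A) pair.
B1, A1 = rng.normal(size=(d, r)), rng.normal(size=(r, d))
B2, A2 = rng.normal(size=(d, r)), rng.normal(size=(r, d))

def mix_loras(W, loras):
    """Stack several LoRAs: W' = W + sum(s_i * B_i @ A_i)."""
    for B, A, s in loras:
        W = W + s * (B @ A)
    return W

# e.g. one version at 0.2 plus another at 0.15 -- the deltas just add up,
# which is why remixing versions takes seconds.
W_mixed = mix_loras(W, [(B1, A1, 0.2), (B2, A2, 0.15)])
```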


FAQ

I can't get textures like the cover images.

  • This LoRA may have no effect with AI styles (models trained on AI images), because AI styles are super overfitted and instantly overwrite any texture. FYI, the cover images are from the vanilla base model.

How do I know if a base model (or LoRA) has an AI style?

  • There is no good method. Personally, I look at hair (or other surfaces): the more plastic it feels (no texture, weird shiny reflections), the more AI style it likely has.

I got an oversaturated image.

  • Your base model (or one of your LoRAs) was trained with "noise offset", a training-time trick to enhance extreme colors. It's very handy but problematic. This LoRA's dataset natively covers the full color range, so any additional noise offset causes the output image to "overshoot".

I got color blobs.

  • Your model is about to break. You applied too many LoRAs, or the base model itself has too many merges (it may already have 20+ LoRAs inside).

I got realistic faces on my anime characters.

  • I can guarantee there are no realistic faces in the dataset, so this LoRA has zero knowledge of realistic faces. Your base model, however, may have it (many models mix in realistic models for better detail).


Dataset

Latest / recent versions:

~7k images in total. Every image is hand-picked by me.

  • Only normal, good-looking things. No crazy art styles that can't be described, no AI images, no watermarks, etc.

  • Only high-resolution images. The dataset average is 3.37 MP, ~1800x1800.

  • All images have natural-language captions from Google's latest LLM.

  • All anime characters are tagged by wd tagger v3 first, then by the Google LLM.

  • Contains nature, indoor scenes, animals, buildings... many things, except real humans.


Other tools

Some ideas that were going to be, or used to be, part of the Stabilizer. They are now separate LoRAs, for better flexibility. Collection link: https://civitai.com/collections/8274233.

Touching Grass: Trained on, and only on, the real-world dataset (no anime data). Stronger effect; better backgrounds and lighting. Useful for gigachad users who like pure concepts and like to balance the weights themselves.

Dark: Fixes the high-brightness bias in anime models. Trained on the low-brightness images from the Touching Grass dataset. Also no humans in the dataset, so it does not affect style.

Example on WAI v13.

Contrast Controller: Controls contrast like a slider on your monitor. Unlike trained "contrast enhancers", the effect of this LoRA is stable, linear, and has zero side effects on style. (Not an exaggeration: it is mathematically zero and linear, because it was not produced by training.) Example on WAI v13.
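The author doesn't publish how the LoRA achieves this, but the behavior being claimed is the pixel-space analogue of a linear contrast slider. A small illustrative sketch (the function and the mid-gray pivot are assumptions for explanation, not the model's actual mechanism):

```python
import numpy as np

def adjust_contrast(img, c, pivot=0.5):
    """Linear contrast: scale distances from a mid-gray pivot by (1 + c).

    c > 0 raises contrast, c < 0 lowers it, and c == 0 is an exact no-op --
    the "mathematically zero and linear" behavior described above.
    """
    return pivot + (1.0 + c) * (img - pivot)

img = np.linspace(0.0, 1.0, 11)  # a toy grayscale ramp in [0, 1]

# Zero effect at c = 0.
assert np.allclose(adjust_contrast(img, 0.0), img)
# Linearity: the change is exactly proportional to c.
assert np.allclose(adjust_contrast(img, 0.4) - img,
                   2 * (adjust_contrast(img, 0.2) - img))
```

Because the change is `c * (img - pivot)`, doubling the slider exactly doubles the effect, with no style-dependent side terms.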

Style Strength Controller: An overfitting-effect reducer. Also not from training, so it has zero side effects on style and mathematically linear effects. It can reduce all kinds of overfitting effects (bias toward objects, brightness, etc.).

Effect test on Hassaku XL: The prompt contains the keyword "dark", but the model almost ignores it. Notice that at strength 0.25 this LoRA reduces the high-brightness bias and the weird smooth feeling on every surface, so the image feels more natural.

Differences from the Stabilizer:

  • Stabilizer affects style, because it was trained on real-world data. It can "reduce" overfitting effects on texture, details, and backgrounds by adding them back.

  • Style Strength Controller is not from training. It is more like "undoing" the base model's training so that it is less overfitted. It does not affect style, and it can reduce all overfitting effects, such as bias toward brightness or objects.


Update log

(6/10/2025): illus01 v1.165c

Note: This is a special version; "c" stands for "colorful" and "creative". I saw the feedback that recent versions are too "boring", and I feel it too. This time, let's try something new.

The dataset contains images that are very visually striking, e.g.:

  • Colorful, with complex lighting conditions.

  • Objects and complex patterns everywhere.

Also changed some training settings. Cleaner and sharper effects than v1.164, even at high strength (>0.8).

However, be aware that:

  • Those "visually striking" images were not in previous versions because I thought they were too much for SDXL to handle, so I removed them. In this version, they have a high weight.

  • This is not an improvement over v1.164; it is an attempt from another angle. You get "visually striking" at the cost of "natural". E.g., this version cannot generate "pencil art" texture as perfectly as v1.164.

  • A vanilla base model is recommended, so you can use high strength and get the full effect. I tested it on popular base models, and all of them suppress the effect, a lot.

(6/4/2025): illus01 v1.164

  • (Should have) better prompt understanding. Each image now has 3 natural-language captions, from different perspectives. Danbooru tags are checked by an LLM; only the important tags are picked out and fused into the natural captions.

  • Anti-overexposure. Added a bias to prevent the model's output from reaching #ffffff pure white. Most of the time #ffffff means overexposed, which loses a lot of detail.

  • Changed some training settings to make it more compatible with NoobAI, both e-pred and v-pred.
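A quick way to check whether an output image is hitting the #ffffff ceiling that the anti-overexposure bias targets. This is an external diagnostic helper sketched for illustration, not part of the model:

```python
import numpy as np

def overexposed_fraction(img_u8):
    """Fraction of pixels clipped at pure white (#ffffff) in a uint8 RGB image."""
    return float(np.mean(np.all(img_u8 == 255, axis=-1)))

# A hypothetical 4x4 RGB image with exactly one pure-white pixel.
img = np.full((4, 4, 3), 200, dtype=np.uint8)
img[0, 0] = 255

assert overexposed_fraction(img) == 1 / 16
```

A noticeable fraction of clipped pixels is a sign of lost highlight detail, which is what the #ffffff bias in v1.164 is meant to avoid.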

(5/19/2025): illus01 v1.152

  • Continued improving lighting, textures, and details.

  • 5K more images and more training steps; as a result, a stronger effect.

(5/9/2025): nbep11 v0.205

  • A quick fix for the brightness and color issues in v0.198. It should no longer shift brightness and colors as dramatically as a real photograph. v0.198 isn't bad, just creative; too creative.

(5/7/2025): nbep11 v0.198

  • Added more dark images. Fewer deformed bodies and backgrounds in dark environments.

  • Removed the color and contrast enhancement, because it's no longer needed. Use Contrast Controller instead.

(4/25/2025): nbep11 v0.172

  • Same new things as in illus01 v1.93 ~ v1.121. Summary: new photograph dataset "Touching Grass"; better natural texture, backgrounds, and lighting; weaker character effects for better compatibility.

  • Better color accuracy and stability (compared to nbep11 v0.160).

(4/17/2025): illus01 v1.121

  • Rolled back to Illustrious v0.1. Illustrious v1.0 and newer versions were deliberately trained with AI images (maybe 30% of the dataset), which is not ideal for LoRA training. I didn't notice until I read the paper.

  • Lower character-style effect, back to the v1.23 level. Characters will get fewer details from this LoRA, but should have better compatibility. This is a trade-off.

  • Everything else is the same as below (v1.113).

(4/10/2025): illus11 v1.113 ❌

  • Update: use this version only if you know your base model is based on Illustrious v1.1. Otherwise, use illus01 v1.121.

  • Trained on Illustrious v1.1.

  • New dataset "Touching Grass" added. Better natural texture, lighting, and depth-of-field effects. Better background structural stability: fewer deformed backgrounds, like deformed rooms and buildings.

  • Full natural language captions from LLM.

(3/30/2025): illus01 v1.93

  • v1.72 was trained too hard, so I reduced its overall strength. It should have better compatibility.

(3/22/2025): nbep11 v0.160

  • Same stuff as in illus01 v1.72.

(3/15/2025): illus01 v1.72

  • Same new texture and lighting dataset as mentioned under ani40z v0.4 below. More natural lighting and textures.

  • Added a small ~100-image dataset for hand enhancement, focusing on hands performing different tasks, like holding a glass or a cup.

  • Removed all "simple background" images from the dataset (-200 images).

  • Switched the training tool from kohya to OneTrainer. Changed the LoRA architecture to DoRA.

(3/4/2025) ani40z v0.4

  • Trained on Animagine XL 4.0 ani40zero.

  • Added ~1k dataset focusing on natural dynamic lighting and real world texture.

  • More natural lighting and natural textures.

ani04 v0.1

  • Initial version for Animagine XL 4.0. Mainly fixes Animagine 4.0's brightness issues. Better, higher contrast.

illus01 v1.23

nbep11 v0.138

  • Added some furry/non-human/other images to balance the dataset.

nbep11 v0.129

  • Bad version; the effect is too weak. Just ignore it.

nbep11 v0.114

  • Implemented "full-range colors". It automatically balances the image toward "normal and good-looking"; think of it as the one-click photo auto-enhance button found in most photo editing tools. One downside of this optimization: it prevents strong bias, e.g. when you want 95% of the image black and 5% bright instead of 50/50.

  • Added a little realistic data. More vivid details and lighting; fewer flat colors.
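The "full-range colors" idea above is analogous to the histogram stretch behind a photo editor's auto-enhance button. A hedged sketch of that analogy (not the training-time mechanism the author actually used):

```python
import numpy as np

def auto_full_range(img, eps=1e-8):
    """One-click "full range" rebalance: stretch values to span [0, 1].

    Like a photo editor's auto-enhance, a heavily biased image (e.g. 95% black)
    gets pulled toward a balanced range -- exactly the downside noted above.
    """
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + eps)

img = np.array([0.2, 0.3, 0.4, 0.5])  # a toy low-contrast image
out = auto_full_range(img)

assert np.isclose(out.min(), 0.0)
assert np.isclose(out.max(), 1.0, atol=1e-6)
```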

illus01 v1.7

nbep11 v0.96

  • More training images.

  • Then finetuned again on a small "wallpaper" dataset (real game wallpapers, the highest quality I could find; ~100 images). More improvement in detail (noticeable in skin and hair) and contrast.

nbep11 v0.58

  • More images. Changed the training parameters to match the NoobAI base model as closely as possible.

illus01 v1.3

nbep11 v0.30

  • More images.

nbep11 v0.11: Trained on NoobAI epsilon pred v1.1.

  • Improved dataset tags. Improved LoRA structure and weight distribution. Should be more stable and have less impact on image composition.

illus01 v1.1

  • Trained on Illustrious XL v0.1.

nbep10 v0.10

  • Trained on NoobAI epsilon pred v1.0.