
Stabilizer IL/NAI


Updated: Aug 15, 2025


Verified:

SafeTensor

Type

LoRA


Published

Jun 21, 2025

Base Model

Illustrious

Usage Tips

Strength: 0.3

Hash

AutoV2
B2BEBDBD77

Note about cover images:

  • They are raw output from the vanilla (original) base model at the default 1MP resolution. No upscaling, no plugins, no inpainting fixes. They include metadata and are 100% reproducible.

  • This LoRA carries no style of its own; every style you see comes from the base model. Refer to the XY plots for the effect of this LoRA.


Latest update:

(8/11/2025) Updated the main description. Lots of cleanup and rewriting.

  • Added: why and how this LoRA works.

  • Added: why AI-style base models are problematic.

(7/24/2025) A new Stabilizer for RouWei was released at this page.


Stabilizer

An all-in-one, no-default-style finetuned LoRA that makes a pretrained anime model look as good as it should.

The problem:

  • Anime models are trained on anime images. Anime images are simple and contain only high-level "concepts", often very abstract ones, with few backgrounds, details, or textures.

  • We want the model to learn only the high-level "concepts". In reality, the model learns what it sees, not what you want.

  • After seeing 10M+ simple, abstract anime images, the model learns that 1) it doesn't need to generate details, because you (the dataset) never asked it to, and 2) it must instead generate simple images built from abstract concepts it doesn't even understand. This leads to deformed images, a.k.a. "overfitting".

The solution:

  • Train the model on both anime and real-world images, so it learns anime "concepts" while keeping natural details and textures in mind, i.e. less overfitting.

  • NoobAI did this by mixing some real cosplay images into its dataset. (IIRC, its devs mentioned this somewhere.)

  • This LoRA goes further: it was trained on a little bit of everything. Architecture, everyday objects, clothing, landscapes, ..., even space images. It was also trained on full natural-language captions.

What can this LoRA do? If you apply it to the vanilla (pretrained, no style) base model:

  • Less overfitting, fewer deformed images. You can now use thousands of built-in style tags (Danbooru, e621), as well as the general styles that original SDXL understands, and get a clean, detailed image as it should be, whether 2D or 3D, abstract or realistic. See comparisons: 1 (artist styles), 2 (general styles)

  • Still maximum creativity, thanks to the diverse dataset: you will not get repeated elements (faces, backgrounds, etc.), unlike overfitted style LoRAs that impose a default style.

  • Natural textures and details. The training dataset contains high-resolution, high-quality real-world photographs (avg > 3MP, ~1800x1800). Zero AI images. (Compare with "detailers" trained on AI images: polluted by AI style, with shiny, smooth, plastic surfaces and no texture.)

What if I'm using a "merged" base model that has a default style? Should be OK. Most "merged" models are just the vanilla base model with some style LoRAs already merged in for you. But beware: because you cannot change the strength of those merged styles, it may be problematic to stack more LoRAs on top of such models. See the "How to use" section.

Why not finetune the full base model? I'm not a gigachad and I don't have millions of training images, so finetuning the entire base model is not necessary.

Why is this LoRA so small (40 MiB vs. 200 MiB)? It uses a newer architecture from NVIDIA called DoRA, which is more parameter-efficient than traditional LoRA.
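To see why a DoRA behaves differently from a plain LoRA, here is a minimal NumPy sketch of the published DoRA formulation (magnitude vector times the column-normalized merged weight). All names are illustrative; the point is that the effective patch depends on the base weights, while a plain LoRA patch (B @ A) does not:

```python
import numpy as np

def dora_weight(W0, A, B, m):
    # DoRA: merge the low-rank update into the base weight, normalize each
    # column to unit length, then rescale by the learned magnitude vector m.
    V = W0 + B @ A
    return m * V / np.linalg.norm(V, axis=0, keepdims=True)

rng = np.random.default_rng(0)
W0a = rng.standard_normal((8, 8))   # one "base model"
W0b = rng.standard_normal((8, 8))   # a different "base model"
A = rng.standard_normal((2, 8))     # rank-2 low-rank factors
B = rng.standard_normal((8, 2))
m = np.ones((1, 8))                 # learned per-column magnitudes

# The effective patch this adapter applies on each base:
delta_a = dora_weight(W0a, A, B, m) - W0a
delta_b = dora_weight(W0b, A, B, m) - W0b

# A plain LoRA would apply the identical delta (B @ A) to both bases;
# the DoRA delta differs because it is computed from W0 itself.
assert not np.allclose(delta_a, delta_b)
```

This base-dependence is also why the loading order of this LoRA matters, as explained in the "How to use" section.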

Is this a so-called "detailer"? I would say no. This LoRA adds details that naturally should be there but that the model forgot to add. It does not add extra details, such as more objects and decorations.

It was trained on real-world images. Is this a "realistic" model? Does it affect 2D anime characters? Nope. There are no real humans in the dataset. What the model saw is what the model learnt, so this LoRA does not know what a "real human" is.

Why do you recommend NoobAI but drop the NoobAI version of this LoRA? 1) As the dataset keeps growing, training gets more expensive and time-consuming. 2) I noticed no downgrade from using the illus version on NoobAI.

Sharing merges that use this LoRA is prohibited. FYI, there are hidden trigger words that print an invisible watermark. It works even at a merge strength of 0.05. I coded the watermark and detector myself. I don't want to use it, but I can. This model is published only on Civitai and TensorArt. If you see "me" and this sentence on other platforms, they are fake and the platform you are using is a pirate platform.

Remember to leave feedback in the comment section, so everyone can see it. Don't write feedback in the Civitai review system; it is so poorly designed that practically nobody can find or see the reviews.

Have fun.


How to use

Version prefix:

  • illus01 = Trained on Illustrious v0.1. (Recommended)

    • Supports all models based on Illustrious v0.1, including newer Illustrious models (v1, v2, etc.) and NoobAI (e-pred and v-pred).

  • nbep11 = Trained on NoobAI e-pred v1.1. (Discontinued)

    • Supports NoobAI e-pred v1.1 ONLY.

    • Note: NoobAI v-pred is based on e-pred v1.0, not v1.1. Many "NoobAI" models are also based on v1.0.

Make sure this LoRA is first in your LoRA stack! This LoRA uses a newer architecture from NVIDIA called DoRA, which is more efficient than traditional LoRA. However, unlike a traditional LoRA, which has a static patch weight, the patch weight of a DoRA is calculated from the loaded base model weights (which change as you load LoRAs). Therefore, the effect of this LoRA (DoRA) depends on the loading order.
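In a diffusers-based workflow, "first in the stack" simply means loading it before any other adapter. A minimal sketch (the checkpoint and LoRA file names are placeholders, not real files; substitute your own):

```python
from diffusers import StableDiffusionXLPipeline

# Hypothetical file names -- substitute your actual checkpoints.
pipe = StableDiffusionXLPipeline.from_single_file("noobai-xl-v1.1.safetensors")

# Load the Stabilizer FIRST: as a DoRA, its patch is computed from the
# weights present at load time, so any style LoRAs must come after it.
pipe.load_lora_weights("stabilizer_illus01.safetensors", adapter_name="stabilizer")
pipe.load_lora_weights("some_style_lora.safetensors", adapter_name="style")

# The page's Usage Tips recommend a strength of about 0.3 for the Stabilizer.
pipe.set_adapters(["stabilizer", "style"], adapter_weights=[0.3, 0.8])
```

In UI tools the same rule applies: put this LoRA at the top of the LoRA list so it is applied to the unmodified base weights.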

Base model:

Recommended: vanilla (pretrained) base models.

  • You have full control of style combination.

  • I personally would recommend NoobAI v1.1.

If you use "merged" base model:

  • Most "merged" models are just the vanilla base model with some style LoRAs already merged in for you.

  • However, because you can't change the strength of those merged styles, it will be problematic to stack more LoRAs on such models. This is why many merged base models are not "LoRA friendly".

Avoid using base model with AI style:

Many users have complained about compatibility issues with certain base models that have an AI style, so here is a detailed explanation of AI styles and why they are a problem:

  • What AI styles are: AI styles are styles trained on AI images. They are extremely overfitted, because a model can instantly learn everything from AI images. (You can easily learn and imitate what you just did, because you already know how to do it.)

  • Pros: AI styles are super stable and easy to use no matter what you prompt. 95% of popular base models have AI styles.

  • Cons:

    • AI styles lack natural details and textures, so everything feels clean/smooth/shiny like plastic. This is because AI images have fewer details than the real-world images the model originally learned from. If you train your model again on AI images, you lose even more details (think of the telephone game).

    • AI styles can suppress almost all effects from other LoRAs (because of overfitting), causing style shifting. See the comparison: top is vanilla NoobAI; bottom is WAI, which has a strong AI style, and this LoRA has almost zero effect even at strength 0.8.

    • Repeated elements (faces, hairstyles, background objects, ...).

  • The problem: you can't overlay additional detail on top of an AI style. If you want to use this LoRA to "fix" AI-style smoothness, it won't work; you need to lower the strength of the AI style first. But, as with all "merged" base models, you can't change the strength of a merged AI style.

Old versions:

A new version means new material and a new attempt. One big advantage of LoRA is that you can always mix different versions in seconds.

You can find more info in "Update log". Beware that old versions may have very different effects.

  • Now ~: Natural details and textures, stable prompt understanding, and more creativity. No longer limited to a pure 2D anime style.

    • "c" versions (illus01 v1.152~1.185c): "c" stands for "colorful", "creative", sometimes "chaotic". These versions contain training images that are very visually striking, e.g. high contrast, strong post-effects, complex lighting conditions, objects and complex patterns everywhere. You get "visually striking" but less "natural" images. They may affect styles with soft colors.

  • Illus01 v1.23 / nbep11 0.138 ~: Better anime style with vivid colors.

  • Illus01 v1.3 / nbep11 0.58 ~: Better anime style.


Dataset

(applies to the latest and recent versions)

~7k images total. Not that big (compared with the gigachads who love to finetune models on millions of images), but not small either. Every image was hand-picked by me.

  • Only normal, good-looking subjects. No crazy art styles that can't be described. No AI images, no watermarks, etc.

  • Only high-resolution images. The dataset averages 3.37 MP per image, ~1800x1800.

  • All images have natural-language captions from Google's latest LLM.

  • All anime characters are tagged by wd tagger v3 first and then by the Google LLM.

  • Contains nature, outdoors, indoors, animals, everyday objects, many things, but no real humans.

  • Contains all kinds of brightness conditions: very dark, very bright, and both at once.
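The two-stage captioning flow described above (tagger first, then an LLM) can be sketched as follows. This is only an illustration of the idea: the function and threshold are hypothetical, and the actual wd-tagger and LLM calls are placeholders, not real APIs.

```python
# Sketch: fold confident tagger output into an LLM captioning prompt.

def build_caption_prompt(tags, min_confidence=0.5):
    """Keep confident (tag, confidence) pairs from a tagger like wd tagger v3
    and fold them into a prompt asking an LLM for a natural-language caption."""
    kept = [tag for tag, conf in tags if conf >= min_confidence]
    return (
        "Write one natural-language caption for this anime image. "
        "Ground it in these tags: " + ", ".join(kept)
    )

# Illustrative tagger output; in practice this comes from the tagger model.
tags = [("1girl", 0.99), ("outdoors", 0.8), ("watermark", 0.2)]
prompt = build_caption_prompt(tags)

# Low-confidence tags are filtered out before the LLM ever sees them.
assert "outdoors" in prompt and "watermark" not in prompt
```

The resulting prompt would then be sent, together with the image, to the captioning LLM.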


Other tools

Some ideas that were going to be, or used to be, part of the Stabilizer. They are now separate LoRAs, for better flexibility. Collection link: https://civitai.com/collections/8274233.

Touching Grass: A LoRA trained on, and only on, the real-world dataset (no anime data). Has a stronger effect, with better backgrounds and lighting. Useful for gigachad users who like pure concepts and like to balance weights themselves.

Dark: A LoRA that fixes the high-brightness bias in some base models. Trained on the low-brightness images in the Touching Grass dataset. Also no humans in the dataset, so it does not affect style.

Contrast Controller: A handcrafted LoRA. (No joke, it did not come from training.) The smallest 300 KB LoRA you have ever seen. It controls contrast like a slider on your monitor. Unlike other trained "contrast enhancers", the effect of this LoRA is stable, mathematically linear, and has zero side effects on style.

Useful when your base model has an oversaturation issue, or when you want something really colorful.
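The Contrast Controller itself operates on model weights, but the "mathematically linear" behavior it aims for is the same as a classic linear contrast slider. A pixel-space analogue of that property (the function name is illustrative, not part of the LoRA):

```python
import numpy as np

def adjust_contrast(img, strength):
    # Linear contrast about mid-gray: strength 0 leaves the image unchanged,
    # positive values increase contrast, negative values decrease it.
    return (img - 0.5) * (1.0 + strength) + 0.5

img = np.array([0.2, 0.5, 0.8])

# The effect is linear in `strength`: doubling the strength exactly
# doubles the deviation from the original image.
d1 = adjust_contrast(img, 0.2) - img
d2 = adjust_contrast(img, 0.4) - img
assert np.allclose(d2, 2 * d1)
assert np.allclose(adjust_contrast(img, 0.0), img)
```

A trained "contrast enhancer" LoRA generally has no such guarantee: its effect bends nonlinearly with strength and drags style along with it.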


Style Strength Controller: Or "overfitting-effect reducer". Also a handcrafted LoRA, not from training, so it has zero side effects on style and a mathematically linear effect. Can reduce all kinds of overfitting effects (biases toward certain objects, brightness, etc.).

Effect test on Hassaku XL: the base model has many biases, e.g. high brightness, smooth shiny surfaces, prints on walls... The prompt contains the keyword "dark", but the model almost ignores it. Notice that at strength 0.25 there is less high-brightness bias and less of the weird smooth feeling on every surface; the image feels more natural.

Differences from the Stabilizer:

  • The Stabilizer was trained on real-world data. It can only "reduce" overfitting effects related to texture, details, and backgrounds, by adding them back.

  • The Style Strength Controller did not come from training. It is more like "undoing" the training of the base model, making it less overfitted. It can mathematically reduce all kinds of overfitting effects, such as biases in brightness or objects.


Update log

(7/28/2025) illus01 v1.198

Mainly compared with v1.185c:

  • End of the "c" versions. "Visually striking" is good, but it has compatibility issues, e.g. when your base model already has a similar contrast enhancement: stacking two contrast enhancements is really bad. So, no more crazy post-effects (high contrast and saturation, etc.).

  • Instead: more textures and details, cinematic lighting, and better compatibility.

  • This version changed a lot, including a dataset overhaul, so the effect will be quite different from previous versions.

  • For those who want the crazy v1.185c effects back: you can find pure, dedicated art styles on this page. If a dataset gets big enough for a LoRA, I may train one.

(6/21/2025) illus01 v1.185c:

Compared with v1.165c.

  • +100% clearness and sharpness. You can get lines one pixel wide. You can even get the texture of white paper. (No joke: realistic paper is not pure white, it has noise.) A 1MP image now feels like 2K.

  • -30% images that are too chaotic (cannot be described properly). So this version can't give you a crazy-high contrast level anymore, but it should be more stable in normal use cases.

(6/10/2025): illus01 v1.165c

This is a special version, not an improvement over v1.164. "c" stands for "colorful", "creative", sometimes "chaotic".

The dataset contains images that are very visually striking but sometimes hard to describe, e.g.: very colorful, high contrast, complex lighting conditions, objects and complex patterns everywhere.

So you get "visually striking" at the cost of "natural". It may affect styles with soft colors, etc. E.g. this version cannot generate a "pencil art" texture as perfectly as v1.164.

(6/4/2025): illus01 v1.164

  • Better prompt understanding. Each image now has 3 natural captions, written from different perspectives. Danbooru tags are checked by the LLM; only the important tags are picked out and fused into the natural caption.

  • Anti-overexposure. Added a bias to prevent the model's output from reaching #ffffff pure white. Most of the time, #ffffff means overexposure, which loses many details.

  • Changed some training settings to make it more compatible with NoobAI, both e-pred and v-pred.

(5/19/2025): illus01 v1.152

  • Continued to improve lighting, textures, and details.

  • 5k more images and more training steps; as a result, a stronger effect.

(5/9/2025): nbep11 v0.205:

  • A quick fix for the brightness and color issues in v0.198. It should no longer change brightness and colors as dramatically as a real photograph. v0.198 isn't bad, just creative, but too creative.

(5/7/2025): nbep11 v0.198:

  • Added more dark images. Less deformed bodies and backgrounds in dark environments.

  • Removed the color and contrast enhancement, because it's no longer needed. Use the Contrast Controller instead.

(4/25/2025): nbep11 v0.172.

  • The same new things as in illus01 v1.93 ~ v1.121. Summary: new photographic dataset "Touching Grass"; better natural texture, backgrounds, and lighting; weaker character effects for better compatibility.

  • Better color accuracy and stability. (Compared with nbep11 v0.160.)

(4/17/2025): illus01 v1.121.

  • Rolled back to Illustrious v0.1. Illustrious v1.0 and newer versions were deliberately trained with AI images (maybe 30% of the dataset), which is not ideal for LoRA training. I didn't notice until I read the paper.

  • Lower character style effect, back to the v1.23 level. Characters will get fewer details from this LoRA but should have better compatibility. This is a trade-off.

  • Everything else is the same as below (v1.113).

(4/10/2025): illus11 v1.113 ❌.

  • Update: use this version only if you know your base model is based on Illustrious v1.1. Otherwise, use illus01 v1.121.

  • Trained on Illustrious v1.1.

  • New dataset "Touching Grass" added. Better natural texture, lighting, and depth-of-field effects. Better background structural stability: fewer deformed backgrounds, such as deformed rooms and buildings.

  • Full natural-language captions from the LLM.

(3/30/2025): illus01 v1.93.

  • v1.72 was trained too hard, so I reduced its overall strength. Should have better compatibility.

(3/22/2025): nbep11 v0.160.

  • The same stuff as in illus v1.72.

(3/15/2025): illus01 v1.72

  • The same new texture-and-lighting dataset mentioned under ani40z v0.4 below. More natural lighting and natural textures.

  • Added a small ~100-image dataset for hand enhancement, focusing on hands performing different tasks, such as holding a glass or cup.

  • Removed all "simple background" images from the dataset. -200 images.

  • Switched the training tool from kohya to OneTrainer. Changed the LoRA architecture to DoRA.

(3/4/2025) ani40z v0.4

  • Trained on Animagine XL 4.0 ani40zero.

  • Added a ~1k-image dataset focusing on natural dynamic lighting and real-world texture.

  • More natural lighting and natural textures.

ani04 v0.1

  • Initial version for Animagine XL 4.0. Mainly fixes Animagine 4.0's brightness issues. Better and higher contrast.

illus01 v1.23

nbep11 v0.138

  • Added some furry/non-human/other images to balance the dataset.

nbep11 v0.129

  • Bad version; the effect is too weak. Just ignore it.

nbep11 v0.114

  • Implemented "full-range colors". It automatically balances everything toward "normal and good-looking". Think of it as the "one-click photo auto-enhance" button found in most photo-editing tools. One downside of this optimization: it prevents strong bias. For example, you may want 95% of the image to be black and 5% bright, instead of 50/50.

  • Added a little realistic data. More vivid details and lighting, fewer flat colors.

illus01 v1.7

nbep11 v0.96

  • More training images.

  • Then finetuned again on a small "wallpaper" dataset (real game wallpapers, the highest quality I could find, ~100 images). More improvement in details (noticeable in skin and hair) and contrast.

nbep11 v0.58

  • More images. Changed the training parameters to match the NoobAI base model as closely as possible.

illus01 v1.3

nbep11 v0.30

  • More images.

nbep11 v0.11: Trained on NoobAI epsilon pred v1.1.

  • Improved dataset tags. Improved LoRA structure and weight distribution. Should be more stable and have less impact on image composition.

illus01 v1.1

  • Trained on IllustriousXL v0.1.

nbep10 v0.10

  • Trained on NoobAI epsilon pred v1.0.