Stabilizer IL/NAI

Updated: Jun 1, 2025
Tags: style, anime styles
Verified: SafeTensor
Type: LoRA
Published: Apr 17, 2025
Base Model: Illustrious
Usage Tips: Strength 0.5
Hash: AutoV2 D8FC4B0E9E

Sharing merges that use this LoRA, or re-uploading it to other platforms, is prohibited.

All-in-one finetuned LoRA for better lighting, colors, and details. The dataset contains only high-resolution images and zero AI images, so you get real, natural texture and details beyond the pixel level.

Cover images come directly from the vanilla (original, not finetuned) base model in A1111, at 1 MP resolution. No upscaling, no face/hand inpainting fixes, not even a negative prompt. You can drop those images into A1111 and reproduce them; they contain metadata.


Recent updates:

(5/29/2025): Future plan

The dataset is getting bigger and bigger, and it is becoming difficult and expensive to train. So:

  • The NoobAI version will not be updated frequently.

  • If you like my model and want to support the training, you can support me here: https://app.unifans.io/c/fc05f3e2c72cb3f5

  • Supporters will receive early test versions: be the first to test, and give ideas and feedback on model quality during training.

  • The final version will be released as it should be; I will not delay the release date on purpose.

(5/19/2025): illus01 v1.152

  • Continued improvements to lighting, textures, and details.

  • More images and more training steps; as a result, a stronger effect.

FAQ:

  • If you want the full texture effect, avoid base models with an AI style (i.e., trained on AI images). AI styles are heavily overfitted and will instantly overpower the texture. FYI: the cover images are from the vanilla base model, and I only use vanilla models plus artist style LoRAs.

  • How do you know whether a model has an AI style? There is no good method. Personally, I look at hair (or other surfaces): the more plastic it feels (no texture, weird shiny reflections), the more AI style it probably has.

  • If you get realistic faces on anime characters, don't blame this LoRA. What it saw is what it learned: there are zero real humans in the dataset, so it has zero knowledge of realistic faces. Check whether your base model was merged with a realistic model.

.........

See more in the update log section.

.........

(3/2/2025): You can now find the REAL me on TensorArt. I've reclaimed all my models that were duplicated and faked by other accounts.


Stabilizer

It's an all-in-one finetuned LoRA: apply it to NoobAI v1.1 and you get my personal "finetuned" base model.

It focuses on natural lighting, colors, and details, and won't dramatically change the image composition. It also has no art style of its own, so it is compatible with artist style LoRAs.

The dataset contains only high-resolution images and zero AI images, so you get real, natural texture and details beyond the pixel level.

Why all-in-one? Because if you train 10 LoRAs on 10 different datasets for different aspects and stack them all, your base model will blow up. If you train those datasets in one go, there are no conflicts.

Why not finetune the full base model? Unless you have millions of training images, finetuning the entire base model is unnecessary. Fun fact: most (95%) of the base models out there are just merges of merges of merges of tons of LoRAs... Only a very few base models are truly full finetunes trained by truly gigachad creators.

Sharing merges that use this LoRA is prohibited. FYI, there are hidden trigger words that print an invisible watermark. It works even at a merge strength of 0.05. I coded the watermark and the detector myself. I don't want to use it, but I can.

Remember to leave feedback in the comment section so everyone can see it. Don't write feedback in the Civitai review system; it is so poorly designed that practically nobody can find and read the reviews.

Have fun.


How to use

Just apply it. No trigger words are needed. It also does not patch the text encoder, so you don't have to set a patch strength for the text encoder (in ComfyUI, etc.).

Because it's all-in-one, it is recommended to lower the strength of, or remove, LoRAs for lighting (they may cause oversaturation), hand fixing (not needed; this LoRA already improves hands), and details (they conflict).
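For intuition about what the strength value actually does: a LoRA stores a low-rank weight delta, and the UI's strength simply scales that delta before it is added to the base weights. A simplified pure-Python sketch (illustrative only; real implementations do this per layer on tensors, and DoRA-style variants differ in detail):

```python
# Illustrative: how a LoRA "strength" scales the learned delta.
# A LoRA stores two low-rank matrices A and B; the effective weight is
#   W_eff = W_base + strength * (B @ A)

def matmul(B, A):
    """Naive matrix multiply for small illustrative matrices."""
    return [[sum(B[i][k] * A[k][j] for k in range(len(A)))
             for j in range(len(A[0]))] for i in range(len(B))]

def apply_lora(W_base, B, A, strength):
    """Add the scaled low-rank delta to the base weight matrix."""
    delta = matmul(B, A)
    return [[W_base[i][j] + strength * delta[i][j]
             for j in range(len(W_base[0]))] for i in range(len(W_base))]

W = [[1.0, 0.0], [0.0, 1.0]]   # toy 2x2 base weight
B = [[1.0], [0.0]]             # rank-1 LoRA factors
A = [[0.0, 2.0]]

half = apply_lora(W, B, A, 0.5)  # the recommended strength 0.5
print(half)                      # [[1.0, 1.0], [0.0, 1.0]]
```

Because the delta enters linearly, halving the strength exactly halves the LoRA's contribution, which is why stacking several full-strength detail/lighting LoRAs on top of this one can push the weights too far.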

Version prefix:

  • illus01 = Trained on Illustrious v0.1. (also works on NoobAI)

  • nbep11 = Trained on NoobAI e-pred v1.1

Which version to use?

Hard to tell; you should try both versions. Or just use both together at a low strength each. Many users report that this gives noticeably better results.

Why? Because models nowadays are just merges of merges of merges. You never know what's truly inside your base model, and most model creators don't know either. Fun fact (5/10/2025): 90% of models labeled "Illustrious" are actually NoobAI, if you calculate their weight similarities.
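The weight-similarity comparison mentioned above can be approximated with a cosine similarity between flattened weight vectors. A minimal pure-Python sketch (illustrative only; a real check would iterate over the full state dicts of both checkpoints):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two flattened weight vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "weights": a merge dominated by model_b barely differs from model_b,
# so its similarity to model_b stays close to 1.
model_a = [0.12, -0.40, 0.33, 0.05]
model_b = [0.10, -0.38, 0.35, 0.07]

print(cosine_similarity(model_a, model_b) > 0.99)  # True
```

A similarity very close to 1.0 against NoobAI, and lower against Illustrious, is the kind of signal that would suggest a "mislabeled" base model.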


Dataset (latest version)

~7k images total. Every image was hand-picked by me.

  • Only normal, good-looking content. No crazy art styles, no AI images, no watermarks, etc.

  • Only high-resolution images. The dataset averages 3.37 MP per image (~1800x1800).

  • All images have natural-language captions from Google's latest LLM.

  • All anime characters are tagged first by WD Tagger v3 and then by the Google LLM.

2 main sub-datasets:

  • anime dataset: ~1k images. Character-focused, with natural poses and natural body proportions. No exaggerated art, chibi, JoJo poses, etc.

  • photographs dataset: ~6k images. Contains nature, indoor scenes, animals, buildings... many things, except humans.

    • I named this sub-dataset Touching Grass. There is a LoRA trained on this sub-dataset alone, if you want something pure.

Other small datasets:

  • wallpapers from games.

  • hands performing different tasks, like holding a cup or something.

  • clothes.

  • ...


Other tools

Some ideas that were going to be, or used to be, part of the Stabilizer. They are now separate LoRAs, for better flexibility. Collection link: https://civitai.com/collections/8274233.

Touching Grass: Trained on the photographs dataset, and only on it (no anime dataset). Has a stronger effect. Useful for gigachad users who like pure concepts and like to balance the weights themselves.

Dark: Fixes the high-brightness bias in anime models. Trained on the low-brightness images in the Touching Grass dataset. Again, there are no humans in the dataset, so it does not affect style.

Example on WAI v13.

Contrast Controller: Controls contrast like a slider on your monitor. Unlike trained "contrast enhancers", the effect of this LoRA is stable, linear, and has zero side effects on style. (Not an exaggeration: it is mathematically zero and linear, because it was not produced by training.) Example on WAI v13.

Style Strength Controller: An overfitting-effect reducer. Also not produced by training, so it has zero side effects on style and mathematically linear effects. It can reduce all kinds of overfitting effects (bias toward certain objects, brightness, etc.).

Effect test on Hassaku XL: The prompt contains the keyword "dark", but the model almost ignores it. Notice that at strength 0.25 this LoRA reduces the high-brightness bias, as well as a weird smooth feeling on every surface, so the image feels more natural.

Differences from the Stabilizer:

  • The Stabilizer affects style, because it was trained on real-world data. It can "reduce" overfitting effects on texture, details, and backgrounds by adding them back.

  • The Style Strength Controller was not produced by training. It is more like "undoing" training on the base model, making it less overfitted. It does not affect style, and it can reduce all overfitting effects, such as biases toward certain brightness or objects.


Older versions

A new version means new content and new attempts, not necessarily a better version for your base model.

You can check the "Update log" section to find old versions. It's fine to use different versions together, just like mixing base models, as long as the sum of the strengths does not exceed 1.
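The "sum of strengths must not exceed 1" rule is easy to sanity-check in a generation script. A trivial illustrative helper (the name `within_budget` is my own, not part of any tool):

```python
# Illustrative helper (hypothetical, not part of any official tool):
# check that mixed Stabilizer versions stay within the recommended
# total-strength budget (sum of strengths <= 1).

def within_budget(strengths, budget=1.0):
    """Return True if the combined LoRA strength stays within the budget."""
    return sum(strengths) <= budget

print(within_budget([0.35, 0.35]))  # two versions at low strength: True
print(within_budget([0.7, 0.6]))    # over budget: False
```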


Update log

(5/19/2025): illus01 v1.152

  • Continued improvements to lighting, textures, and details.

  • More images and more training steps; as a result, a stronger effect.

(5/9/2025): nbep11 v0.205:

  • A quick fix for the brightness and color issues in v0.198. It should no longer change brightness and colors as dramatically as a real photograph would. v0.198 isn't bad, just creative, but too creative.

(5/7/2025): nbep11 v0.198:

  • Added more dark images. Fewer deformed bodies and backgrounds in dark environments.

  • Removed the color and contrast enhancement, because it's no longer needed. Use Contrast Controller instead.

(4/25/2025): nbep11 v0.172.

  • Same new things as in illus01 v1.93 ~ v1.121. Summary: new photographs dataset "Touching Grass"; better natural texture, background, and lighting; weaker character effects for better compatibility.

  • Better color accuracy and stability (compared to nbep11 v0.160).

(4/17/2025): illus01 v1.121.

  • Rolled back to Illustrious v0.1. Illustrious v1.0 and newer versions were deliberately trained with AI images (maybe 30% of the dataset), which is not ideal for LoRA training. I didn't notice until I read their paper.

  • Lower character style effect, back to the v1.23 level. Characters will get fewer details from this LoRA but should have better compatibility. This is a trade-off.

  • Everything else is the same as below (v1.113).

(4/10/2025): illus11 v1.113 ❌.

  • Update: Use this version only if you know your base model is based on Illustrious v1.1. Otherwise, use illus01 v1.121.

  • Trained on Illustrious v1.1.

  • New dataset "Touching Grass" added. Better natural texture, lighting, and depth-of-field effects. Better background structural stability: fewer deformed backgrounds, such as deformed rooms and buildings.

  • Full natural-language captions from an LLM.

(3/30/2025): illus01 v1.93.

  • v1.72 was trained too hard, so I reduced its overall strength. It should have better compatibility.

(3/22/2025): nbep11 v0.160.

  • Same stuff as in illus01 v1.72.

(3/15/2025): illus01 v1.72

  • Same new texture and lighting dataset as mentioned under ani40z v0.4 below. More natural lighting and textures.

  • Added a small dataset (~100 images) for hand enhancement, focusing on hands performing different tasks, like holding a glass or cup.

  • Removed all "simple background" images from the dataset (−200 images).

  • Switched the training tool from kohya to OneTrainer. Changed the LoRA architecture to DoRA.

(3/4/2025) ani40z v0.4

  • Trained on Animagine XL 4.0 ani40zero.

  • Added a ~1k-image dataset focusing on natural dynamic lighting and real-world texture.

  • More natural lighting and natural textures.


Above: Added more real-world images. More natural texture and details.


ani04 v0.1

  • Initial version for Animagine XL 4.0. Mainly fixes Animagine 4.0's brightness issues. Better and higher contrast.

illus01 v1.23

nbep11 v0.138

  • Added some furry/non-human/other images to balance the dataset.

nbep11 v0.129

  • Bad version; the effect is too weak. Just ignore it.

nbep11 v0.114

  • Implemented "full-range colors". It automatically balances the image toward "normal and good-looking". Think of it as the one-click photo auto-enhance button in most photo-editing tools. One downside of this optimization: it prevents strong bias, for example when you want 95% of the image to be black and 5% bright, instead of 50/50.

  • Added a little realistic data. More vivid details and lighting, less flat color.

illus01 v1.7

nbep11 v0.96

  • More training images.

  • Then finetuned again on a small "wallpaper" dataset (real game wallpapers, the highest quality I could find; ~100 images). More improvement in details (noticeable in skin and hair) and contrast.


Above: Has a weak default style.


nbep11 v0.58

  • More images. Changed the training parameters to be as close as possible to those of the NoobAI base model.

illus01 v1.3

nbep11 v0.30

  • More images.

nbep11 v0.11: Trained on NoobAI epsilon pred v1.1.

  • Improved dataset tags. Improved LoRA structure and weight distribution. Should be more stable and have less impact on image composition.

illus01 v1.1

  • Trained on illustriousXL v0.1.

nbep10 v0.10

  • Trained on NoobAI epsilon pred v1.0.