Stabilizer IL/NAI

Updated: Jun 7, 2025
Type: LoRA (SafeTensor)
Published: Jan 10, 2025
Base Model: NoobAI
Usage Tips: Strength 0.5
Hash (AutoV2): A22BD4A285

Sharing merges that use this LoRA, or re-uploading it to other platforms, is prohibited. This model is only published on Civitai and TensorArt. If you see "me" and this sentence on any other platform, it is fake and the platform you are using is a thief.

Cover images come directly from the vanilla (the original, not finetuned) base model in a1111, at 1 MP resolution. No upscaling, no face/hand inpaint fixes, not even a negative prompt. You can drop those images into a1111 and reproduce them; they have metadata.


Recent updates:

(6/4/2025): Future plan

The dataset keeps getting bigger, and it is difficult and expensive to train. illus01 v1.164 took almost 35 hours. So:

  • The NoobAI version will not update frequently. I decided to put my main time into improving the illustrious v0.1 versions, which support NoobAI and all later illustrious versions (v1, v2, ...).

  • I opened a donation page. If you like my model and want to support training on bigger cloud GPUs, you can support me directly here: https://app.unifans.io/c/fc05f3e2c72cb3f5

(6/4/2025): illus01 v1.164

  • Better prompt understanding. Thanks to a Google LLM, each image now has 3 natural-language captions from different perspectives. Danbooru tags are checked by the LLM; only the important tags are picked out and fused into the natural captions.

  • Better lighting, textures, and details.

  • Added a bias to prevent model output from reaching #ffffff pure white. (Most of the time #ffffff == overexposed, which loses many details.)

  • Changed some training settings to make it more compatible with NoobAI, both e-pred and v-pred.
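The anti-#ffffff bias above can be pictured as a toy post-processing step. This is only a sketch of the idea in pixel space (the actual bias is baked in at training time, and the 254/255 ceiling here is an arbitrary choice for illustration):

```python
import numpy as np

def cap_pure_white(img, ceiling=254 / 255):
    """Rescale an image in [0, 1] so no pixel reaches pure white (#ffffff).

    Pure white usually means overexposure: any detail in that region is
    gone. Keeping the peak just below 1.0 preserves highlight detail.
    """
    img = np.asarray(img, dtype=np.float64)
    peak = img.max()
    if peak > ceiling:
        img = img * (ceiling / peak)  # uniform rescale, preserves ratios
    return img

# A gradient that ends in pure white gets pulled just under the ceiling:
demo = np.linspace(0.0, 1.0, 5)
capped = cap_pure_white(demo)
```

Images that never reach the ceiling pass through unchanged, so the cap only acts on overexposed outputs.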

Advice about contrast:

Recent versions removed the contrast/color-saturation enhancement to make the LoRA more stable and compatible. There are many "enhancement" LoRAs for different aspects, and I think it's important that users can choose the one they like.

If you want the high contrast and saturation of the old versions, you can stack nbep11 v0.160 / illus01 v1.93 (aggressive color enhancement).

If you think the characters are too realistic, you can stack nbep11 v0.58 / illus01 v1.3 (pure 2D anime data).

This is not a workaround; I use the old versions all the time. My settings on vanilla NoobAI v1.1: <latest version> + 0.5 x nbep11 v0.160 + 0.6 x nbep11 v0.58

Example: https://civitai.com/images/80772481
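In a1111 that kind of stack is just multiple LoRA tags in the prompt, one per version, each with its weight. A sketch with hypothetical filenames (use whatever your downloaded files are actually named):

```
masterpiece, 1girl, outdoors <lora:stabilizer_latest:1.0> <lora:stabilizer_nbep11_v0.160:0.5> <lora:stabilizer_nbep11_v0.58:0.6>
```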

.........

See more in the update log section.

.........

(3/2/2025): You can find the REAL me on TensorArt now. I've reclaimed all my models that were duplicated and faked by other accounts.


Stabilizer

It's an all-in-one finetune LoRA. If you apply it to NoobAI v1.1, you get my personal "finetuned" base model.

This finetune LoRA focuses on natural lighting and details, reduced overfitting effects, stable prompt understanding, and more creativity. It is not an overfitted style LoRA and has no default art style, so it is highly compatible with artist styles, character tags, and other LoRAs: you get a clean and stable style, as it should be.

The dataset contains only high-resolution images, and zero AI images, so you get real texture and detail beyond the pixel level instead of fake objects and fake sharpness.

Why all-in-one? Because if you train 10 LoRAs on 10 different datasets for different aspects and stack them all up, your base model will blow up. If you train those datasets in one go, there are no conflicts.

Why not finetune the full base model? Unless you have millions of training images, finetuning the entire base model is not necessary. Fun fact: most (95%) base models out there are just merges of merges of merges of tons of LoRAs... Only a very few base models are truly fully finetuned, trained by truly gigachad creators.

Sharing merges that use this LoRA is prohibited. FYI, there are hidden trigger words that print an invisible watermark. It works even at a merge strength of 0.05. I coded the watermark and the detector myself. I don't want to use it, but I can.

Remember to leave feedback in the comment section, so everyone can see it. Don't write feedback in the Civitai review system; it is so poorly designed that practically nobody can find or read the reviews.

Have fun.


How to use

Just apply it. No trigger words needed. It also does not patch the text encoder, so you don't have to set a text-encoder patch strength (in ComfyUI, etc.).

Because it's all-in-one, it is recommended to lower the strength of, or remove, LoRAs that are for:

  • hand fixes (not needed; this LoRA already improves hands),

  • lighting or color improvement (may cause oversaturation, because they may use "noise offset"),

  • detailing (may conflict and cause burned images or color blobs).

Version prefix:

  • illus01 = Trained on Illustrious v0.1 (also works on NoobAI at low strength).

  • nbep11 = Trained on NoobAI e-pred v1.1.

Which version to use, illus01 or nbep11?

Hard to tell; you should try both versions. Or just use both, each at low strength; many users report noticeably better results that way.

Because models nowadays are just merges of merges of merges, it's all a mess: you never really know what's inside your base model. 90% of models labeled "illustrious" are actually "mainly" NoobAI if you calculate their weight similarities. However, even a "mainly" NoobAI model still sits "behind" the final NoobAI version (but ahead of illustrious), so the illus version of this LoRA sometimes works better than the nbep version.
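A rough sketch of what "calculate their weight similarities" could look like: compare the two checkpoints' state dicts key by key with cosine similarity. Toy in-memory tensors are used here; in practice you would load the two checkpoints' state dicts (e.g. from safetensors files) instead:

```python
import numpy as np

def weight_similarity(sd_a, sd_b):
    """Average cosine similarity over the tensors two checkpoints share."""
    shared = sorted(set(sd_a) & set(sd_b))
    sims = []
    for key in shared:
        a = sd_a[key].ravel().astype(np.float64)
        b = sd_b[key].ravel().astype(np.float64)
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        if denom > 0:
            sims.append(float(a @ b) / denom)
    return sum(sims) / len(sims) if sims else 0.0

# Toy "checkpoints": identical weights score 1.0, unrelated ones near 0.
rng = np.random.default_rng(0)
base = {f"block.{i}.weight": rng.standard_normal((64, 64)) for i in range(4)}
same = {k: v.copy() for k, v in base.items()}
other = {k: rng.standard_normal(v.shape) for k, v in base.items()}
```

A merge that is "mainly" NoobAI would score much closer to NoobAI than to Illustrious under this kind of comparison.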

Note: base-model merges that span a huge training gap are always problematic for all LoRAs, not just this one. If you love using LoRAs and want to avoid this mess, just avoid that kind of merged base model.


FAQ

I can't get the texture effect like your cover images.

  • If you want 100% of the texture effect, avoid base models/LoRAs with an AI style (i.e., trained on AI images). AI styles are super-overfitted and will overwrite the texture instantly. FYI, the cover images come from the vanilla base model.

How to know if it is AI style?

  • No reliable method. Personally, I look at the hair (or other surfaces): the more plastic it feels (no texture, weird shiny reflections), the more AI style it probably has.

I got oversaturated image.

  • Your base model (or one of your LoRAs) was trained with "noise offset". I recommend avoiding such models: "noise offset" is a training-time trick to enhance extreme colors, very handy but problematic. This LoRA's dataset natively covers the full color range, so any additional noise offset makes the output image "overshoot".
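For context, the "noise offset" trick mentioned above adds a constant per-channel shift to the training noise, which teaches the model to push whole-image brightness toward extremes. A minimal sketch of the idea (the 0.1 value is a commonly used example, not something from this LoRA):

```python
import numpy as np

rng = np.random.default_rng(0)

def training_noise(latent_shape, offset=0.1):
    """Gaussian training noise plus a per-channel constant shift."""
    b, c, h, w = latent_shape
    noise = rng.standard_normal(latent_shape)
    # The offset term is constant across each channel's spatial dims,
    # which biases the denoiser toward global brightness/color shifts.
    noise += offset * rng.standard_normal((b, c, 1, 1))
    return noise

noise = training_noise((2, 4, 8, 8), offset=0.1)
```

With offset=0.0 this reduces to plain standard-normal noise, which is why stacking an offset-trained model on top of a full-range dataset overshoots.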

I got realistic faces on my anime characters.

  • This LoRA has zero knowledge of realistic faces. Most likely your base model was mixed with realistic models. This includes NoobAI: NoobAI was trained with many realistic images (according to its update logs, mainly cosplay) to reduce overfitting.

  • It sounds counter-intuitive, but this is a good thing: it means the model can tolerate more style strength without breaking.

  • If this happens, just raise the strength of other style LoRAs to override the realistic effects, or lower the strength of this LoRA.


Dataset (latest version)

~7k images total. Every image was hand-picked by me.

  • Only normal, good-looking things. No crazy art styles, no AI images, no watermarks, etc.

  • Only high-resolution images. The dataset-wide average is 3.37 MP, ~1800x1800.

  • All images have natural-language captions from Google's latest LLM.

  • All anime characters are tagged with wd tagger v3 first, then by the Google LLM.

2 main sub datasets:

  • anime dataset: ~1k images. Character-focused. Natural poses, natural body proportions. No exaggerated art, chibi, JoJo poses, etc.

  • photographs dataset: ~5k images. Contains nature, indoor scenes, animals, buildings... many things, except humans.

    • I named this sub-dataset Touching Grass. There is a LoRA trained only on this sub-dataset, if you want something pure.

Other small datasets:

  • wallpapers from games.

  • hands performing different tasks, like holding a cup.

  • clothes.

  • ...


Other tools

Some ideas that were going to be, or used to be, part of the Stabilizer. They are now separate LoRAs, for better flexibility. Collection link: https://civitai.com/collections/8274233.

Touching Grass: Trained on, and only on, the photographs dataset (no anime dataset). Has a stronger effect. Useful for gigachad users who like pure concepts and like to balance the weights themselves.

Dark: Fixes the high-brightness bias in anime models. Trained on the low-brightness images in the Touching Grass dataset. Also, no humans in the dataset, so it does not affect style.

Example on WAI v13.

Contrast Controller: Controls contrast like a slider on your monitor. Unlike trained "contrast enhancers", the effect of this LoRA is stable, linear, and has zero side effects on style. (Not an exaggeration: it is mathematically zero and linear, because it was not produced by training.) Example on WAI v13.
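"Mathematically linear" contrast can be pictured as a simple linear remap around a midpoint, like a monitor slider. A toy pixel-space analogy of the idea (the LoRA itself operates on model weights, not pixels):

```python
import numpy as np

def linear_contrast(img, strength, mid=0.5):
    """Linearly scale each pixel's distance from a midpoint.

    strength 1.0 is the identity, >1 increases contrast, <1 decreases
    it. Nothing else about the image changes.
    """
    return mid + strength * (np.asarray(img, dtype=np.float64) - mid)

img = np.array([0.2, 0.5, 0.8])
```

Because the map is linear in both the pixel values and the strength, halving the strength exactly halves the effect, which is what a trained "contrast enhancer" cannot guarantee.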

Style Strength Controller: An overfitting-effect reducer. Also not from training, so it has zero side effects on style and a mathematically linear effect. Can reduce all kinds of overfitting effects (bias toward certain objects, brightness, etc.).

Effect test on Hassaku XL: the prompt contains the keyword "dark", but the model almost ignores it. Notice that at strength 0.25 this LoRA reduces the high-brightness bias and the weird smooth feeling on every surface, so the image feels more natural.

Differences from the Stabilizer:

  • The Stabilizer affects style, because it was trained on real-world data. It can "reduce" overfitting effects in texture, details, and backgrounds by adding them back.

  • The Style Strength Controller is not from training. It is more like "undoing" training on the base model, leaving it less overfitted. It does not affect style, and it can reduce all overfitting effects, such as bias toward brightness or objects.


Older versions

New version == new stuff and a new attempt.

It's fine to use different versions together, just like mixing base models. Check the "Update log" section to find old versions.

Here are the most distinctive old versions:

  • nbep11 v0.160 / illus01 v1.93: Has color enhancement; high contrast and saturation. Note: later versions removed this enhancement because there are many awesome enhancement LoRAs for different aspects, and I think it's important that users can choose the one they like. I still use this version all the time.

  • nbep11 v0.138 / illus01 v1.23: Weaker effect, but maximum compatibility.

  • nbep11 v0.58 / illus01 v1.3: Pure 2D images. Useful if you are using a 2.5D model and think the characters are a bit too realistic.

FYI: I use old versions all the time. My settings on vanilla NoobAI v1.1: <latest version> + 0.5 x nbep11 v0.160 + 0.5 x nbep11 v0.58

Example: https://civitai.com/images/80772481


Update log

(5/19/2025): illus01 v1.152

  • Continued improving lighting, textures, and details.

  • 5k more images and more training steps; as a result, a stronger effect.

(5/9/2025): nbep11 v0.205:

  • A quick fix for the brightness and color issues in v0.198. It should no longer shift brightness and colors as dramatically as a real photograph. v0.198 isn't bad, just creative, in fact too creative.

(5/7/2025): nbep11 v0.198:

  • Added more dark images. Fewer deformed bodies and backgrounds in dark environments.

  • Removed the color and contrast enhancement, because it is no longer needed. Use the Contrast Controller instead.

(4/25/2025): nbep11 v0.172.

  • Same new things as in illus01 v1.93 ~ v1.121. Summary: new photographs dataset "Touching Grass"; better natural texture, backgrounds, and lighting; weaker character effects for better compatibility.

  • Better color accuracy and stability (compared to nbep11 v0.160).

(4/17/2025): illus01 v1.121.

  • Rolled back to Illustrious v0.1. Illustrious v1.0 and newer versions were deliberately trained with AI images (maybe 30% of the dataset), which is not ideal for LoRA training. I didn't notice until I read the paper.

  • Lower character style effect, back to the v1.23 level. Characters get fewer details from this LoRA, but compatibility should be better. This is a trade-off.

  • Everything else is the same as below (v1.113).

(4/10/2025): illus11 v1.113 ❌.

  • Update: use this version only if you know your base model is based on Illustrious v1.1. Otherwise, use illus01 v1.121.

  • Trained on Illustrious v1.1.

  • New dataset "Touching Grass" added. Better natural texture, lighting, and depth-of-field effects. Better background structural stability: fewer deformed backgrounds, like deformed rooms and buildings.

  • Full natural language captions from LLM.

(3/30/2025): illus01 v1.93.

  • v1.72 was trained too hard, so I reduced its overall strength. It should have better compatibility.

(3/22/2025): nbep11 v0.160.

  • Same stuff as in illus01 v1.72.

(3/15/2025): illus01 v1.72

  • Same new texture and lighting dataset as mentioned in ani40z v0.4 below. More natural lighting and textures.

  • Added a small (~100 images) dataset for hand enhancement, focusing on hands performing different tasks, like holding a glass or a cup.

  • Removed all "simple background" images from the dataset (-200 images).

  • Switched training tools from kohya to OneTrainer. Changed the LoRA architecture to DoRA.

(3/4/2025) ani40z v0.4

  • Trained on Animagine XL 4.0 ani40zero.

  • Added ~1k dataset focusing on natural dynamic lighting and real world texture.

  • More natural lighting and natural textures.


BIG CHANGES: Added more real-world images. More natural texture and details.


ani04 v0.1

  • Init version for Animagine XL 4.0. Mainly to fix Animagine 4.0 brightness issues. Better and higher contrast.

illus01 v1.23

nbep11 v0.138

  • Added some furry/non-human/other images to balance the dataset.

nbep11 v0.129

  • Bad version; the effect is too weak. Just ignore it.

nbep11 v0.114

  • Implemented "full-range colors". It automatically balances the image toward "normal and good-looking". Think of it as the one-click photo auto-enhance button in most photo-editing tools. One downside of this optimization: it prevents strong intentional bias, for example when you want 95% of the image to be black and only 5% bright, instead of 50/50.

  • Added a little realistic data. More vivid details and lighting, fewer flat colors.
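The "one-click auto enhance" analogy for full-range colors is essentially histogram stretching. A minimal pixel-space sketch of that idea (an analogy only, not how the LoRA works internally):

```python
import numpy as np

def stretch_to_full_range(img, eps=1e-12):
    """Linearly remap an image so its darkest pixel becomes 0 and its
    brightest becomes 1 (the classic "auto levels" operation)."""
    img = np.asarray(img, dtype=np.float64)
    lo, hi = img.min(), img.max()
    return (img - lo) / max(hi - lo, eps)

# A washed-out image confined to [0.3, 0.6] gets expanded to [0, 1].
# Note the downside mentioned above: a deliberately dark, low-range
# image would also get pulled toward the full range.
flat = np.array([0.3, 0.45, 0.6])
```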

illus01 v1.7

nbep11 v0.96

  • More training images.

  • Then finetuned again on a small "wallpaper" dataset (real game wallpapers, the highest quality I could find; ~100 images). More improvements in detail (noticeable in skin and hair) and contrast.


BIG CHANGES: Has a weak style, mainly from wallpapers of popular games.


nbep11 v0.58

  • More images. Changed the training parameters to be as close as possible to those of the NoobAI base model.

illus01 v1.3

nbep11 v0.30

  • More images.

nbep11 v0.11: Trained on NoobAI epsilon pred v1.1.

  • Improved dataset tags. Improved LoRA structure and weight distribution. Should be more stable and have less impact on image composition.

illus01 v1.1

  • Trained on illustriousXL v0.1.

nbep10 v0.10

  • Trained on NoobAI epsilon pred v1.0.