Updated: Sep 13, 2025
Latest update:
(9/7/2025) You can find me at TensorArt.
Stabilizer
This is my finetuned base model, but in a LoRA.
This finetuned base model:
Focuses on creativity rather than a fixed style. The dataset is very diverse, so this model has no default style (bias) limiting its creativity.
Only natural textures, lighting, and the finest details. No plastic AI style (the same AI faces, hairstyles, smooth textureless surfaces, etc.). I handpicked every image in the dataset; it contains zero AI images.
Fewer deformed images. More logical output. Better backgrounds and composition.
Better prompt comprehension. Trained with natural-language captions.
Cover images are raw outputs at the default 1 MP resolution: no upscaling, no plugins, no inpainting fixes. They include metadata and are 100% reproducible.
The styles in the cover images come from the pretrained base model, triggered by prompt; they are not in my dataset. You can see that the pretrained model knows those styles but can't generate them properly because it overfitted to anime data. This model fixes that overfitting problem. See the "Why and how this works" section below.
Why no default style?
What is "default style": If a model has a default style (bias), it means no matter what you prompted, the model must generate the same things (faces, backgrounds, feelings) that make up the default style.
Pros: It is easy to use, you won't have to prompt style anymore.
Cons: But you can not overwrite it either. If you prompt something that does not fit the default style, the model will just ignore it. If you stack more styles, the default style will always overlap/pollute/limit other styles.
"no default style" means no bias, and you need to specify the style you want, by tags or LoRAs. But there will be no style overlapping/pollution from this model. You can get the style you stacked exactly as it should be.
Why is this "finetuned base model" a LoRA?
I'm not a gigachad and don't have millions of training images. Finetuning the whole base model is not necessary; a LoRA is enough.
I save tons of VRAM, so I can use a bigger batch size.
I only have to upload, and you only need to download, a tiny 40 MiB file instead of a 7 GiB checkpoint, which saves 99.4% of bandwidth and storage.
So I can spam updates. This LoRA may seem small, but it is still powerful, because it uses DoRA, a newer architecture from NVIDIA that is more efficient than traditional LoRA.
Then how do I get this "finetuned base model"?
Simple.
pretrained base model + This LoRA = the "finetuned base model"
Just load this LoRA on the pretrained base model at full strength; the pretrained base model then becomes the finetuned base model. See "How to use" below.
Sharing merges that use this model is prohibited. FYI, there are hidden trigger words that print an invisible watermark. I coded the watermark and its detector myself. I don't want to use it, but I can.
This model is published only on Civitai and TensorArt. If you see "me" and this sentence on any other platform, it is fake and the platform you are using is a pirate platform.
Please leave feedback in the comments section, so everyone can see it. Don't write feedback in the Civitai review system; it is so poorly designed that practically nobody can find and read the reviews.
How to use
Latest versions:
nbvp10 v0.271 (trained on NoobAI v-pred v1.0).
Accurate colors and the finest details. This is the best version so far.
nbep10 v0.273 (trained on NoobAI eps v1.0). Discontinued.
Less saturation and contrast compared to the v-pred models, due to a "small design flaw" in standard epsilon (eps) prediction that keeps the model from reaching the broader color range. That's why v-pred came later; see the short note after this list.
illus01 v1.198 (trained on Illustrious v0.1). Discontinued.
Just too old...
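A compressed sketch of the standard explanation behind that eps flaw, summarized by me from the zero-terminal-SNR literature (Lin et al., "Common Diffusion Noise Schedules and Sample Steps Are Flawed"), not from this model's docs:

```latex
% Forward process: x_t = \alpha_t x_0 + \sigma_t \epsilon
% With zero terminal SNR: \alpha_T = 0, \sigma_T = 1, so x_T = \epsilon
% eps target at t = T:  \epsilon = x_T  (equals the input; says nothing about x_0)
% v   target at t = T:  v_T = \alpha_T \epsilon - \sigma_T x_0 = -x_0
```

At the final timestep, the eps target carries no information about the image, so an eps model never learns to commit to global brightness and drifts toward mid-gray; the v target is the (negated) image itself, which is why v-pred can reach true black and true white.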
Note: load this LoRA first in your LoRA stack.
This LoRA uses DoRA, a newer architecture from NVIDIA that is more efficient than traditional LoRA. However, unlike a traditional LoRA, which has a static patch weight, a DoRA's patch weight is calculated dynamically from the currently loaded base model weights (which change as you load LoRAs). To minimize unexpected changes, load this LoRA first.
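To see why load order matters, here is a minimal numeric sketch of DoRA's weight decomposition (W' = m * (W0 + BA) / ||W0 + BA|| per column, per the DoRA paper). The shapes, values, and magnitude vector are illustrative stand-ins, not the real model's:

```python
import numpy as np

def dora_weight(W0, B, A, m):
    V = W0 + B @ A                                    # base weight + low-rank delta
    return m * V / np.linalg.norm(V, axis=0, keepdims=True)  # rescale each column

rng = np.random.default_rng(0)
W0 = rng.normal(size=(4, 4))                          # "clean" base weight
B, A = rng.normal(size=(4, 2)), rng.normal(size=(2, 4))
m = np.linalg.norm(W0, axis=0, keepdims=True)         # stand-in for the learned magnitude

patch_clean = dora_weight(W0, B, A, m) - W0           # patch computed on the clean base
W0_mixed = W0 + 0.1 * rng.normal(size=(4, 4))         # base after another LoRA merged in
patch_mixed = dora_weight(W0_mixed, B, A, m) - W0_mixed
print(np.allclose(patch_clean, patch_mixed))          # False: the patch depends on W0
```

A static LoRA patch (BA alone) would be identical in both cases; DoRA's is not, which is why loading this LoRA first keeps its effect closest to how it was trained.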
Two ways to use this model:
1). Use it as a finetuned base model (Recommended):
If you want the finest natural details, and to build the style combination you want with full control.
Just load this LoRA first on the pretrained base model at full strength; the pretrained base model then becomes the finetuned base model (see the sketch after these two options).
2). Use it as a LoRA on another finetuned base model.
Because why not, it's a LoRA after all.
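A minimal sketch of option 1 using diffusers (one possible toolchain; this page doesn't prescribe one). The file names are placeholders, and this assumes your diffusers/PEFT version can load DoRA-format LoRAs:

```python
import torch
from diffusers import StableDiffusionXLPipeline, EulerDiscreteScheduler

# Placeholder file names; substitute your actual downloads.
pipe = StableDiffusionXLPipeline.from_single_file(
    "noobai_vpred_v1.0.safetensors", torch_dtype=torch.float16
).to("cuda")
# NoobAI v-pred wants a v-prediction scheduler with zero-terminal-SNR rescaling.
pipe.scheduler = EulerDiscreteScheduler.from_config(
    pipe.scheduler.config, prediction_type="v_prediction", rescale_betas_zero_snr=True
)

# Load the Stabilizer FIRST, then stack style LoRAs after it.
pipe.load_lora_weights("stabilizer_nbvp10.safetensors", adapter_name="stabilizer")
pipe.load_lora_weights("some_style_lora.safetensors", adapter_name="style")
pipe.set_adapters(["stabilizer", "style"], adapter_weights=[1.0, 0.8])  # full strength first

image = pipe("your prompt here", num_inference_steps=28).images[0]
image.save("out.png")
```

For option 2, the same code applies; just point from_single_file at your finetuned base model instead.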
Things to note:
Important: if you are using an "Illustrious" base model, check which pretrained base model it is actually based on. Most popular "Illustrious" anime base models are based on (or close to) NoobAI, not Illustrious. Read more (why those base models are mislabeled and how to test yours) in "Discussion". A LoRA needs to match its pretrained base model; a mismatched base model will degrade image quality.
You are about to merge two base models. If your base model already has a very strong default style, simply adding this LoRA usually won't give you what you expect. You may need to balance other weights (LoRAs, U-Net blocks, etc.).
This model cannot add natural details to base models with AI style (trained on AI images; everything feels smooth, shiny, textureless, and plastic-looking). I know some of you chose this model to get rid of that AI-style smoothness in your current base model. Unfortunately, it won't work, because AI style is extremely overfitted (you can instantly recognize it, and so can an AI model trained on AI images). And because AI images have fewer details than real-world images, the model also learns to suppress details, which is really problematic. Once AI style is in, you can't get rid of it.
This model is not a magical tool that lets you stack more LoRAs on a base model without breaking it. I know the name of this model can be misleading.
Why and how this works:
The problem of overfitting:
Anime models are trained on anime images. Anime images are simple and contain only high-level "concepts", often very abstract. There are hardly any backgrounds, details, or textures.
We want the model to learn only the high-level "concepts". In fact, the model learns what it sees, not what you want.
After seeing 10M+ simple, abstract anime images, the model learns that 1) it doesn't need to generate details, because you (the dataset) never told it to; and 2) it must instead generate simple images with abstract concepts it doesn't even understand. This leads to deformed images, a.k.a. "overfitting".
The solution:
Train the model on both anime and real-world images, so it can learn concepts while still keeping natural details and textures in mind, i.e. less overfitting.
NoobAI did this by mixing some real cosplay images into its dataset. (IIRC, its devs mentioned this somewhere.)
This model goes further: it was trained on a little bit of everything. Architecture, everyday objects, clothing, landscapes, and more. It also uses full, multi-level, natural-language captions, to mimic the original SDXL training setup.
The result:
See the with/without comparisons: 1 (artist styles), 2 (general styles)
Less overfitting, fewer deformed images. More natural textures, lighting, and details. Now you can use thousands of built-in style tags (Danbooru, e621), as well as the general styles the original SDXL understands, and get a clean, detailed image as it should be, whether 2D or 3D, abstract or realistic.
Still maximum creativity. Thanks to the diverse dataset, there is no default style, so this model does not limit the creativity of the pretrained model or of other style LoRAs.
Dataset
Latest or recent versions:
~7k images total. Not that big (compared to the gigachads who love finetuning models with millions of images), but not small either. And every image is hand-picked by me.
Only normal, good-looking things. No crazy art styles that cannot be described. No AI images, no watermarks, etc.
Only high-resolution images. The dataset averages 3.37 MP per image, ~1800x1800.
All images have natural captions from Google's latest LLM.
All anime characters are tagged by wd tagger v3 first and then by Google's LLM (see the sketch after this list).
Contains nature, outdoors, indoors, animals, everyday objects, many things, except real humans.
Contains all kinds of brightness conditions: very dark, very bright, and both extremes in the same image.
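A hypothetical sketch of that two-stage tag-then-caption flow; both helper bodies are stand-ins, since the actual pipeline isn't published here:

```python
from typing import List

def wd_tag(image_path: str) -> List[str]:
    """Stand-in for wd tagger v3 (really an ONNX image classifier)."""
    return ["1girl", "hatsune_miku", "outdoors", "night"]

def llm_caption(image_path: str, tags: List[str]) -> str:
    """Stand-in for a multimodal LLM call; the tags keep character names accurate."""
    return "Hatsune Miku stands outdoors at night under soft lantern light."

def build_caption(image_path: str) -> str:
    tags = wd_tag(image_path)             # stage 1: Danbooru-style character tags
    return llm_caption(image_path, tags)  # stage 2: natural-language caption

print(build_caption("sample.png"))
```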
Other tools
Some ideas that were going to be, or used to be, part of the Stabilizer. They are now separate LoRAs, for better flexibility. Collection link: https://civitai.com/collections/8274233.
Dark: a LoRA biased toward darker environments. Useful for fixing the high-brightness bias in some base models. Trained on low-brightness images. No style bias, so no style pollution.
Contrast Controller: a handcrafted LoRA. Controls contrast like a slider on your monitor. Unlike other trained "contrast enhancers", the effect of this LoRA is stable, mathematically linear, and has zero side effects on style (see the sketch after this list).
Useful when your base model has an oversaturation issue, or when you want something really colorful.
Style Strength Controller: a.k.a. the overfitting-effect reducer. Can mathematically reduce all kinds of overfitting effects (bias toward objects, brightness, etc.), or amplify them, if you want.
Differences from the Stabilizer:
The Stabilizer was trained on real-world data. It can only "reduce" overfitting effects related to texture, details, and backgrounds, by adding them back.
The Style Strength Controller does not come from training. It is more like "undoing" the base model's training, making it less overfitted. It can mathematically reduce all overfitting effects, such as bias toward brightness or objects.
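As referenced above, a minimal sketch of what a "mathematically linear" contrast control means at the pixel level; this illustrates the behavior, not the LoRA's actual mechanism:

```python
import numpy as np

def adjust_contrast(img: np.ndarray, k: float) -> np.ndarray:
    """img in [0, 1]; k > 1 raises contrast, k < 1 lowers it, k == 1 is identity."""
    return np.clip((img - 0.5) * k + 0.5, 0.0, 1.0)  # linear scaling around mid-gray

# e.g. adjust_contrast(image, 1.3) for more punch, or 0.7 to tame oversaturation
```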
Old versions:
You can find more info in the "Update log". Beware that old versions may behave very differently.
Main timeline:
Now ~: Natural details and textures, stable prompt understanding, and more creativity. No longer limited to pure 2D anime style.
illus01 v1.23 / nbep11 0.138 ~: Better anime style with vivid colors.
illus01 v1.3 / nbep11 0.58 ~: Better anime style.
Update log
(8/31/2025) NoobAI ep10 v0.273
This version is trained from the start on NoobAI eps v1.0.
Compared to the previous illus01 v1.198:
Better, more balanced brightness in extreme conditions. (Same as nbvp v0.271.)
Better textures and details. It has more training steps on high-SNR timesteps. (The illus01 versions skipped those timesteps for better compatibility; since all base models are now NoobAI, there is no need to skip them.)
(8/24/2025) NoobAI v-pred v0.271:
Compared to the previous v0.264:
Better, more balanced lighting in extreme conditions; less bias.
High contrast: pure black (0) and pure white (255) in the same image, even in the same place, with no overflow and no oversaturation. Now you can have all of them at once.
(The old v0.264 tried to cap the image between ~10 and ~250 to avoid overflow, and still had a noticeable bias issue; the overall image could be too dark or too bright.)
Same as v0.264: prefer high or full strength (0.9~1).
(8/17/2025) NoobAI v-pred v0.264:
First version trained on NoobAI v-pred.
It gives you better lighting and less overflow.
Note: prefer high or full strength (0.9~1).
(7/28/2025) illus01 v1.198
Mainly compared to v1.185c:
End of the "c" line. "Visually striking" is good, but it has compatibility issues, e.g. when your base model already has a similar contrast enhancement; stacking two contrast enhancements is really bad. So: no more crazy post-effects (high contrast and saturation, etc.).
Instead: more textures and details, cinematic lighting, and better compatibility.
This version changed a lot of things, including a dataset overhaul, so its effect will be quite different from previous versions.
For those who want the crazy v1.185c effects back: you can find pure, dedicated art styles on this page. If a dataset is big enough for a LoRA, I may train one.
(6/21/2025) illus01 v1.185c:
Compared to v1.165c:
+100% clearness and sharpness.
-30% images that are too chaotic (cannot be described properly). So this version can't give you a crazy-high contrast level anymore, but it should be more stable in normal use cases.
(6/10/2025): illus01 v1.165c
This is a special version, not an improvement of v1.164. "c" stands for "colorful", "creative", and sometimes "chaotic".
The dataset contains images that are very visually striking but sometimes hard to describe, e.g.: very colorful, high contrast, complex lighting conditions, objects and complex patterns everywhere.
So you get "visually striking", but at the cost of "natural". This may affect styles with soft colors, etc. For example, this version cannot generate "pencil art" texture as perfectly as v1.164.
(6/4/2025): illus01 v1.164
Better prompt understanding. Each image now has three natural captions, from different perspectives. Danbooru tags are checked by an LLM; only the important tags are picked out and fused into the natural caption.
Anti-overexposure: added a bias that prevents the model's output from reaching #ffffff pure white. Most of the time, #ffffff means overexposed, which loses many details.
Changed some training settings to make it more compatible with NoobAI, both eps-pred and v-pred.
(5/19/2025): illus01 v1.152
Continued improving lighting, textures, and details.
5K more images and more training steps; as a result, a stronger effect.
(5/9/2025): nbep11 v0.205:
A quick fix for the brightness and color issues in v0.198. It should no longer change brightness and colors as dramatically as a real photograph would. v0.198 wasn't bad, just creative, but too creative.
(5/7/2025): nbep11 v0.198:
Added more dark images. Fewer deformed bodies and backgrounds in dark environments.
Removed the color and contrast enhancement, since it's no longer needed. Use the Contrast Controller instead.
(4/25/2025): nbep11 v0.172.
Same new things as in illus01 v1.93 ~ v1.121. Summary: new "Touching Grass" photograph dataset; better natural textures, backgrounds, and lighting; weaker character effects for better compatibility.
Better color accuracy and stability (compared to nbep11 v0.160).
(4/17/2025): illus01 v1.121.
Rolled back to Illustrious v0.1. Illustrious v1.0 and newer versions were deliberately trained with AI images (maybe 30% of the dataset), which is not ideal for LoRA training. I didn't notice until I read its paper.
Lower character-style effect, back to the v1.23 level. Characters will get fewer details from this LoRA but should have better compatibility. This is a trade-off.
Everything else is the same as below (v1.113).
(4/10/2025): illus11 v1.113 ❌.
Update: use this version only if you know your base model is based on Illustrious v1.1; otherwise, use illus01 v1.121.
Trained on Illustrious v1.1.
New "Touching Grass" dataset added. Better natural textures, lighting, and depth-of-field effects. Better background structural stability; fewer deformed backgrounds, such as deformed rooms and buildings.
Full natural language captions from LLM.
(3/30/2025): illus01 v1.93.
v1.72 was trained too hard, so I reduced its overall strength. It should have better compatibility.
(3/22/2025): nbep11 v0.160.
Same stuff as in illus v1.72.
(3/15/2025): illus01 v1.72
Same new texture-and-lighting dataset as mentioned under ani40z v0.4 below. More natural lighting and natural textures.
Added a small ~100-image dataset for hand enhancement, focusing on hands performing different tasks, like holding a glass or a cup.
Removed all "simple background" images from the dataset (-200 images).
Switched the training tool from kohya to OneTrainer. Changed the LoRA architecture to DoRA.
(3/4/2025) ani40z v0.4
Trained on Animagine XL 4.0 ani40zero.
Added a ~1k-image dataset focusing on natural dynamic lighting and real-world textures.
More natural lighting and natural textures.
ani04 v0.1
Initial version for Animagine XL 4.0. Mainly fixes Animagine 4.0's brightness issues. Better and higher contrast.
illus01 v1.23
nbep11 v0.138
Added some furry/non-human/other images to balance the dataset.
nbep11 v0.129
A bad version; the effect is too weak. Just ignore it.
nbep11 v0.114
Implemented "full-range colors". It automatically balances things toward "normal and good-looking". Think of this as the one-click photo auto-enhance button found in most photo-editing tools. One downside of this optimization: it prevents strong bias, for example when you want 95% of the image to be black and 5% bright, instead of 50/50.
Added a little bit of realistic data. More vivid details and lighting; fewer flat colors.
illus01 v1.7
nbep11 v0.96
More training images.
Then finetuned again on a small "wallpaper" dataset (real game wallpapers, the highest quality I could find; ~100 images). More improvements in details (noticeable in skin and hair) and contrast.
nbep11 v0.58
More images. Changed the training parameters to be as close as possible to those of the NoobAI base model.
illus01 v1.3
nbep11 v0.30
More images.
nbep11 v0.11: Trained on NoobAI epsilon pred v1.1.
Improved dataset tags. Improved LoRA structure and weight distribution. Should be more stable and have less impact on image composition.
illus01 v1.1
Trained on illustriousXL v0.1.
nbep10 v0.10
Trained on NoobAI epsilon pred v1.0.