Updated: Sep 6, 2025
Latest update:
(8/31/2025) The illus01 version is discontinued; use nbep10 instead.
The reason I no longer train on Illustrious is that almost all (~95%) popular "Illustrious" base models are actually based on (or close to) NoobAI, not Illustrious.
Read more (why those base models are mislabeled and how to test your base model) in "Discussion".
If you are using illus01, I highly recommend testing your base model.
If your base model is based on NoobAI, you should use the NoobAI LoRA.
(8/31/2025) NoobAI ep10 v0.273
This version is trained from the start on NoobAI eps v1.0.
Compared to the previous illus01 v1.198:
Better, more balanced brightness in extreme conditions. (Same as nbvp v0.271.)
Better textures and details. It has more training steps on high-SNR timesteps. (The illus01 versions skipped those timesteps for better compatibility; since all target base models are now NoobAI, there is no need to skip them.)
Stabilizer
This is not a style LoRA.
This is not a style LoRA.
This is not a style LoRA.
This is my finetuned base model.
This finetuned base model:
Should fix the problem of overfitting in pretrained anime base models. Now you can get natural textures, lighting, and details as they should be, and fewer deformed images. See the "Why and how this works" section below.
Still no default style (bias). The dataset is very diverse, so this model has no default style and keeps maximum creativity. When you stack more style tags/LoRAs, there is no style overlap/pollution from this model; you get the style exactly as it should be.
Zero AI images in the dataset. No smooth plastic AI style (no identical AI faces, no plastic hair, no overly smooth surfaces, etc.).
The main goal of this base model is to provide a good, clean starting point to stack more style tags in the prompt (or style LoRAs) and build the style combination you want, with full control.
But wait...this is a LoRA, not a base model.
That's right. I decided to train this "finetuned base model" as a LoRA. Because:
I'm not a gigachad and don't have millions of training images. Finetuning the whole base model is not necessary; a LoRA is enough.
I save tons of VRAM, so I can use a bigger batch size.
I only have to upload, and you only need to download, a tiny 40 MiB file instead of a big fat 7 GiB checkpoint, which saves ~99.4% of bandwidth and storage.
So I can spam updates. This LoRA may seem small, but it is still powerful, because it uses a newer architecture called DoRA (from Nvidia) that is more efficient than traditional LoRA.
Cover images are the raw output of the vanilla (pretrained) base model at the default 1 MP resolution. No upscaling, no plugins, no inpainting fixes. They include metadata and are 100% reproducible. Styles come from the pretrained base model, triggered by the prompt.
Sharing merges that use this model is prohibited. FYI, there are hidden trigger words that print an invisible watermark. I coded the watermark and the detector myself. I don't want to use them, but I can.
This model is only published on Civitai and TensorArt. If you see "me" and this sentence on other platforms, they are fake, and the platform you are using is a pirate platform.
Please leave feedback in the comment section so everyone can see it. Don't write feedback in the Civitai review system; it is so poorly designed that practically nobody can find and read the reviews.
How to use
Latest versions (the ones I'm currently using):
nbvp10 v0.271 (trained on NoobAI v-pred v1.0).
I personally highly recommend v-pred. v-pred does not need so many messy noise-schedule hacks, so all v-pred models (base models and LoRAs) share the same noise schedule, and you always get clean details and accurate colors and brightness when adding v-pred LoRAs. (If you run v-pred in diffusers, see the scheduler sketch after this list.)
Has my own color/brightness fix: better, balanced lighting in extreme conditions, less bias. High contrast; pure black (0) and pure white (255) in the same image, even in the same spot; no overflow or oversaturation. Now you can have all of them at once. No RescaleCFG needed.
nbep10 v0.273 (trained on NoobAI eps v1.0).
illus01 v1.198 (trained on Illustrious v0.1). Discontinued.
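If you run a v-pred model through diffusers, remember that the scheduler must be switched to v-prediction. A minimal sketch (the file name is a placeholder, and it assumes your diffusers version supports rescale_betas_zero_snr on this scheduler, which is what v-pred checkpoints like NoobAI v-pred typically expect):

```python
import torch
from diffusers import EulerDiscreteScheduler, StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_single_file(
    "noobai-xl-vpred-v1.0.safetensors",  # placeholder path
    torch_dtype=torch.float16,
).to("cuda")

# v-pred checkpoints need a v-prediction scheduler; zero-terminal-SNR
# rescaling matches how these models were trained.
pipe.scheduler = EulerDiscreteScheduler.from_config(
    pipe.scheduler.config,
    prediction_type="v_prediction",
    rescale_betas_zero_snr=True,
)
# Per the note above, with the v0.271 brightness fix you should not
# need guidance_rescale (RescaleCFG) on top of this.
```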
Note: load this LoRA first in your LoRA stack.
This LoRA uses a newer architecture called DoRA (from Nvidia), which is more efficient than traditional LoRA. However, unlike a traditional LoRA, whose patch weight is static, the patch weight of a DoRA is calculated from the currently loaded base model weights (which change as you load LoRAs). To minimize those changes, load this LoRA first.
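For the curious, a minimal sketch of the DoRA update rule (following the DoRA paper; tensor names here are illustrative), showing why the effective patch depends on the base weights it is applied to:

```python
import torch

def dora_effective_weight(W0, A, B, m):
    """DoRA re-parameterizes a layer as magnitude * direction:
    W' = m * (W0 + B @ A) / ||W0 + B @ A||  (column-wise norm).
    Unlike plain LoRA (W' = W0 + B @ A), the normalization depends
    on W0, so the patch W' - W0 changes if other LoRAs have already
    modified W0. Hence: load this DoRA first."""
    W = W0 + B @ A                          # low-rank update, as in LoRA
    col_norm = W.norm(dim=0, keepdim=True)  # per-column L2 norm
    return m * (W / col_norm)               # rescale by learned magnitudes m
```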
Two ways to use this model:
1). Use it as a finetuned base model:
Recommended; this is why I trained this model and what it was optimized for. Just apply this LoRA to the pretrained base model at full strength, and you get my finetuned base model (see the sketch after this block).
Reminder: this base model has no default style, which means you need to specify the style you want in the prompt or by loading style LoRAs. Otherwise you will get no style, or a random style.
Why no default style? If a model has a default style, then no matter what you prompt, the model must generate the same things (faces, backgrounds, moods) that make up that style. You cannot override it. If you prompt something that doesn't fit the default style, the model may simply ignore it. If you stack more styles, the default style will always overlap with and pollute them.
The goal of my base model is to provide a good, clean starting point to stack more style tags in the prompt (or style LoRAs) and build the style combination you want, with full control.
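For example, with the diffusers library (a sketch only: the file names are placeholders, and it assumes your diffusers/PEFT version can load this DoRA format):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Placeholder paths; substitute your actual checkpoint and LoRA files.
pipe = StableDiffusionXLPipeline.from_single_file(
    "noobai-xl-eps-v1.0.safetensors", torch_dtype=torch.float16
).to("cuda")

# Load the Stabilizer FIRST (it is a DoRA; see the note above), then
# bake it in at full strength to get the "finetuned base model".
pipe.load_lora_weights("stabilizer_nbep10_v0273.safetensors")
pipe.fuse_lora(lora_scale=1.0)
pipe.unload_lora_weights()  # the patch is now part of the base weights

# Now specify the style yourself: style tags in the prompt,
# and/or a separate style LoRA on top of this clean starting point.
pipe.load_lora_weights("your_style_lora.safetensors")
image = pipe("1girl, watercolor (medium), city street",
             num_inference_steps=28).images[0]
image.save("out.png")
```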
2). Use it as a LoRA on another base model.
Because why not, it's a LoRA after all.
But beware: this is not a style LoRA.
You are effectively merging two base models. Simply adding this LoRA to your base model usually won't give you what you expect. You may need to balance other weights (LoRAs, U-Net blocks, etc.), as in the sketch below.
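A hedged sketch of that balancing with diffusers adapter weights (the adapter names and numbers are illustrative starting points, not recommendations):

```python
# Continuing on a pipeline whose base model is NOT vanilla NoobAI:
pipe.load_lora_weights("stabilizer_nbep10_v0273.safetensors",
                       adapter_name="stabilizer")
pipe.load_lora_weights("your_style_lora.safetensors", adapter_name="style")

# Run the Stabilizer below full strength and rebalance the style LoRA;
# iterate on these weights until the merge behaves the way you want.
pipe.set_adapters(["stabilizer", "style"], adapter_weights=[0.6, 0.8])
```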
Things to note:
Important: you need to check which pretrained base model your base model is based on. This is the most important thing, because a LoRA needs to match its pretrained base model, and almost all popular "Illustrious" base models (95% of the time) are based on (or close to) NoobAI, not Illustrious. Read more (why those base models are mislabeled and how to test your base model) in "Discussion".
This model cannot add natural details to base models with AI styles (trained on AI images; everything feels smooth and shiny, has no texture, and looks like plastic). I know some of you chose this model because you want to get rid of the AI-style smoothness in your base model. Unfortunately it won't work, because the AI style is extremely overfitted (you can instantly recognize it). And because AI images lack detail compared to real-world images, the model also learns to suppress details. Once the AI style is in, it cannot be overridden.
This model is not a magical tool that lets you stack more LoRAs on a base model without breaking it. I know the name of this model can be misleading.
FAQ:
Effect is too weak:
Your base model has a very strong default style.
Mismatched base model. (Did you use the illus LoRA on a NoobAI base model?)
Effect is way too strong:
You can see the maximum effect (strength = 1) of this model in the cover images.
The effect of this model may not fit your style at high strength; that's normal. But if, at low strength (e.g. 0.5), the effect is still abnormally strong (way stronger than in the cover images) and it blows up your image, your base model may have already merged my model.
There are some base model "creators" who just grab people's works, merge them, wipe the merge metadata, and say "hi my model is sooooo good".
Old versions:
You can find more info in the "Update log". Beware that old versions may have very different effects.
Main timeline:
Now ~: Natural details and textures, stable prompt understanding and more creativity. Not limited to pure 2D anime style anymore.
"c" version (illus01 v1.152~1.185c): "c" stands for "colorful", "creative", sometimes "chaotic". This version contains training images that are very visually striking, e.g.: High contrast. Strong post-effect. Complex lighting condition. Objects, complex pattens everywhere. You will get "visually striking", but less "natural" images. It may affect styles that have soft colors.
Illus01 v1.23 / nbep11 0.138 ~: Better anime style with vivid colors.
Illus01 v1.3 / nbep11 0.58 ~: Better anime style.
Why and how this works:
The problem of overfitting:
Anime models are trained on anime images. Anime images are simple and only contain high-level "concepts", often very abstract. There are no backgrounds, details, or textures.
We want the model to learn only the high-level "concepts". In reality, the model learns what it sees, not what you want.
After seeing 10M+ simple, abstract anime images, the model learns that 1) it doesn't need to generate details, because you (the dataset) never told it to, and 2) it must instead generate simple images built from abstract concepts, even ones it does not understand. This leads to deformed images, a.k.a. "overfitting".
The solution:
Train the model on both anime and real-world images, so it can learn the concepts while still keeping natural details and textures in mind, i.e. less overfitting.
NoobAI did this by mixing some real cosplay images into its dataset. (IIRC, its devs mentioned this somewhere.)
This model goes further: it was trained on a little bit of everything. Architecture, everyday objects, clothing, landscapes, and so on. Also on full, multi-level, natural-language captions, to mimic the original SDXL training setup.
The result:
See with/without comparisons: 1 (artist styles), 2 (general styles)
Less overfitting, fewer deformed images. More natural textures, lighting, and details. Now you can use thousands of built-in style tags (Danbooru and e621 tags), as well as the general styles the original SDXL understands, and get a clean, detailed image as it should be, whether 2D or 3D, abstract or realistic.
Still maximum creativity. Because of the diverse dataset, there is no default style, so this model does not limit the creativity of the pretrained model or of other style LoRAs.
Dataset
(For the latest / recent versions.)
~7k images total. Not that big (compared to gigachads who love to finetune models with millions of images), but not small. And every image is hand-picked by me.
Only normal, good-looking things. No crazy art styles that cannot be described. No AI images, no watermarks, etc.
Only high-resolution images. The dataset averages 3.37 MP per image, ~1800x1800.
All images have natural-language captions from Google's latest LLM.
All anime characters are tagged by wd tagger v3 first, then by the Google LLM.
Contains nature, outdoors, indoors, animals, everyday objects, many things, but no real humans.
Contains all kinds of brightness conditions: very dark, very bright, and both very dark and very bright at once.
Other tools
Some ideas that were going to be, or used to be, part of the Stabilizer. They are now separate LoRAs, for better flexibility. Collection link: https://civitai.com/collections/8274233.
Touching Grass: A LoRA trained on, and only on, the real-world dataset (no anime data). Has a stronger effect. Better backgrounds and lighting. Useful for gigachad users who like pure concepts and like to balance weights themselves.
Dark: A LoRA that can fix the high-brightness bias in some base models. Trained on the low-brightness images in the Touching Grass dataset. Also, there are no humans in its dataset, so it does not affect style.
Contrast Controller: A handcrafted LoRA. (No joke, it did not come from training.) The smallest, 300 KB LoRA you have ever seen. It controls contrast like the slider on your monitor. Unlike trained "contrast enhancers", its effect is stable, mathematically linear, and has zero side effects on style.
Useful when your base model has an oversaturation issue, or when you want something really colorful.
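As a rough pixel-space analogy of what "mathematically linear" means here (the LoRA itself acts inside the model, not on pixels):

```python
import numpy as np

def contrast_slider(img, s):
    """Linear contrast control: s = 0 is identity, s > 0 boosts
    contrast, s < 0 reduces it. The adjustment is linear in s,
    so the effect is stable, predictable, and touches nothing else."""
    x = img.astype(np.float32) / 255.0
    out = (x - 0.5) * (1.0 + s) + 0.5     # scale distances from mid-gray
    return (np.clip(out, 0.0, 1.0) * 255.0).astype(np.uint8)
```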
Style Strength Controller: Or: an overfitting-effect reducer. Also a handcrafted LoRA, not from training, so it has zero side effects on style and a mathematically linear effect. Can reduce all kinds of overfitting effects (biases toward certain objects, brightness, etc.).
Effect test on Hassaku XL: the base model has many biases, e.g. high brightness, smooth shiny surfaces, prints on walls... The prompt contains the keyword "dark", but the model almost ignores it. Notice that at strength 0.25 there is less high-brightness bias and less of the weird smooth feeling on every surface; the image feels more natural.
Differences from the Stabilizer:
The Stabilizer was trained on real-world data. It can only "reduce" overfitting effects related to textures, details, and backgrounds, by adding them back.
The Style Strength Controller did not come from training. It is more like "undoing" the training of the base model, so the model becomes less overfitted. It can mathematically reduce all overfitting effects, like biases in brightness or objects (see the sketch below).
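One common way to approximate this kind of "undo" (my reading of the idea; not necessarily how this LoRA is actually built) is to move the finetuned weights back toward the pretrained ones, which is exactly linear in the strength s:

```python
import torch

def undo_finetune(w_finetuned, w_pretrained, s):
    """s = 0 keeps the finetuned model unchanged; s = 1 restores the
    pretrained weights; values in between linearly scale down everything
    the finetune added, overfitting effects included."""
    return {k: w + s * (w_pretrained[k] - w)
            for k, w in w_finetuned.items()}

# Toy demo with state dicts of tensors:
base = {"layer.weight": torch.zeros(2, 2)}
tuned = {"layer.weight": torch.ones(2, 2)}
print(undo_finetune(tuned, base, 0.25))  # 0.75 everywhere: 25% "undone"
```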
Update log
(8/24/2025) NoobAI v-pred v0.271:
New method; I call it "brightness calibrated".
TL;DR: compared to the previous v0.264, this version has much better, more balanced lighting in extreme conditions, with less bias.
High contrast; pure black (0) and pure white (255) in the same image, even in the same spot; no overflow or oversaturation. Now you can have all of them at once.
(The old v0.264 tries to cap the image between ~10 and ~250 to avoid overflow, and still has a noticeable bias issue; the overall image may be too dark or too bright.)
As with v0.264, prefer high or full strength (0.9~1).
(8/17/2025) NoobAI v-pred v0.264:
First version trained on NoobAI v-pred.
It gives you better lighting and less overflow.
Note: prefer high or full strength (0.9~1).
(7/28/2025) illus01 v1.198
Mainly compared to v1.185c:
End of the "c" versions. "Visually striking" is good, but it has compatibility issues, e.g. when your base model already has a similar contrast enhancement; stacking two contrast enhancements is really bad. So: no more crazy post-effects (high contrast and saturation, etc.).
Instead: more textures and details, cinematic lighting, and better compatibility.
This version changed lots of things, including a dataset overhaul, so the effect will be quite different from previous versions.
For those who want the crazy v1.185c effects back: you can find pure, dedicated art styles on this page. If a dataset is big enough for a LoRA, I may train one.
(6/21/2025) illus01 v1.185c:
Compared to v1.165c:
+100% clarity and sharpness. You can get lines one pixel wide. You can even get the texture of white paper. (No joke; realistic paper is not pure white, it has noise.) A 1 MP image now feels like 2K.
-30% images that are too chaotic (cannot be described properly). So you may find that this version can't give you crazy-high contrast anymore, but it should be more stable in normal use cases.
(6/10/2025): illus01 v1.165c
This is a special version, not an improvement of v1.164. "c" stands for "colorful", "creative", and sometimes "chaotic".
The dataset contains images that are very visually striking but sometimes hard to describe, e.g.: very colorful, high contrast, complex lighting conditions, objects and complex patterns everywhere.
So you get "visually striking", but at the cost of "natural". It may affect styles with soft colors, etc. E.g. this version cannot generate a "pencil art" texture as perfectly as v1.164.
(6/4/2025): illus01 v1.164
Better prompt understanding. Each image now has 3 natural captions from different perspectives. Danbooru tags are checked by an LLM; only the important tags are picked out and fused into the natural caption.
Anti-overexposure. Added a bias to prevent the model's output from reaching pure white (#ffffff). Most of the time #ffffff == overexposed, which loses many details.
Changed some training settings to make it more compatible with NoobAI, both eps-pred and v-pred.
(5/19/2025): illus01 v1.152
Continued improving lighting, textures, and details.
5K more images and more training steps; as a result, a stronger effect.
(5/9/2025): nbep11 v0.205:
A quick fix for the brightness and color issues in v0.198. Now it should not change brightness and colors as dramatically as a real photograph would. v0.198 isn't bad, just creative... too creative.
(5/7/2025): nbep11 v0.198:
Added more dark images. Fewer deformed bodies and backgrounds in dark environments.
Removed the color and contrast enhancement, because it's no longer needed. Use the Contrast Controller instead.
(4/25/2025): nbep11 v0.172.
Same new things as in illus01 v1.93~v1.121. Summary: new photographic dataset "Touching Grass"; better natural textures, backgrounds, and lighting; weaker character effects for better compatibility.
Better color accuracy and stability (compared to nbep11 v0.160).
(4/17/2025): illus01 v1.121.
Rolled back to Illustrious v0.1. Illustrious v1.0 and newer versions were deliberately trained with AI images (maybe 30% of the dataset), which is not ideal for LoRA training. I didn't notice until I read its paper.
Lower character-style effect, back to the v1.23 level. Characters get fewer details from this LoRA but should have better compatibility. This is a trade-off.
Everything else is the same as below (v1.113).
(4/10/2025): illus11 v1.113 ❌.
Update: use this version only if you know your base model is based on Illustrious v1.1. Otherwise, use illus01 v1.121.
Trained on Illustrious v1.1.
New dataset "Touching Grass" added. Better natural textures, lighting, and depth-of-field effects. Better background structural stability; fewer deformed backgrounds, like deformed rooms and buildings.
Full natural-language captions from an LLM.
(3/30/2025): illus01 v1.93.
v1.72 was trained too hard, so I reduced its overall strength. Should have better compatibility.
(3/22/2025): nbep11 v0.160.
Same stuff as in illus01 v1.72.
(3/15/2025): illus01 v1.72
Same new texture and lighting dataset as mentioned in ani40z v0.4 below. More natural lighting and natural textures.
Added a small ~100-image dataset for hand enhancement, focusing on hands doing different tasks, like holding a glass or a cup.
Removed all "simple background" images from the dataset (-200 images).
Switched the training tool from kohya to OneTrainer. Changed the LoRA architecture to DoRA.
(3/4/2025) ani40z v0.4
Trained on Animagine XL 4.0 ani40zero.
Added a ~1k-image dataset focusing on natural dynamic lighting and real-world textures.
More natural lighting and textures.
ani04 v0.1
Initial version for Animagine XL 4.0. Mainly fixes Animagine 4.0's brightness issues. Better, higher contrast.
illus01 v1.23
nbep11 v0.138
Added some furry/non-human/other images to balance the dataset.
nbep11 v0.129
Bad version; the effect is too weak. Just ignore it.
nbep11 v0.114
Implemented "Full range colors". It will automatically balance the things towards "normal and good looking". Think of this as the "one-click photo auto enhance" button in most of photo editing tools. One downside of this optimization: It prevents high bias. For example, you want 95% of the image to be black, and 5% bright, instead of 50/50%
Added a little bit realistic data. More vivid details, lighting, less flat colors.
illus01 v1.7
nbep11 v0.96
More training images.
Then finetuned again on a small "wallpaper" dataset (real game wallpapers, the highest quality I could find; ~100 images). More improvements in details (noticeable in skin and hair) and contrast.
nbep11 v0.58
More images. Changed the training parameters to be as close as possible to those of the NoobAI base model.
illus01 v1.3
nbep11 v0.30
More images.
nbep11 v0.11: Trained on NoobAI epsilon pred v1.1.
Improved dataset tags. Improved LoRA structure and weight distribution. Should be more stable and have less impact on image composition.
illus01 v1.1
Trained on illustriousXL v0.1.
nbep10 v0.10
Trained on NoobAI epsilon pred v1.0.