Stats | 1,491 12,055 2.5k
Reviews | (248)
Published | May 20, 2025
Usage Tips | Strength: 0.3
Hash | AutoV2 CCEFA8AA0E
Trained on a normal and natural dataset, for natural lighting, textures, details...
The dataset contains only high-resolution images and zero AI images, so you can get texture and detail beyond the pixel level, instead of a weird smooth plastic feeling.
FYI, all cover images come directly from the vanilla base model and a1111: no upscaling, no inpainting. You can drop the images into a1111 to reproduce them yourself; they have metadata.
Sharing merges using this LoRA, or re-uploading it to other platforms, is prohibited.
Recent updates:
(5/19/2025): illus v1.152
5K more photographs. Still contains everything, except humans.
Covers as many lighting conditions as possible. All images now have natural captions from Google's latest LLM. All lighting conditions (brightness, color temperature, etc.) are properly tagged. All anime characters are tagged by wd tagger v3 and Google's LLM.
More data means more training steps and, as a result, a stronger effect.
Now it can handle complex lighting environments.
RTX ON
FAQ:
If you want 100% of the texture effect, avoid ANY AI style (i.e. styles trained on AI images), which covers literally 90% of popular base models. What AI styles do is eliminate all textures, because AI images don't have them. An AI style can amplify noise to generate more fake objects, but it can never generate textures. FYI, the cover images are from the vanilla base model, and I only use vanilla models.
How do you know whether something is an AI style? There is no good method. Personally, I look at the hair: the more plastic it feels, the more AI style it probably has.
If you get realistic faces on anime characters, don't blame this LoRA; it has zero knowledge of realistic faces. Check whether your base model was merged with a realistic model.
When Illustrious v1.0 or v2.0? Never, they are untrainable.
When NoobAI v-pred? Maybe also never. I'm not a fan of v-pred; I believe e-pred can do the same things, but more stably.
.........
See more in the update log section.
.........
(3/2/2025): You can find the REAL me at TensorArt now. I've reclaimed all my models that were duped and faked by other accounts.
Stabilizer
Just a personal fun coding project. Trained on ~2k normal and natural images.
The goal of this LoRA is simply to make the image look better: better contrast, lighting, character details... It doesn't aim for a unique art style.
It has relatively weaker effects than other LoRAs and won't dramatically change the image composition.
Cover images are the direct outputs of the vanilla (not finetuned) base model in a1111-sd-webui, with no inpaint fixes and not even a negative prompt. They demonstrate the effect of the LoRA; they are not clickbait.
Sharing merges using this LoRA is prohibited. FYI, there are hidden trigger words that print an invisible watermark, and it works even at a merge strength of 0.05. I coded the watermark and detector myself. I don't want to use it, but I can.
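The watermark implementation itself isn't published, so as a loose illustration only: one classic way to hide an invisible mark in an image is least-significant-bit (LSB) embedding, where the payload lives in pixel bits the eye can't see. A toy sketch (my own example, not the author's actual method):

```python
import numpy as np

def embed_bits(img, bits):
    """Hide a bit string in the least significant bits of the first pixels."""
    flat = img.flatten()  # flatten() returns a copy, so img is untouched
    flat[: len(bits)] = (flat[: len(bits)] & 0xFE) | np.array(bits, dtype=flat.dtype)
    return flat.reshape(img.shape)

def extract_bits(img, n):
    """Read the first n hidden bits back out."""
    return [int(b) for b in img.flatten()[:n] & 1]

img = np.full((4, 4), 128, dtype=np.uint8)  # flat gray toy "image"
secret = [1, 0, 1, 1, 0, 0, 1, 0]
marked = embed_bits(img, secret)

assert extract_bits(marked, 8) == secret
# Invisible: each pixel changes by at most 1/255.
assert int(np.abs(marked.astype(int) - img.astype(int)).max()) <= 1
```

A real detector for a trigger-word watermark baked into a model would be more involved, but the embed/detect pairing is the same idea.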
Remember to leave feedback in the comment section, so everyone can see it. Don't write feedback in the Civitai review system; it is so poorly designed that practically nobody can find and read the reviews.
Have fun.
How to use
Just apply it. No trigger words needed. It also does not patch the text encoder, so you don't have to set a patch strength for the text encoder (in ComfyUI, etc.).
Version prefix:
illus01 = Trained on Illustrious v0.1.
nbep11 = Trained on NoobAI e-pred v1.1.
Which version to use?
Hard to tell. You should try both versions. Models nowadays are just merges of merges of merges; you can never know what's truly inside your base model.
You can also use both at a low strength each; many users report that this gives better results on some models.
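Mixing versions at low strengths works because a LoRA's weight update is additive: each version contributes a scaled low-rank delta on top of the base weights. A toy NumPy sketch of that arithmetic (illustrative shapes and names, not the actual model code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Base weight matrix of one layer (toy size).
W0 = rng.normal(size=(16, 16))

def lora_delta(rank=4):
    """A LoRA stores a low-rank update delta = B @ A (rank 4 here)."""
    B = rng.normal(size=(16, rank))
    A = rng.normal(size=(rank, 16))
    return B @ A

delta_illus01 = lora_delta()  # stands in for the illus01 version
delta_nbep11 = lora_delta()   # stands in for the nbep11 version

# Applying both versions at low strength each: their scaled deltas simply add.
s1, s2 = 0.4, 0.3
W = W0 + s1 * delta_illus01 + s2 * delta_nbep11
```

Because the deltas add, two versions at 0.4 and 0.3 behave like one combined update, which is why keeping the total strength moderate matters.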
Dataset
Only normal, good-looking things. No crazy art styles. Every image is hand-picked by me. No AI images, no watermarks, etc.
There are two main datasets in the latest version:
a 2D/anime dataset with ~1k images. Character-focused. Natural poses. Natural body proportions. No exaggerated art, chibi, JoJo poses, etc.
a real-world photograph dataset with ~1k images. Contains nature, indoor scenes, animals, buildings... many things, except humans.
Why real-world images? You get better backgrounds, lighting, and pixel-level details/textures. There are no humans in this dataset, so it won't affect characters.
I named the dataset Touching Grass. There is also a LoRA trained only on this photograph dataset, if you want something pure.
But I got realistic faces on my anime characters.
Well, don't blame this LoRA; it has zero knowledge of realistic faces. Most likely your base model was mixed with realistic models.
Other tools
Some ideas that were going to be, or used to be, part of the Stabilizer. They are now separate LoRAs, for better flexibility. Collection link: https://civitai.com/collections/8274233.
Touching Grass: Trained on, and only on, the photograph dataset (no anime dataset). Has a stronger effect. Useful for gigachad users who like pure concepts and balancing weights themselves.
Dark: Lowers the brightness, adds more details and lighting effects. Trained on the low-brightness images in the Touching Grass dataset. Again, no humans in the dataset, so it does not affect style.
Example on WAI v13.
Contrast Controller: Controls the contrast like a slider on your monitor. Unlike trained "contrast enhancers", the effect of this LoRA is stable, linear, and has zero side effects on style. (Not an exaggeration: it is mathematically zero and linear, because it was not produced by training.) Example on WAI v13.
Style Strength Controller: Or overfitting-effect reducer. Also not produced by training, so it has zero side effects on style and mathematically linear effects. Can reduce all kinds of overfitting effects (bias toward certain objects, brightness, etc.).
Effect test on Hassaku XL: the prompt contains the keyword "dark", but the model almost ignores it. Notice that at strength 0.25 this LoRA reduces both the bias toward high brightness and the weird smooth feeling on every surface, so the image feels more natural.
Differences from the Stabilizer:
The Stabilizer affects style, because it was trained on real-world data. It can "reduce" overfitting effects on texture, details, and backgrounds by adding them back.
The Style Strength Controller is not from training. It is more like "undoing" the training of the base model, so the model becomes less overfitted. It does not affect style, and it can reduce all overfitting effects, such as bias in brightness or objects.
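The "mathematically linear, zero side effect" claim is plausible for a hand-constructed (not trained) weight offset: for a linear layer, the output change is exactly proportional to the applied strength. A toy NumPy check under that assumption (made-up shapes, not the controller's real weights):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(8, 8))      # base layer weight
delta = rng.normal(size=(8, 8))  # a fixed, hand-constructed weight offset
x = rng.normal(size=8)           # some activation vector

def effect(s):
    # Change in the layer's output when the offset is applied at strength s.
    return (W + s * delta) @ x - W @ x

# The change is exactly s * (delta @ x): doubling the strength doubles the
# effect, and a negative strength exactly reverses it.
assert np.allclose(effect(0.6), 2 * effect(0.3))
assert np.allclose(effect(-0.3), -effect(0.3))
```

A trained LoRA gives no such guarantee, because training entangles the delta with style features; a constructed offset is linear by definition.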
Older versions
New version == new stuff and new attempts != a better version for your base model.
You can check the "Update log" section to find old versions. It's OK to use different versions together, just like mixing base models, as long as the sum of the strengths does not exceed 1.
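If you want to enforce the sum-of-strengths rule automatically, a tiny helper (my own convenience sketch, not part of any LoRA tooling) can rescale a mix proportionally:

```python
def clamp_mix(strengths, cap=1.0):
    """Scale per-version strengths down proportionally so their sum <= cap."""
    total = sum(strengths)
    if total <= cap:
        return list(strengths)
    return [s * cap / total for s in strengths]

print(clamp_mix([0.4, 0.3]))  # under the cap, returned unchanged
print(clamp_mix([0.8, 0.4]))  # over the cap, rescaled so the sum is 1.0
```

Proportional rescaling keeps the relative balance between versions while respecting the total-strength rule of thumb.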
Update log
(5/9/2025): nbep11 v0.205:
A quick fix for the brightness and color issues in v0.198. Now it should not change the brightness and colors as dramatically as a real photograph. v0.198 isn't bad, just creative, but too creative.
(5/7/2025): nbep11 v0.198:
Added more dark images. Fewer deformed bodies and backgrounds in dark environments.
Removed the color and contrast enhancement, because it is no longer needed. Use the Contrast Controller instead.
(4/25/2025): nbep11 v0.172.
The same new things as in illus01 v1.93 ~ v1.121. Summary: new photographs dataset "Touching Grass"; better natural texture, background, and lighting; weaker character effects for better compatibility.
Better color accuracy and stability (compared to nbep11 v0.160).
(4/17/2025): illus01 v1.121.
Rolled back to Illustrious v0.1. Illustrious v1.0 and newer versions were deliberately trained with AI images (maybe 30% of the dataset), which is not ideal for LoRA training. I didn't notice until I read its paper.
Lower character style effect, back to the v1.23 level. Characters will get fewer details from this LoRA but should have better compatibility. This is a trade-off.
Everything else is the same as below (v1.113).
(4/10/2025): illus11 v1.113.
Update: use this version only if you know your base model is based on Illustrious v1.1. Otherwise, use illus01 v1.121.
Trained on Illustrious v1.1.
New dataset "Touching Grass" added. Better natural texture, lighting, and depth-of-field effects. Better background structural stability: fewer deformed backgrounds, such as deformed rooms and buildings.
Full natural language captions from LLM.
(3/30/2025): illus01 v1.93.
v1.72 was trained too hard, so I reduced its overall strength. It should have better compatibility.
(3/22/2025): nbep11 v0.160.
The same stuff as in illus01 v1.72.
(3/15/2025): illus01 v1.72
Same new texture and lighting dataset as mentioned in ani40z v0.4 below. More natural lighting and natural textures.
Added a small ~100-image dataset for hand enhancement, focusing on hands performing different tasks, like holding a glass or a cup.
Removed all "simple background" images from dataset. -200 images.
Switched the training tool from kohya to OneTrainer. Changed the LoRA architecture to DoRA.
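For context, DoRA (Weight-Decomposed Low-Rank Adaptation) differs from plain LoRA by splitting each weight matrix into a per-column magnitude and a direction: the low-rank update steers the direction, while a separately learned magnitude vector rescales each column. A minimal NumPy sketch of the merged-weight formula (toy shapes; not OneTrainer's actual code):

```python
import numpy as np

rng = np.random.default_rng(2)
W0 = rng.normal(size=(8, 8))    # pretrained weight
B = rng.normal(size=(8, 2))     # low-rank factors, rank 2
A = rng.normal(size=(2, 8))
m = np.linalg.norm(W0, axis=0)  # magnitude vector, initialized from column norms

# DoRA merge: direction = (W0 + B @ A) normalized per column, rescaled by m.
V = W0 + B @ A
W = m * (V / np.linalg.norm(V, axis=0))

# Each column of the merged weight has exactly the learned magnitude.
assert np.allclose(np.linalg.norm(W, axis=0), m)
```

Decoupling magnitude from direction is the reason DoRA is often reported to track full finetuning more closely than plain LoRA at the same rank.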
(3/4/2025) ani40z v0.4
Trained on Animagine XL 4.0 ani40zero.
Added ~1k dataset focusing on natural dynamic lighting and real world texture. More natural lighting and natural textures.
Above: Added more real-world images. More natural texture and details.
ani04 v0.1
Initial version for Animagine XL 4.0. Mainly fixes Animagine 4.0's brightness issues. Better and higher contrast.
illus01 v1.23
nbep11 v0.138
Added some furry/non-human/other images to balance the dataset.
nbep11 v0.129
Bad version; the effect is too weak. Just ignore it.
nbep11 v0.114
Implemented "full range colors". It automatically balances things toward "normal and good-looking". Think of it as the one-click photo auto-enhance button found in most photo-editing tools. One downside of this optimization: it prevents strong bias. For example, you may want 95% of the image to be black and 5% bright, instead of 50/50.
Added a little realistic data. More vivid details and lighting, fewer flat colors.
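"Full range colors" sounds analogous to the auto-levels step in photo editors, which linearly stretches pixel values so the darkest pixel maps to 0 and the brightest to 255. A toy version of that operation (an analogy for the described behavior, not how the LoRA works internally):

```python
import numpy as np

def auto_levels(img):
    """Stretch pixel values linearly so they span the full 0-255 range."""
    lo, hi = img.min(), img.max()
    if hi == lo:
        return img.copy()  # flat image: nothing to stretch
    return ((img.astype(float) - lo) / (hi - lo) * 255).astype(np.uint8)

dull = np.array([[60, 100], [140, 180]], dtype=np.uint8)  # low-contrast toy image
out = auto_levels(dull)
assert out.min() == 0 and out.max() == 255
```

Note the same downside described above: the stretch always pulls the values toward a balanced full range, so a deliberately 95%-dark composition gets "corrected" whether you wanted it or not.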
illus01 v1.7
nbep11 v0.96
More training images.
Then finetuned again on a small "wallpaper" dataset (real wallpapers, the highest quality I could find; ~100 images). More improvements in details (noticeable in skin and hair) and contrast.
Above: Has a weak default style.
nbep11 v0.58
More images. Changed the training parameters to match the NoobAI base model as closely as possible.
illus01 v1.3
nbep11 v0.30
More images.
nbep11 v0.11: Trained on NoobAI epsilon pred v1.1.
Improved dataset tags. Improved LoRA structure and weight distribution. Should be more stable and have less impact on image composition.
illus01 v1.1
Trained on IllustriousXL v0.1.
nbep10 v0.10
Trained on NoobAI epsilon pred v1.0.