Stats | 1,068 3,411 520 |
Reviews | (166) |
Published | May 9, 2025 |
Usage Tips | Strength: 0.5 |
Hash | AutoV2 2537F78755 |
Just a personal fun coding project. Trained on a normal and natural dataset.
The dataset only has high-resolution images. Zero AI images. So you can get texture and details down to the pixel level. FYI, all cover images come directly from a1111: no upscaling, no inpainting. If you think they were upscaled or inpainted, then this LoRA did its job well.
Recent updates:
(5/7/2025) Note: 90% "illustrious" models are actually NoobAI and you should use NoobAI LoRA instead.
I was also surprised. I tested 13 popular models labeled as illustrious, using tags that only NoobAI knows (new characters from games released after illustrious was trained).
1 realistic model did not respond to those tags.
The other 12 models know those tags VERY well and output almost the same images as NoobAI does.
So they are actually NoobAI, or mainly NoobAI (>80% merged), and you should use the NoobAI LoRA with them. Reminder: there is a huge training gap of millions of images and steps between illustrious and NoobAI, so using an illus LoRA on NoobAI is doable but not ideal.
You can use the prompt "1girl,ellen joe,red eyes,upper body,masterpiece" to test your base model. "ellen joe" is a new character. If your base model is firmly based on illustrious, it will output a random person; otherwise, your base model is NoobAI.
I will also stop training the illus version.
(5/9/2025): nbep11 v0.205:
A quick fix for the brightness and color issues v0.198 had in dark environments. It should no longer change brightness and colors so dramatically. v0.198 wasn't bad, just creative, but too creative.
(5/7/2025): nbep11 v0.198:
Added more dark images. Note: this won't make your images darker, because it wasn't trained on high timesteps. If you want to "activate" this part of the dataset, prompt for something dark or use the Dark LoRA.
Removed the color and contrast enhancement, because it's not needed anymore. Use the Contrast Controller instead. See below.
(4/30/2025): New utility LoRAs in Stabilizer collection.
These two LoRAs are not produced by training, so they can do magical things normal LoRAs can't. For example, ZERO side effects on style (not an exaggeration, it's really mathematically zero).
Contrast Controller: now you can control contrast like a slider on your monitor. Unlike trained "contrast enhancers", the effect of this LoRA is stable, linear, and has zero side effects on style. No more low contrast in illustrious.
Style Strength Controller: the mathematically true stabilizer you have been asking for. Now you can reduce ALL overfitting effects (bias, etc.) with zero side effects on style.
Effect test on Hassaku XL: notice that the model no longer has a high-brightness bias and feels more natural at strength 0.25. (Strength < 0 amplifies the style; > 0 reduces it.)
Differences from the Stabilizer:
The Stabilizer affects style because it was trained on real-world data. It mainly reduces overfitting effects in texture, details, and backgrounds.
The Style Controller is not from training; it is more like an "undo" of the training. So it does not affect style, and it can reduce all overfitting effects, such as bias in brightness or objects.
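The "stable, linear, zero side effect" claim can be illustrated with a toy NumPy sketch. This is an assumption-level illustration, not the actual internals of these LoRAs: it just shows why a hand-crafted weight offset scaled by a user strength s behaves like a slider, with s = 0 reproducing the base model exactly.

```python
import numpy as np

# Toy illustration (NOT the real LoRA math): a "slider" LoRA adds a fixed,
# hand-crafted delta to a weight matrix, scaled by the chosen strength s.
# W'(s) = W + s * delta, so the effect is exactly linear in s, and s = 0
# leaves the base model untouched (zero side effect).
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))       # stands in for a base-model weight
delta = rng.standard_normal((4, 4))   # stands in for the crafted offset

def patched(W, delta, s):
    return W + s * delta

assert np.array_equal(patched(W, delta, 0.0), W)  # strength 0: bit-identical
half = patched(W, delta, 0.5) - W
full = patched(W, delta, 1.0) - W
assert np.allclose(2 * half, full)                # effect scales linearly in s
```

A trained LoRA's delta is entangled with style; a constructed delta like this can be chosen to touch only one property (e.g. contrast), which is the design idea the author describes.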
.........
See more in the update log section.
.........
(3/2/2025): You can find the REAL me at TensorArt now. I've reclaimed all my models that were duped and faked by other accounts.
Stabilizer
Just a personal fun coding project. Trained on ~2k normal and natural images.
The goal of this LoRA is just to make the image look better: better contrast, lighting, character details... It doesn't focus on a unique art style.
It has relatively weaker effects than other LoRAs and won't dramatically change the image composition.
Cover images are direct outputs from the vanilla (not finetuned) base model in a1111-sd-webui, with no inpaint fixes and not even a negative prompt. They demonstrate the effect of the LoRA; they are not clickbait.
Sharing merges that use this LoRA is prohibited. FYI, there are hidden trigger words that print an invisible watermark. It works well even at a merge strength of 0.05. I coded the watermark and detector myself. I don't want to use it, but I can.
Remember to leave feedback in the comment section so everyone can see it. Don't write feedback in the Civitai review system; it is so poorly designed that literally nobody can find and see the reviews.
Have fun.
How to use
Just apply it. No trigger words needed. It also does not patch the text encoder, so you don't have to set a patch strength for the text encoder (in ComfyUI, etc.).
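As an illustrative sketch (the key names follow the common kohya naming convention; the exact layout of this LoRA's file is an assumption), you can check whether a LoRA file patches the text encoder by looking for `lora_te` keys in its state dict. A UNet-only LoRA has none, so any text-encoder strength setting is a no-op:

```python
# Hypothetical state-dict keys in the common kohya naming convention:
# "lora_unet_*" keys patch the UNet, "lora_te*" keys would patch the
# text encoder. A UNet-only LoRA contains no "lora_te*" keys at all.
state_dict_keys = [
    "lora_unet_down_blocks_0_attentions_0_proj_in.lora_down.weight",
    "lora_unet_mid_block_attentions_0_proj_out.lora_up.weight",
]

def patches_text_encoder(keys):
    return any(k.startswith("lora_te") for k in keys)

print(patches_text_encoder(state_dict_keys))  # False: UNet-only
```

In practice you would read the keys from the `.safetensors` file instead of a hard-coded list; the check itself is the same.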
Version prefix:
illus01 = Trained on illustriousXL v0.1
nbep11 = Trained on NoobAI e-pred v1.1 (compatible with v-pred)
NOTE (5/8/2025): 90% of "illustrious" models now are actually NoobAI, and you should use the NoobAI LoRA instead. Because illustriousXL v0.1 was so poorly trained, most (90%) models labeled as "illustrious" are actually or mainly NoobAI. You can test your model with "1girl,ellen joe,red eyes,upper body,masterpiece". "ellen joe" is a new character. If your base model is firmly based on illustrious, it will output a random person; otherwise, your base model is NoobAI. Reminder: there is a huge training gap of millions of images and steps between illustrious and NoobAI, so using an illus LoRA on NoobAI is doable but not ideal.
Dataset
Only normal, good-looking things. No crazy art styles. Not small (~2k images).
Every image is hand-picked by me. No AI images, no watermarks, etc.
There are 2 main datasets in the latest version:
a 2D/anime dataset with ~1k images. Character-focused. Natural poses. Natural body proportions. No exaggerated art, chibi, JoJo poses, etc.
a real-world photograph dataset with ~1k images. Contains nature, indoors, animals, buildings... many things, except humans.
Why real-world images? You get better backgrounds, lighting, and pixel-level details/textures. There are no humans in this dataset, so it won't affect characters.
You can read more about this sub-dataset and why I added it here: Touching Grass. There is also a LoRA trained only on this photograph dataset, if you want something pure.
Older versions
New version == new stuff and new attempts != a better version for your base model.
You can check the "Update log" section to find old versions. It's OK to use different versions together, just like mixing base models, as long as the sum of their strengths does not exceed 1.
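The mixing rule above can be sketched as a tiny helper. This is purely illustrative: the 1.0 cap is the author's rule of thumb for stacking versions of this LoRA, not a hard limit enforced by any tool.

```python
# Author's rule of thumb: when stacking several versions of this LoRA,
# keep the sum of their strengths at or below 1.0 (like merging models).
def mix_ok(strengths, cap=1.0):
    return sum(strengths) <= cap

print(mix_ok([0.5, 0.4]))  # True  (0.9 <= 1.0)
print(mix_ok([0.7, 0.5]))  # False (1.2 > 1.0)
```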
Update log
(4/25/2025): nbep11 v0.172.
Same new things as in illus01 v1.93 ~ v1.121. Summary: new photograph dataset "Touching Grass". Better natural texture, background, and lighting. Weaker character effects for better compatibility.
Better color accuracy and stability (compared to nbep11 v0.160).
(4/17/2025): illus01 v1.121.
Rolled back to illustrious v0.1. illustrious v1.0 and newer versions were deliberately trained with AI images (maybe 30% of the dataset), which is not ideal for LoRA training. I didn't notice until I read its paper.
Lower character style effect, back to the v1.23 level. Characters will have less detail from this LoRA but should have better compatibility. This is a trade-off.
Other things are the same as below (v1.113).
(4/10/2025): illus11 v1.113.
Update: use this version only if you know your base model is based on Illustrious v1.1. Otherwise, use illus01 v1.121.
Trained on Illustrious v1.1.
New dataset "Touching Grass" added. Better natural texture, lighting and depth of field effect. Better background structural stability. Less deformed background, like deformed rooms, buildings.
Full natural language captions from LLM.
(3/30/2025): illus01 v1.93.
v1.72 was trained too hard, so I reduced its overall strength. It should have better compatibility.
(3/22/2025): nbep11 v0.160.
Same stuff as in illus01 v1.72.
(3/15/2025): illus01 v1.72
Same new texture and lighting dataset as mentioned in ani40z v0.4 below. More natural lighting and natural textures.
Added a small ~100-image dataset for hand enhancement, focusing on hand(s) doing different tasks, like holding a glass or a cup.
Removed all "simple background" images from the dataset (-200 images).
Switched training tool from kohya to onetrainer. Changed LoRA architecture to DoRA.
(3/4/2025) ani40z v0.4
Trained on Animagine XL 4.0 ani40zero.
Added ~1k dataset focusing on natural dynamic lighting and real world texture. More natural lighting and natural textures.
Above: added more real-world images. More natural texture and details.
ani04 v0.1
Initial version for Animagine XL 4.0. Mainly to fix Animagine 4.0's brightness issues. Better and higher contrast.
illus01 v1.23
nbep11 v0.138
Added some furry/non-human/other images to balance the dataset.
nbep11 v0.129
Bad version; the effect is too weak. Just ignore it.
nbep11 v0.114
Implemented "full range colors". It automatically balances things toward "normal and good looking". Think of this as the "one-click photo auto-enhance" button in most photo editing tools. One downside of this optimization: it prevents strong bias. For example, when you want 95% of the image to be black and 5% bright, it pushes toward 50/50.
Added a little realistic data. More vivid details and lighting, fewer flat colors.
illus01 v1.7
nbep11 v0.96
More training images.
Then finetuned again on a small "wallpaper" dataset (real wallpapers, the highest quality I could find; ~100 images). More improvements in details (noticeable in skin and hair) and contrast.
Above: Has a weak default style.
nbep11 v0.58
More images. Changed the training parameters to match the NoobAI base model as closely as possible.
illus01 v1.3
nbep11 v0.30
More images.
nbep11 v0.11: Trained on NoobAI epsilon pred v1.1.
Improved dataset tags. Improved LoRA structure and weight distribution. Should be more stable and have less impact on image composition.
illus01 v1.1
Trained on illustriousXL v0.1.
nbep10 v0.10
Trained on NoobAI epsilon pred v1.0.