Type | LoRA
Stats | 965 · 5,408 · 1.2k
Reviews | 181
Published | Apr 15, 2025
Base Model | Illustrious
Usage Tips | Strength: 0.5
Hash | AutoV2 75CF389D6C
Project: Touching Grass
The best LoRA for pixel-level grass details.
Trained on real-world, high-resolution photographs.
Why does this LoRA exist?
To bring real-world texture, lighting, and backgrounds to anime models.
What is the difference between this and the Stabilizer LoRA?
Stabilizer has very weak effects, because its rule is "don't break things," so it may have no effect on heavily overfitted models. Stabilizer also includes an extra anime dataset to make anime characters look better.
This LoRA is trained much harder and has much stronger effects, so it can "overwrite" even those heavily overfitted models if you want. It uses 100% real-world images.
What's this LoRA?
This LoRA is for users who like raw, pure things and prefer to balance the weights themselves.
It is trained on a sub-dataset of the Stabilizer LoRA.
This sub-dataset:
Contains ~1K real-world, high-quality photographs of objects and environments. No humans, so it will not "pollute" your characters. It can be used on both anime and realistic models.
Is very diverse and creative.
Has natural-language captions from an LLM, mainly because WD tagger v3 is really bad at real-world images, and also because natural captions have a more diverse vocabulary and help avoid overfitting (a stand-in captioning sketch follows this list).
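The post doesn't name the captioning model, so the following is only a minimal sketch of the natural-captioning idea, using BLIP from Hugging Face transformers as a stand-in; the model name and dataset path are assumptions, not part of the original workflow.

```python
# Stand-in sketch for natural-language captioning of a photo dataset.
# BLIP is an assumption here; the original post does not say which model was used.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large")

image = Image.open("dataset/0001.jpg").convert("RGB")  # hypothetical dataset path
inputs = processor(image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=60)
print(processor.decode(out[0], skip_special_tokens=True))
```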
What's the effect?
It really depends on your base model. Here is a quick comparison on WAI v13, with and without the LoRA (a seed-locked A/B sketch follows this list).
Pixel-level natural details: a so-called "detailer". But instead of training on AI images to amplify fake details from noise and generate more fake objects, this LoRA focuses on natural texture, with less of that flat, smooth feeling. Notice the food, the clothes, the light reflection on the table, the depth of field, and the blurry background.
Significantly improves background structural stability for anime models. Anime datasets don't contain much background knowledge; most images are just "simple background". Even the ones that do have some kind of background may be abstract art lacking proper tags, so the base model forgets backgrounds or learns weird things during training. This LoRA was trained on tons of background/environment images with strong structural features.
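A minimal sketch of how such a with/without comparison can be reproduced, assuming diffusers and local checkpoint files (WAI v13 is an SDXL-family model); the file names here are placeholders, not official paths:

```python
# Seed-locked A/B comparison: render the same prompt with and without the LoRA.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_single_file(
    "wai-v13.safetensors", torch_dtype=torch.float16  # hypothetical local path
).to("cuda")

prompt = "1girl, cafe interior, food on table, window light"

def render(tag, **extra):
    gen = torch.Generator("cuda").manual_seed(42)  # identical seed for both runs
    image = pipe(prompt, generator=gen, guidance_scale=5.0, **extra).images[0]
    image.save(f"{tag}.png")

render("without")                                          # baseline, no LoRA
pipe.load_lora_weights("touching_grass_v0.2.safetensors")  # hypothetical file name
render("with", cross_attention_kwargs={"scale": 0.5})      # recommended strength
```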
How to use?
Trained on Illustrious v0.1, but it also works on NoobAI.
No trigger word is needed.
You don't have to set a patch strength for the text encoder; this LoRA does not patch it.
Lower your CFG scale by about 30% for better details (see the loading sketch after this list).
Sharing merges that use this LoRA is prohibited. FYI, there are hidden trigger words that print an invisible watermark; it works well even at a merge strength of 0.05. I coded the watermark and the detector myself. I don't want to use it, but I can.
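A minimal usage sketch with the recommended settings, assuming an SDXL-style Illustrious checkpoint on disk and diffusers; the file names and the prompt are assumptions:

```python
# Minimal usage sketch; checkpoint and LoRA file names are placeholders.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_single_file(
    "illustrious-v0.1.safetensors", torch_dtype=torch.float16
).to("cuda")

# No trigger word needed, and no text-encoder patch: the LoRA only touches
# the UNet, so a single strength value is all there is to set.
pipe.load_lora_weights("touching_grass_v0.2.safetensors", adapter_name="grass")
pipe.set_adapters(["grass"], adapter_weights=[0.5])  # recommended strength 0.5

image = pipe(
    "1girl, picnic on grass, outdoors, dappled sunlight",
    guidance_scale=5.0,  # e.g. a usual 7.0 lowered by roughly 30%
).images[0]
image.save("touching_grass.png")
```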
Update log
(4/15/2025) v0.2:
+30% images: a bug in v0.1 caused all AVIF files, about 30% of the dataset, to be skipped. lol.
Changed some parameters for a stronger, cleaner, and more stable effect.
(4/02/2025) v0.1: initial release.