Updated: Feb 13, 2026
Base model: RDBT [Anima]
Note: Experimental, but works.
"General" finetuned + CFG distilled circlestone-labs/Anima.
With natural language captions from Gemini. But still contains danbooru tags.
Why a LoRA? Training a LoRA saves me 8 GiB of VRAM, and it saves you ~98% in storage and download size.
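The ~98% figure follows from the LoRA construction: a rank-r adapter stores two thin matrices per layer instead of the full weight matrix. A rough sketch of the arithmetic (the layer width and rank below are illustrative assumptions, not the actual training config):

```python
# Illustrative parameter count for one linear layer (assumed sizes,
# not the real Anima architecture).
d_in, d_out = 4096, 4096   # hypothetical layer width
rank = 32                  # hypothetical LoRA rank

full = d_in * d_out              # a full finetune ships the whole matrix
lora = rank * (d_in + d_out)     # LoRA ships A (r x d_in) and B (d_out x r)

saving = 1 - lora / full
print(f"full: {full:,}  lora: {lora:,}  saving: {saving:.1%}")
```

At these assumed sizes the adapter is 1/64 the size of the full layer, i.e. a ~98% saving; the exact ratio depends on the real rank and layer widths.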
Usage
Don't expect a single straightforward effect; the dataset is not small, so results vary with the prompt.
You need to specify styles in your prompt.
If CFG distilled:
Use CFG scale = 1. Prefer the euler a and euler samplers.
Since no forward pass is needed for the negative prompt, generation is about 2x faster (almost as fast as SDXL).
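The 2x figure comes from pass counting: ordinary CFG runs the model twice per sampling step (once for the prompt, once for the negative prompt), while a CFG-distilled model at scale 1 runs once. A quick sketch, with a hypothetical step count:

```python
steps = 28  # hypothetical sampler step count

# Regular CFG: one forward pass for the prompt and one for the
# negative prompt at every step.
cfg_passes = 2 * steps

# CFG-distilled (CFG scale = 1): the negative-prompt pass is skipped.
distilled_passes = 1 * steps

speedup = cfg_passes / distilled_passes
print(cfg_passes, distilled_passes, speedup)  # speedup = 2.0
```

This only counts UNet/DiT forward passes; text encoding and VAE decoding are unaffected, which is why the observed speedup is "about" 2x rather than exactly 2x.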
Versions
[base model version] [dataset version] [other flags]
Abbr:
p = Anima preview
f = finetuned
d = CFG distilled
(2/12/2026) p v0.6d: CFG distillation on the Anima preview, with no finetuning. You can use it with any checkpoint based on the preview; e.g. the cover images use Animeyume v0.1.
(2/3/2026) p v0.2fd: finetuning + CFG distillation. A speedrun attempt, mainly for testing the training script, with a limited dataset: only "1 person" images plus a little "furry". It works, though, and far better than I expected.
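The naming scheme above can be sketched as a tiny parser (a hypothetical helper for illustration, not an official tool):

```python
import re

def parse_version(tag: str) -> dict:
    """Parse tags like 'p v0.6d' or 'p v0.2fd'.

    Scheme: [base model version] [dataset version] [other flags],
    where p = anima preview, f = finetuned, d = cfg distilled.
    """
    m = re.fullmatch(r"(p)\s+v([\d.]+)([fd]*)", tag)
    if not m:
        raise ValueError(f"unrecognized version tag: {tag}")
    base, dataset, flags = m.groups()
    return {
        "base_model": {"p": "anima preview"}[base],
        "dataset_version": dataset,
        "finetuned": "f" in flags,
        "cfg_distilled": "d" in flags,
    }

print(parse_version("p v0.2fd"))
```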

