Updated: Apr 13, 2026
Base model: see the Update Log section for version info.
RDBT [Anima]
A finetune of circlestone-labs/Anima.
Dataset: ~60k images, handpicked, high-aesthetic only, zero AI slop. All captions are natural language from Gemini. Includes sub-datasets for common enhancements such as eyes, faces, hands, clothes, lighting, backgrounds, etc.
Trained as a LoRA for better training and distribution efficiency, then cfg/dmd2-distilled for better stability and quality.
No overfitted default style. Still creative, but more stable and aesthetic.
Restrictions: Sharing merges using this model is not allowed. If you think this LoRA is useful, please share the link or the LoRA file.
If you are using a "custom" base model and this LoRA "breaks" it, instead of blaming the LoRA, look at the problem from a different angle: has that custom base model already merged this LoRA? If so, you merged it twice and the weights collapsed.
Usage:
Base model
Download the corresponding pretrained base model (official HF link).
If you can't load the LoRA, or just want a "finetuned base model" and don't want to deal with LoRA at all, you can download this base model, which already has the LoRA merged in: https://civitai.com/models/2356447.
Prompt
Prefer natural-language prompts. Prompt structure: style, subject, action, background.
You can omit all quality tags. The training data is higher quality than so-called "masterpiece" images, so quality tags have no noticeable effect.
There are two "rough" trigger words:
"digital anime illustration": 2D anime.
"digital art": 2D art that is not anime, mostly digital paintings. (Not many samples.)
Recommended settings:
"er_sde", "euler" or "euler_a" sampler.
dmd2 distilled: 8~16 steps. cfg distilled: 20~30 steps.
cfg scale 1~3. Prefer cfg 1 (guidance disabled; smoother sampling process, 2x faster). Enable cfg (cfg > 1) if you need stronger prompt adherence (e.g. the style comes out too weak). High cfg is not necessary; all cover images use cfg 1.
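The "2x faster" claim at cfg 1 follows from how classifier-free guidance works: when cfg > 1, each sampling step runs two model forward passes (conditional and unconditional), while cfg 1 disables guidance and runs only one. A rough cost model, as a sketch:

```python
def model_evals(steps: int, cfg: float) -> int:
    """Total diffusion-model forward passes for one image.

    With cfg > 1, classifier-free guidance evaluates the model twice
    per step (conditional + unconditional); at cfg == 1 guidance is
    effectively off and only one pass per step is needed.
    """
    passes_per_step = 2 if cfg > 1 else 1
    return steps * passes_per_step

print(model_evals(steps=12, cfg=1))  # dmd2 range, guidance off -> 12 passes
print(model_evals(steps=12, cfg=2))  # guidance on -> 24 passes (2x the cost)
print(model_evals(steps=25, cfg=1))  # cfg-distilled range, guidance off -> 25
```

This ignores scheduler overhead, but it is why enabling cfg roughly doubles generation time at the same step count.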
FAQ:
Why distilled?
Distillation has become mainstream: the most popular current models are distilled (zimage turbo, flux klein, ...). It is not just about speed; distillation also delivers higher quality and stability. That's how distillation works: you make the model reinforce the good aspects and forget the bad ones.
If you still think distillation is a bad thing, you're probably looking at reviews from 3 years ago.
cfg distillation vs. dmd2:
cfg distillation: basic. A more stable sampling process, which yields better textures and details. Small stability improvement. E.g.: img.
dmd2 (Improved Distribution Matching Distillation): the most popular method. cfg + step distillation. Huge stability improvement.
I recommend dmd2 for most users: it has higher stability and overall quality, and is also faster.
Use the cfg-distilled version if you prefer a pure aesthetic/creative mode and can handle stability issues (creative also means chaotic; they are the same thing).
Update log
[base model version] [finetuned model version] [distillation method] [distillation version]
f = finetuned, d = cfg distilled, dmd2 = distribution matching distillation.
===============
(4/1/2026) p3 v0.24fd b: Fixed bug in previous version.
(4/10/2026) p3 v0.24f dmd2: roughly 12~16-step distillation, but that barely matters: vanilla Anima can converge in 20 steps, so just pretend there is no step distillation (8 steps is still doable, though). This is intentional: low-step distillation produces an AI-slop style without complex texture. I want to use dmd2 to improve stability, not speed (it can do both, but speed is not my goal).
(4/8/2026) p3 v0.24fd: Rebased on preview3. The finetuned base model used 40% fewer steps than v0.23. Less overfitted (?).
Update: There is a bug in distillation that causes huge quality downgrade when using without cfg.
(4/4/2026) p2 v0.23f dmd2 b: Different distillation settings. Almost a 4 steps dmd2. Maximum stability.
(4/4/2026) p2 v0.23f dmd2: 8 steps dmd2. First dmd2 anima (?).
(3/28/2026) p2 v0.23fd: Rebased on preview2. Distillation: improved small details and stability (removed a regularization in distillation target and changed to second-order method).
Voting result: v0.20fd won. Thanks for the feedback.
(3/24/2026) preview1 v0.20fd b: Distillation: Different settings optimized for anime, high contrast and saturation.
(3/23/2026) p1 v0.20fd: Dataset: more furry. Finetuned base model: continued from v0.12 with 100% more steps. Distillation: fixed the noisy pixels this time, really.
(3/14/2026) preview v0.19fd b:
Updated dataset. Some private datasets have been dropped. You might notice the style changed.
Fixed the high-freq artifacts from v0.12; you should now get a clean image without noisy pixels.
b: Testing new distillation settings. Higher contrast. Aligned with common anime art.
(2/19/2026) preview v0.12fd:
Better stability and details, extended dataset.
(3/8/2026) preview v0.11fd 512px:
Proof-of-concept version for v0.12. Same dataset and settings as v0.12, except it was trained at 512px resolution.
Released by request, as it can be very useful: running the model at 512px with cfg 1 is extremely fast (~10x faster, e.g. 30s -> 3s). If you don't have a beefy GPU, you can use this version to test your ideas/prompts in a few seconds.
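The ~10x figure is plausible from simple arithmetic. This back-of-envelope sketch assumes the usual generation resolution is 1024px (my assumption; the page above does not state it):

```python
# Rough cost model for the "~10x faster" claim of the 512px version.
full_res, test_res = 1024, 512          # 1024px is an assumed baseline

pixel_ratio = (full_res / test_res) ** 2  # 4x fewer pixels per image
cfg_ratio = 2                             # cfg 1 skips the unconditional pass

speedup = pixel_ratio * cfg_ratio         # ideal linear-cost estimate
print(speedup)                            # -> 8.0
```

That gives ~8x under a linear-in-pixels cost model; since attention cost grows faster than linearly with token count, the observed ~10x (30s -> 3s) is in the same ballpark.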
(2/12/2026) preview v0.6d:
cfg-distilled only, no finetuning. Cover images use Animeyume v0.1.
(2/3/2026) preview v0.2fd:
Speedrun attempt, mainly to test the training script. Limited training dataset: only "1 person" images plus a little bit of furry. But it works, and far better than I expected.

