
RDBT - Anima

Updated: Mar 22, 2026

Verified: SafeTensor

Type: LoRA

Stats: 104 · 0 reviews

Published: Mar 9, 2026

Base Model: Anima

Usage Tips: Strength 1

Hash: AutoV2 CCC20059A4

Note: still based on preview1. Do not use this LoRA on preview2. Mismatched base models = invalid weights = low quality.

p2 doesn't have better knowledge. I also believe RDBT already has better prompt adherence than p2. All my captions are natural language, generated by Google Gemini, and very comprehensive. LLMs nowadays are crazy.

I tried p2 and finetuned it as well, but the quality is lower than p1 in my opinion. Styles are a little bit weird.

So I'm sticking to p1, maybe until a big update.


RDBT [Anima]

Finetuned circlestone-labs/Anima. Experimental, but works.

CFG distilled to further improve quality and stability. You can also use CFG 1 to generate 2x faster.
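The 2x speedup comes from how classifier-free guidance works: with CFG > 1, every sampling step runs the denoiser twice (once conditioned on the prompt, once on the negative/unconditional prompt), while CFG 1 skips the second pass. A minimal arithmetic sketch (the function name is mine, for illustration only):

```python
def unet_passes(steps: int, cfg: float) -> int:
    """Total denoiser forward passes for one image.

    With classifier-free guidance (cfg > 1), each sampling step runs two
    passes: one conditioned on the prompt and one on the negative /
    unconditional prompt. At cfg == 1 the unconditional pass is skipped,
    halving the work per step.
    """
    passes_per_step = 2 if cfg > 1 else 1
    return steps * passes_per_step

# 30 steps at cfg 3 vs. cfg 1: 60 vs. 30 passes, hence the ~2x speedup.
print(unet_passes(30, 3.0))  # 60
print(unet_passes(30, 1.0))  # 30
```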

Dataset:

~15k images. Covers common enhancements such as hands, clothes, lighting, backgrounds, etc.

Natural language captions from Gemini. Comprehensive and accurate, for better prompt adherence.

Usage:

This model is trained as a LoRA, for better training and distribution efficiency. All you need to do is download this ~80 MiB LoRA and load it onto its base model (the base model the LoRA was trained on). If you don't know what this means, or which base model is the correct one, you can download and use this base model, which has this LoRA already merged in: https://civitai.com/models/2356447.

Prefer natural language prompts. Prompt structure: style, subject, action, background.
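The four-part structure above can be sketched as a tiny helper that assembles the pieces in the recommended order (the function name and example text are my own, not from the model page):

```python
def build_prompt(style: str, subject: str, action: str, background: str) -> str:
    """Join the four recommended prompt parts in order:
    style, subject, action, background. Empty parts are skipped."""
    parts = (style, subject, action, background)
    return ", ".join(p.strip() for p in parts if p.strip())

# Hypothetical example prompt, in the recommended order.
prompt = build_prompt(
    "flat anime illustration with soft colors",
    "a girl in a school uniform",
    "reading a book",
    "in a sunlit library",
)
print(prompt)
```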

Omit all quality tags; you don't need them. The average output quality of this model should be higher than so-called "masterpiece".

Recommended settings:

  • "Euler a" sampler.

  • 20~30 steps.

  • CFG scale 1~3. Note: CFG 1 disables the negative prompt and is 2x faster. Cover images use CFG 1. CFG > 1 gives better prompt adherence.

  • "Adjust Contrast" node to boost contrast (when using CFG 1).
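"Adjust Contrast" refers to a ComfyUI node; outside ComfyUI, a rough stand-in is Pillow's ImageEnhance.Contrast (the factor below is an illustrative guess, not a recommendation from the author):

```python
from PIL import Image, ImageEnhance

def boost_contrast(img: Image.Image, factor: float = 1.2) -> Image.Image:
    """Rough stand-in for a contrast-boost node: factor > 1 pushes pixel
    values away from the image mean; 1.0 leaves the image unchanged."""
    return ImageEnhance.Contrast(img).enhance(factor)

# Demo on a tiny synthetic image: one dark pixel (50), one bright (200).
img = Image.new("L", (2, 1))
img.putpixel((0, 0), 50)
img.putpixel((1, 0), 200)
out = boost_contrast(img, 1.3)
# After boosting, darks get darker and brights get brighter.
print(out.getpixel((0, 0)), out.getpixel((1, 0)))
```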

If you want to use other "finetuned base model":

Reminder again: this is a finetuned model with a relatively big dataset, but trained as a LoRA for better training and distribution efficiency. It should be used as a finetuned model. It is NOT a tool to fix or improve things for other base models.

Using this LoRA with a finetuned base model is effectively merging two finetuned models, and the outcome won't be straightforward. So I don't recommend it. Any problems that happen with a "finetuned base model" are unrelated to this LoRA.

But if you insist:

Use a "trained" base model: if a base model was trained by its creator, they will definitely say so on the model page: its base model, dataset, training settings, etc. Just make sure this "trained" base model shares the same base model as this LoRA. A "trained" base model is stable, because it stays very close to its base model.

Do not use a "merged" base model. If you can't find any info about training on the model page, you can assume the base model was not trained by the creator, so it's probably a merge. You never know what's inside a merged model. A "merged" base model is usually unstable; it might drift very far from its labeled base model if it merged too many LoRAs.

Restrictions:

Sharing merges of this model is not allowed. This is a distilled model; it shouldn't be merged. If you think this LoRA is useful, please share the link or the LoRA file.


FAQ

Why does v0.12+ have low contrast?

The base model of Anima, aka Nvidia Cosmos Predict2, is a model for industrial robotics, not for aesthetics. The distillation method aligns the output with what the base model is good at.

Why CFG distilled?

TLDR: Because distillation can improve quality.

For example, this is what a 30-step sampling process looks like. You can find more examples and workflows in the cover images.

  • Top: RDBT v0.12, cfg 1.

  • Bottom: Anima preview, cfg 4.



Update log

f = finetuned, d = cfg distilled, preview = base model is anima preview

Recommended:

(3/14/2026) preview v0.19fd b:

Updated dataset. Some private datasets have been dropped. You might notice the style changed.

Fixed high-freq artifacts in v0.12, now you should get a clear image without noisy pixels.

b: Testing new distillation settings. Higher contrast. Aligned with common anime art.

(2/19/2026) preview v0.12fd:

Better stability and details, extended dataset.

===============

Old:

(3/8/2026) preview v0.11fd 512px:

Proof-of-concept version for v0.12. Same dataset and settings as v0.12, except it was trained at 512px resolution.

Released by request, as it might be very useful. Running the model at 512px with CFG 1 is extremely fast (10x faster, e.g. 30s -> 3s). If you don't have a beefy GPU, you can use this version to test your ideas/prompts in a few seconds.

(2/12/2026) preview v0.6d:

CFG distilled only. No finetuning. Cover images use Animeyume v0.1.

(2/3/2026) preview v0.2fd:

Speedrun attempt, mainly for testing the training script. Limited training dataset: only covered "1 person" images plus a little bit of "furry". But it works, and way better than I expected.