
Anima Preview 2: simple workflows for generation & inpainting/refining + tips & info

Updated: Apr 8, 2026

Type: Workflows
Stats: 389 · 0 Reviews
Published: Mar 31, 2026
Base Model: Anima
Hash: AutoV2 5F8AF514B2

What is this?

A couple of simple workflows for Anima preview 2. One for generation, one for inpainting/refining/img-to-img.

There's also info & tips inside the workflows, as well as below in this post.

All images in this post were generated in Anima using the generation workflow, and then further edited using the inpaint workflow.

Update 2026-04-08: Note on Preview 3

Warning: do not expect good results from this workflow with Preview 3. The new preview has different prompt/step/sampler requirements for ideal results, and I haven't figured them out yet. I'm working on it and will add a separate workflow for preview 3 when it's ready.

You're welcome to mess around with Preview 3 on your own of course, but if you just want good results right now you should stick with Preview 2 for this workflow!

Nodes & Models

Custom Nodes:

RES4LYF - A very popular set of samplers & schedulers, and some very helpful nodes. We want this for the ClownShark Ksampler, which massively helps with the Anima model's consistency.

RGTHREE - (Optional) A popular set of helper nodes. If you don't want this you can just delete the seed generator and lora stacker nodes, then use the default comfy lora nodes instead. RES4LYF comes with a seed generator node as well, I just like RGTHREE's more.

ComfyUI Essentials - (Inpaint workflow only) Adds a bunch of very helpful nodes. We're just using it here for its number comparison node so we can switch between the image-to-image and inpainting modes automatically, which works around a limitation of the LayerStyle node below.

ComfyUI LayerStyle - (Inpaint workflow only) Adds a ton of nodes for image transformations, similar to the tools in photoshop. We're using this for its image blending node, which allows us to blend two images using a semi-transparent mask.
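If you're curious what that blending step actually does under the hood, here's a rough sketch in plain numpy. This is not the LayerStyle node's actual code or API - just an illustration of blending two images through a semi-transparent mask:

```python
# Illustrative sketch (NOT the LayerStyle node's actual implementation):
# blending two images with a semi-transparent mask, as the inpaint
# workflow does when compositing the refined region back into the original.
import numpy as np

def blend_with_mask(base, edit, mask):
    """base, edit: HxWx3 float arrays in [0, 1]; mask: HxW float in [0, 1].

    Where mask == 1 the edited image shows through fully; fractional mask
    values give a semi-transparent blend of the two images.
    """
    m = mask[..., None]                    # broadcast mask across channels
    return base * (1.0 - m) + edit * m

base = np.zeros((4, 4, 3))                 # black image
edit = np.ones((4, 4, 3))                  # white image
mask = np.full((4, 4), 0.5)                # 50% opacity everywhere
out = blend_with_mask(base, edit, mask)    # mid-grey result
```

A hard 0/1 mask would give a sharp paste-in; the semi-transparent values are what let inpainted edits feather smoothly into the surrounding image.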

Models:

Main Model: Anima Preview 2

Text Encoder: Qwen 3 0.6b

VAE: Qwen Image VAE

The main model is only around 4.5GB, which means you can run it with as little as 6GB VRAM.

Why use Anima over Illustrious/SDXL?

You might not want or need to, but Anima does have at least one big thing going for it: you can use plain language to describe your images. Being forced to use booru tags with the SDXL models severely limits what you can get out of them, and being able to use normal language opens up a lot. Also it's just pretty damn good already, and it's only going to get better from here.

Info & Tips

ClownShark Ksampler - what is it and why are we using it?

It's a node from the RES4LYF pack. It works the same as a normal sampler, but with a few differences:

  1. "ETA". This setting adds extra noise during sampling using some fancy math, which generally pulls more detail out of generations and makes them more stable. It's insane how much of a difference it makes with some models. Anima is normally a bit janky to work with, but ETA has a massive stabilising effect on its output quality, with no drawbacks.

  2. "bongmath". This setting turns on bongmath. It's some kind of black magic that improves sampling results without any downsides - but only if you're using the sampler a certain way. It does nothing in these particular workflows. Someone tries to explain what it is here: https://www.reddit.com/r/StableDiffusion/comments/1l5uh4d/someone_needs_to_explain_bongmath/

  3. It has access to a ton of alternative samplers/schedulers. We're not using them here, but for some models they're really, really, really good.

  4. It has some funky mechanisms for doing follow-on/continuous generation by chaining multiple samplers together (this is what the "sampler_mode" setting is for). We're not using that in these workflows either.

So, essentially we're just using it for the ETA setting in these workflows. But boy is it worth it.
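To give a feel for what an eta-style setting does, here's a toy sampler step. This is emphatically NOT RES4LYF's actual math - it's the general DDIM/ancestral-style idea that eta controls how much fresh noise gets re-injected at each denoising step, with eta = 0 being fully deterministic:

```python
# Toy illustration of an "eta"-style sampler step (NOT RES4LYF's actual
# math). eta splits the move to the next noise level between a
# deterministic part and freshly injected random noise.
import numpy as np

rng = np.random.default_rng(0)

def step(x, denoised, sigma_cur, sigma_next, eta):
    sigma_up = eta * sigma_next                       # noise to re-inject
    sigma_down = np.sqrt(max(sigma_next**2 - sigma_up**2, 0.0))
    d = (x - denoised) / sigma_cur                    # current noise direction
    x = denoised + sigma_down * d                     # deterministic move
    if sigma_up > 0:
        x = x + sigma_up * rng.standard_normal(x.shape)  # extra noise
    return x

# With eta = 0 this reduces to a plain deterministic step.
x = np.ones((2, 2))
out = step(x, np.zeros((2, 2)), sigma_cur=1.0, sigma_next=0.5, eta=0.0)
```

Higher eta means more randomness folded back in at each step, which (hand-wavingly) is where the extra detail and the stabilising effect on Anima come from.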

Upscaling

I upscaled some of these post images using 2xAnimeSharpV4. If you need an upscale workflow plus links to upscale models, the one I use is here: pastebin

And some of them I upscaled using SeedVR2, workflow for that here: pastebin

Note that the SeedVR2 upscaler requires at least 16GB of VRAM; it's a beefy model even when you're only upscaling a single image.

Generation Notes

The settings I've left inside give very good results. You can also vary the ETA setting on the sampler from 0.2 to 0.5 and the steps from 22 to 30 to get different types of results. More steps = deeper shadows, which can be good or bad depending on the style you want. I prefer around 24 steps personally.

If you want stuff to lean more towards anime, just put "anime" or "digital anime" at the end of the basic positive prompt starter. You can play around with other style prompts like "digital painting" too, or even specific artist styles. Or just don't specify and see what happens.

It also knows pretty much all the popular characters already so try typing those in.

A Limitation

The Anima model uses the Qwen Image VAE, which means you'll sometimes get a noticeable grid-like halftone pattern in your images. This effect is more noticeable at higher resolutions. You can alleviate it by using the SeedVR2 upscaler, but that's not always desirable (it's not always the best upscaler for the job). I'm looking into it but haven't found another solution yet - if you happen to know a way to fix it, please let me know!

Note on resolutions

I've left some good starting resolutions in there, but the model is capable of going higher. Just start at a lower res, then resize the result and feed it into the inpainting workflow to do img-to-img at around 0.40 denoise or so. I've tested 1440p and it works pretty well.
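If you're wondering what a denoise value like 0.40 actually means: roughly, it decides how much of the step schedule gets re-run on your input image. This little sketch is illustrative only (ComfyUI's KSampler handles this internally), but it shows the general idea:

```python
# Rough sketch of how a denoise value maps onto sampler steps for
# img-to-img. Denoise 0.40 means only the last ~40% of the schedule runs,
# so the output stays fairly close to the input image. Illustrative only -
# ComfyUI's KSampler does this internally.
def img2img_steps(total_steps, denoise):
    start = round(total_steps * (1.0 - denoise))
    # (first step index, number of steps actually run)
    return start, total_steps - start

start, run = img2img_steps(24, 0.40)   # 24 total steps at 0.40 denoise
```

Lower denoise values preserve more of the resized image (good for upscaling to 1440p without drifting from the composition); higher values let the model rework it more aggressively.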