
Pohjolantaika


Updated: Oct 13, 2025

Tags: base model, anime, semi-realistic

Verified: SafeTensor

Type: Checkpoint Merge


Published: Jul 9, 2025

Base Model: Illustrious

Hash: AutoV2 110A269EAC

NOTE! About 1.3b:

This might be the last version, as the HDD containing all my Stable Diffusion checkpoints and images broke a few days after I made this version. So I can't really remake it anymore, as the recipe was lost as well.

I have a separate instance for SD generation that contained some of the files, but most of them were lost.


NOTE! I've added a guide for inpainting in Forge to this info text, with some tips if you are interested; see below.


This is an Illustrious-based, sort of semi-realistic anime-style checkpoint merge.

There are plenty of other (even better) checkpoints, but they are heading in other directions. So I made an Illustrious merge for personal use, with emphasis on some styles, Loras, and things I generally like.

So there's no guarantee it does anything in particular. Some things probably broke along the way too.

This has no grand direction; it's more like whatever I'm messing around with at the time.


Briefly, about inpainting/denoising for 1.2b in Forge:

You can use a rather high denoising strength in Forge, like 0.4 or 0.5, to bring out details while still retaining the picture composition, if you mask items individually, or clothing piece by piece (like a shirt or a single armor piece).

This works nicely on amulets or smaller detailed items.

0.1-0.2 denoising works for plain cloth or items that do not need intricate details.

All of this depends on the prompt: whether it's stable and whether any Loras are in use.

The base model itself can do this fine with a stable prompt. The more Loras are added, the harder inpainting gets, as the weights start to fluctuate over a small inpainted area.

One example: if one Lora has higher weights, e.g. for a face, it will draw that instead of the item when high denoising is used and the piece has appropriate colors, etc. (much the same as with ADetailer).

Of course, you can adjust the prompt and remove parts or Loras while inpainting, then return to the full prompt and try another piece.


Longer guide for inpainting in Forge:

1a. Make an image with txt2img

  • You can make it with or without hires fix

  • For a 1.25x or 1.5x hires fix upscale, you can use 0.4 or 0.5 denoising (depends on the prompt or Loras in use)

  • After generation, below the image, choose the palette icon with the tooltip "Send image and generation parameters to img2img inpaint tab."

1b. OR input image from PNG Info tab:

  • It can be any image you have made previously, as long as it still has the generation information embedded

  • Drag and drop the image onto the "drag image here" area, or click it to open a file dialog

  • Afterwards, choose "Send to inpaint"

    NOTE! Doing this will most likely reset the "Inpaint area" setting to "Whole picture" in the inpaint tab, so remember to change it back to "Only masked"; more on this below.
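As a side note, the "generation information" mentioned above lives inside the PNG file itself: images saved by the WebUI carry it in a `tEXt` chunk keyed `parameters`, so you can also inspect it outside the PNG Info tab. A stdlib-only sketch, assuming that chunk layout (the function name is mine; CRCs are skipped, not verified):

```python
# Read the WebUI's "parameters" tEXt chunk from raw PNG bytes.
# Assumption: the image was saved by A1111/Forge, which embeds the generation
# parameters under the key "parameters".
import struct

def read_parameters(data: bytes):
    pos = 8  # skip the 8-byte PNG signature
    while pos + 8 <= len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        chunk = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, val = chunk.partition(b"\x00")
            if key == b"parameters":
                return val.decode("latin-1")
        pos += 8 + length + 4  # chunk header + data + CRC
    return None

# Usage: read_parameters(open("image.png", "rb").read())
```

If this returns None, the image has no embedded parameters and "Send to inpaint" from PNG Info will not carry any settings over.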

2. Doing it this way should automatically take the UI to the img2img -> Generation -> Inpaint tab

3. A few things to check before doing any inpainting:

a. In general these should be filled in automatically, but do a quick double-check. They should all match the values the image was generated with:

  • Sampling method

  • Schedule type

  • Sampling steps

  • CFG Scale

  • Seed

b. Inpaint Area setting:

  • Change it to "Only masked" to inpaint only the masked area and leave the rest of the picture untouched

  • This will enhance the details in the masked part (item/clothing piece/etc.)

c. Resize to / Resize by setting:

  • Change the setting to "Resize by" with Scale: 1

  • This forces an upscale of the masked part, but does not increase the overall proportions of the image

  • This usually lets the details come out better when inpainting, and also allows several inpainting passes on one image, one after another

d. Additional settings; you shouldn't need to change these:

  • Resize mode: Just resize

  • Masked content: original

    NOTE! If you have lama cleaner, you can choose it here to remove watermarks. Just don't use it for regular inpainting!
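For reference, the checklist above maps directly onto the JSON API that Forge inherits from the AUTOMATIC1111 WebUI (`/sdapi/v1/img2img`, available when Forge is started with `--api`). A minimal sketch; the field names come from that API, the values mirror the recommended UI settings, and the helper name is my own:

```python
# Sketch: the settings checklist expressed as an /sdapi/v1/img2img payload.
# build_inpaint_payload is a hypothetical helper, not part of any library.
def build_inpaint_payload(image_b64, mask_b64, prompt, *, seed, steps,
                          cfg, sampler, width, height, denoise=0.5):
    return {
        "init_images": [image_b64],        # base64-encoded source image
        "mask": mask_b64,                  # base64-encoded mask (white = inpaint)
        "prompt": prompt,
        "seed": seed,                      # 3a: same seed as the generation
        "steps": steps,                    # 3a: same sampling steps
        "cfg_scale": cfg,                  # 3a: same CFG scale
        "sampler_name": sampler,           # 3a: same sampling method
        "denoising_strength": denoise,
        "inpaint_full_res": True,          # 3b: inpaint area "Only masked"
        "inpainting_fill": 1,              # 3d: masked content "original"
        "resize_mode": 0,                  # 3d: "Just resize"
        "width": width, "height": height,  # keep equal to the source size,
    }                                      # i.e. "Resize by" with scale 1

payload = build_inpaint_payload("<base64 image>", "<base64 mask>",
                                "1girl, ornate amulet", seed=12345, steps=28,
                                cfg=6.0, sampler="Euler a",
                                width=832, height=1216)
```

This is only a mapping aid; everything in the guide can be done entirely in the UI.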

4. Denoising strength, depending on what you want to do:

a. NOTE 1: This is most likely something you will be adjusting during inpainting, piece by piece or per masked area you want to change

b. NOTE 2: Sometimes the masked area also affects how the denoising works, so occasionally it may be better to just increase the size of the mask instead of changing the denoising strength

c. Denoising values for different purposes:

  • 0.5 - generally a good starting point for 1:1 increased detail on a piece, but it might "hallucinate" new things into it depending on the prompt. It can bring out the "original improved details" that were lacking from the basic txt2img generation.

  • 0.4 - reduces "hallucination", but still brings out details

  • 0.2 - flattens out cloth or items (like metal) and edges, without adding much "additional detail"

  • 0.1 - just to adjust edges, slight smoothing, etc.

    EXTRA:

  • 0.55 - 0.6 - 0.7 if you want to try to completely redesign a clothing piece or item (necklace/amulet/bracer, whatnot). This also works on hands if you want to get rid of extra fingers, but it really depends on the masked area size too (it can be hectic at times).
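The value list above is handy to keep around as a small lookup when scripting or just as a cheat sheet; a sketch (the preset names are my own shorthand, not Forge terminology):

```python
# Denoising presets from the guide; the names are my own shorthand.
DENOISE_PRESETS = {
    "redesign":    0.6,  # 0.55-0.7: completely redesign a piece, fix hands
    "detail":      0.5,  # 1:1 increased detail, may hallucinate new things
    "detail_safe": 0.4,  # fewer hallucinations, still brings out details
    "flatten":     0.2,  # plain cloth / metal and edges, no intricate detail
    "smooth":      0.1,  # edge adjustment and slight smoothing only
}

def pick_denoise(purpose: str) -> float:
    """Look up a starting denoising strength; adjust per masked area."""
    return DENOISE_PRESETS[purpose]
```

These are starting points, not rules; as the guide says, expect to adjust per piece.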

5. Controls for the actual inpainting / masked-area editing in the visible image section:

Small control buttons appear at the top of the image when the mouse hovers over the image section:

  • Use left mouse button to mask an area for inpainting

  • Use right mouse button to move the image in the inpaint section

  • Undo button will remove last masked area

  • Redo will bring back last masked area, if it was removed with undo

  • Reset will clear all the masked areas (but pressing undo will bring the previous ones back)

  • Center position will reset the zoom and center the image

  • Mouse wheel to zoom in / zoom out

  • Remove will remove the image from inpainting (do not press unless really sure!)

  • Brush size slider, to increase or decrease the size of the brush used to mask the area

6. Actual inpainting:

  • Zoom in into the area you want to focus on

  • Use mouse to mask the area with the brush

  • Limit the mask to the edges of the item, like a piece of clothing. This model likes to draw black outlines, which you can use as a guide

  • Occasionally it works better if you make the mask a bit larger than the actual piece; this lets the model better retain the "idea" of the item it had during generation.

  • If it's a sword in a hand, you may want to mask some extra, like the sword hilt, the hand, and part of the arm/sleeve as well, as otherwise the inpainting might do wacky things to the hand/fingers

  • E.g. for swords, you can do the blade and hilt separately. You may have to experiment depending on the item

  • For pieces of clothing/armor it depends on the piece: either the complete piece, or section by section, like the center piece first and then the sleeves separately (or shoulder/elbow/etc. pieces one by one)

  • Generally, experience helps: the more you do, the better you get at adjusting the mask for a given situation or type of content (2D/3D/semi-realistic, etc.).

  • Try to limit the size of the inpainted area; do not make it TOO large, or the quality will drop again and you will not get the increased details.

  • It might be better to do garments layer by layer: first the ones beneath, then the top ones, to improve quality. But sometimes it works better to mask them all, and the generation will fix seams and layers (like a shirt tucked under pants) to look better visually; or you can inpaint just the seam itself, or something like corset strings.

  • When you're happy with the mask, click Generate and it will inpaint the area under the mask

7. After inpainting:

  • If the actual size of the item changed (edges/hair/threads or finger positions), "artifacts" may be left behind on the edges of the item/area. These can be removed by enlarging the mask a bit on that edge; the artifacts will disappear on the next generation (generate the picture again).

    HOWEVER, this will also change the inpainting results themselves (for better or for worse), so check the result again to see if it's what you want (that's why there are undo/redo buttons!).

  • If you are happy with the changed piece, choose the palette icon below the result image ("Send image and generation parameters to img2img inpaint tab."), and it will refresh the inpainting image to the one you just made. This lets you keep inpainting, moving on to the next piece/part you want to change.

    As the "Resize by" scale is 1, the image size does not increase, which lets you inpaint the same image over and over again.

  • If the result was not what you wanted, or you want to go back a few steps to an image that was better, go to the PNG Info tab, drag and drop the image you want, and send it to inpaint; this will bring that old version back for inpainting.

    NOTE! Doing this will most likely reset the "Inpaint area" setting to "Whole picture" -> change it back to "Only masked". Doing it this way retains the masked area you had previously in inpainting.

  • Also note that the Loras used in generation WILL affect the inpainting result, as they change the weights of the image generation.

    If you have a lot of Loras with high weights, they WILL force themselves into the inpainting, and with high denoising the generation may create a completely new image in the masked area instead of improving the details of the original, as the Lora weights skew the result away from what it's supposed to be doing.

    You can work around this by lowering the Lora weights and adjusting the prompt in general during inpainting.
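Lowering all the Lora weights at once is easy to script if you ever drive this through the API instead of the UI. A sketch using the standard `<lora:name:weight>` prompt syntax (the helper and the 0.6 factor are my choices):

```python
# Multiply every <lora:name:weight> tag's weight by `factor`, so the Loras
# interfere less with a small inpainted area. Helper name is an assumption.
import re

def downweight_loras(prompt: str, factor: float = 0.6) -> str:
    def repl(m):
        name, weight = m.group(1), float(m.group(2))
        return f"<lora:{name}:{round(weight * factor, 2)}>"
    return re.sub(r"<lora:([^:>]+):([0-9.]+)>", repl, prompt)

print(downweight_loras("1girl, <lora:styleA:1.0>, armor, <lora:faceB:0.8>"))
# -> 1girl, <lora:styleA:0.6>, armor, <lora:faceB:0.48>
```

In the UI you would simply edit the weights in the prompt box by hand; this just automates the same edit.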


This inpainting process is not limited to this model; it also works with other models.

However, depending on the model/checkpoint, the details may not improve at all, or it may just produce garbage no matter what you do (just make sure "Inpaint area" really is "Only masked" and the inpainted area is not too large!).

It may simply be that the model is not suited for inpainting; this model should be able to do both generation and inpainting.
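Finally, if you prefer scripting the whole piece-by-piece loop to clicking through the UI, the same process works over the API with nothing but the stdlib. A hedged sketch, assuming a local Forge started with `--api` on the default port; the endpoint is the AUTOMATIC1111-style `/sdapi/v1/img2img`, and the function names are mine:

```python
# Send one inpainting request and return the resulting base64 image, which
# can be fed back as the next init image to inpaint piece by piece.
# Assumption: Forge running locally with --api enabled.
import json
import urllib.request

URL = "http://127.0.0.1:7860/sdapi/v1/img2img"

def build_request(payload: dict, url: str = URL) -> urllib.request.Request:
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def inpaint_once(payload: dict, url: str = URL) -> str:
    with urllib.request.urlopen(build_request(payload, url)) as resp:
        return json.load(resp)["images"][0]  # base64-encoded result image

# Loop sketch: result = inpaint_once(payload)
#              payload["init_images"] = [result]  # next piece, same image size
```

Feeding the result back as the next init image mirrors the "send to inpaint, then do the next piece" cycle described in step 7.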