
🦊 Train Character LoRAs in Illustrious (Step-by-Step Guide)

Hey everyone!
I’ve been asked a lot about how I train my character LoRAs, so I thought I’d finally break it all down for you. This is the exact workflow I use for every LoRA I make — the same process that’s behind all the characters you see here.

It’s not the only way, but it’s what’s been working for me after a lot of trial and error. Hopefully this helps anyone curious about training their own!


🔎 Step 1: Finding the Character

Once I’ve decided who I want to train, it’s time to gather images. The more variety, the better.

Here’s where I usually search:

  • Danbooru (amazing for tagged references and knowing how to correctly tag your characters)

  • Google (filter by a 2MP minimum so you don’t get low-quality images; my floor is 512x512 for body shots and 250x250 for a face/profile picture)

  • Grabber (an open-source app that searches across multiple art sites)

  • rule34.xxx / e-hentai / DeviantArt

  • Original sources like anime, shows, games, 3D models, etc.

📌 Tip: 16 images is enough to train, but I personally go for 100+. It sounds like overkill, but more variety = better results.
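If you want to automate the size filter above, a tiny helper like this works. It's just a sketch of the rule of thumb (512x512 for body shots, 250x250 for face crops); the function name is mine, and you'd feed it dimensions from whatever image library you use (e.g. Pillow's `Image.size`).

```python
# Hypothetical helper for the Step 1 minimum-resolution rule:
# 512x512 for body shots, 250x250 for face/profile crops.

def meets_minimum(width: int, height: int, is_face_crop: bool = False) -> bool:
    """Return True if the image is large enough to keep in the dataset."""
    min_side = 250 if is_face_crop else 512
    return width >= min_side and height >= min_side

# A 640x480 body shot fails (height under 512),
# but the same size is fine as a face crop.
print(meets_minimum(640, 480))                   # False
print(meets_minimum(640, 480, is_face_crop=True))  # True
```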


āœ‚ļø Step 2: Prepping the Images

I used to cut out backgrounds and other characters… but honestly, it’s not necessary.

As long as your dataset isn’t full of repeating junk (text bubbles, duplicate objects, or the same side character multiple times), you’re fine.

Now I only crop when I have to and save myself the headache.
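One bit of the "repeating junk" check that's easy to automate is catching exact byte-for-byte duplicates before training. This is only a sketch (the function name is mine, and near-duplicates or re-saved copies still need eyeballing), but it clears the obvious ones:

```python
# Sketch: sweep a dataset folder for exact duplicate files via SHA-256.
# Near-duplicates (resaves, crops) won't be caught -- those I check by hand.
import hashlib
from pathlib import Path

def find_duplicates(folder: str) -> list[Path]:
    """Return paths whose contents duplicate an earlier file in the folder."""
    seen: dict[str, Path] = {}
    dupes: list[Path] = []
    for path in sorted(Path(folder).glob("*")):
        if not path.is_file():
            continue
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest in seen:
            dupes.append(path)
        else:
            seen[digest] = path
    return dupes
```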


šŸ·ļø Step 3: Tagging in TagGUI

All my images go into one folder, which I open in TagGUI.

Here’s my tagging flow:

  1. Trigger word → A unique label that will always call the character.

    • Examples: 4lv1n, /\lvin, AlvinC

    • Important: make sure it’s not a normal Danbooru word.

  2. Species → furry, human, robot, autobot, etc.

  3. Features → fur/skin color, eyes, scars, unique details.

  4. Clothing → “purple collared shirt,” “white shorts,” “brown sandals.”

💡 I usually follow Danbooru’s tag descriptions for accuracy. If nothing exists, I just make up a consistent one.
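Concretely, each image ends up with a matching .txt caption laid out in the order above: trigger word first, then species, then everything else. A minimal sketch of that layout (the helper name is mine; "4lv1n" is the example trigger word from earlier):

```python
# Hypothetical sketch of the Step 3 caption layout: comma-separated tags,
# with the trigger word and species pinned to the front.

def build_caption(trigger: str, species: str, tags: list[str]) -> str:
    """Comma-separated caption with trigger + species up front."""
    return ", ".join([trigger, species, *tags])

caption = build_caption(
    "4lv1n", "furry",
    ["blue eyes", "purple collared shirt", "white shorts", "brown sandals"],
)
print(caption)
# 4lv1n, furry, blue eyes, purple collared shirt, white shorts, brown sandals
```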


⚡ Step 4: Auto-Tagging (Huge Time Saver)

Manually tagging everything is painful. That’s where auto-tagging comes in.

In TagGUI, I use these two:

  • SmilingWolf/wd-eva02-large-tagger-v3

  • SmilingWolf/wd-vit-large-tagger-v3

Run them both across all images. After that, I go back and delete any conflicting tags so my custom ones stay clean.
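The merge-then-clean step can be sketched like this. It's a simplification (exact duplicates are handled automatically; real conflicts like an auto tag contradicting a hand-written one I still delete by hand), and the function name is mine, not TagGUI's:

```python
# Sketch of the Step 4 cleanup: union the two auto-taggers' outputs, keeping
# my manual tags first and dropping any auto tag that duplicates them.
# Semantic conflicts (e.g. a wrong eye color) still need a manual pass.

def merge_auto_tags(manual: list[str], *auto_sets: set[str]) -> list[str]:
    """Manual tags first, then non-duplicate auto tags in sorted order."""
    manual_lower = {t.lower() for t in manual}
    merged = list(manual)
    for tags in auto_sets:
        for tag in sorted(tags):
            if tag.lower() not in manual_lower and tag not in merged:
                merged.append(tag)
    return merged
```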


📦 Step 5: Getting Ready for Training

When tagging is done:

  • I zip the images + text files together.

  • Upload the ZIP into Civitai’s online LoRA trainer.
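The zipping step is simple enough to script. This is just a sketch (the function name and extension list are mine): it pairs each image with its same-named .txt caption and skips anything untagged, so the ZIP only ever contains complete pairs.

```python
# Sketch of Step 5 packaging: zip each image together with its matching
# .txt caption; images without a caption are left out.
import zipfile
from pathlib import Path

IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".webp"}

def pack_dataset(folder: str, out_zip: str) -> int:
    """Zip image/caption pairs; returns the number of images packed."""
    count = 0
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for img in sorted(Path(folder).iterdir()):
            if img.suffix.lower() not in IMAGE_EXTS:
                continue
            caption = img.with_suffix(".txt")
            if not caption.exists():
                continue  # skip untagged images
            zf.write(img, img.name)
            zf.write(caption, caption.name)
            count += 1
    return count
```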

On the trainer page, it’ll ask for example prompts. Here’s the type I usually give it:

  • 1boy, solo, trigger word, species, close up, side angle, smirk, looking at viewer

  • 1boy, solo, trigger word, species, clothes, full body

  • 1boy, solo, trigger word, species, naked, big penis, testicles


āš™ļø Step 6: Training Settings (My Defaults)

After lots of testing, these are the numbers I stick to:

  • Epochs: 20 → lets me check outputs at 5, 10, 15, and 20.

  • Num Repeats: Whatever makes total Steps ≥ 1180.

  • Shuffle Tags: ON

  • Keep Tokens: 2 → keeps trigger word + species stable.

  • Network Dim: 8 → smaller LoRA file size.

  • Network Alpha: 4 → smaller LoRA file size.

Then hit Submit and let it cook.
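The Num Repeats choice above can be turned into a quick calculation. This assumes kohya-style step counting (total steps ≈ images × repeats × epochs ÷ batch size), which is how these trainers generally work; the function name and batch-size default are mine:

```python
# Sketch: smallest num_repeats that pushes total steps past the target,
# assuming steps = images * repeats * epochs / batch_size (kohya-style).
import math

def repeats_for_target(num_images: int, epochs: int = 20,
                       target_steps: int = 1180, batch_size: int = 1) -> int:
    """Smallest num_repeats so total steps reach the target."""
    steps_per_repeat = num_images * epochs / batch_size
    return max(1, math.ceil(target_steps / steps_per_repeat))

# 100 images at 20 epochs already gives 2000 steps, so 1 repeat is enough;
# 16 images needs 4 repeats (16 * 4 * 20 = 1280 >= 1180).
print(repeats_for_target(100))  # 1
print(repeats_for_target(16))   # 4
```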


šŸ• Step 7: Reviewing Results

Training usually takes an hour or two. Once it’s ready, I download epochs 5 through 20 as usual.

I always compare epochs 5, 10, 15, and 20 side by side. One of them is usually the sweet spot where the LoRA looks both accurate and consistent, and that’s the one I go with.


🎉 Wrapping It Up

And that’s it! That’s my full workflow from start to finish.

The most important lessons I’ve learned are:

  • Gather more images than you think you need.

  • Make your trigger word unique.

  • Don’t waste time cutting backgrounds unless you have to.

  • Always clean up your auto-tags.

  • Test multiple epochs — don’t just grab the first one.

This process has saved me so much time and frustration, and I hope it helps you out too. If you try it, let me know how it goes — or if you discover little tricks of your own along the way!

🧪 How I Avoid “Style Bake-In”

A question I get a lot: “How do you stop the LoRA from learning an artist’s style instead of the character?” Two things make all the difference:

  1. Volume = Neutralization
    The more images you train on, the more the model “averages out” any single artist’s style bias. With 100+ images, even an early epoch can come out character-true with no one style baked in.

  2. Mix Sources/Styles on Purpose
    Blend references from different artists, sites, and mediums (screenshots, fanart, 3D renders, official art, etc.). This forces the LoRA to learn identity features (species, markings, colors, shapes) instead of one stylistic look.

TL;DR: More images + mixed sources = character fidelity without style lock-in.

P.S. Just to clarify: I’ve stopped using fanart for training and now stick to original sources.
