Let's talk about Illustrious LoRAs
Preface
I am currently using kohya_ss and the Derrian Distro training GUI to train my LoRAs. This article mainly covers what I've tried, and I welcome others to discuss too, as there's no official finetune guide.
Guidelines
Higher rates = stronger character features but potential loss in image quality
Lower rates = better image quality but weaker character features
Most character LoRAs work well with a UNET rate around 0.0003 and a TE rate around 0.00003.
Lower learning rates adapt the features better but can also take longer. As for the dataset: say I have 40 images, 5-10 repeats, 10 epochs, and a batch size of 4. That adds up to roughly 500-1000 total steps, and then hopefully the model comes out well trained.
The ideal ratio is typically UNET:TE = 10:1
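The step arithmetic above can be sketched quickly in Python (the helper name is mine, not from any training tool):

```python
import math

def total_steps(images: int, repeats: int, epochs: int, batch_size: int) -> int:
    """Approximate total optimizer steps for a kohya-style training run."""
    # Each epoch sees every image `repeats` times, grouped into batches.
    steps_per_epoch = math.ceil(images * repeats / batch_size)
    return steps_per_epoch * epochs

# 40 images, 5-10 repeats, 10 epochs, batch size 4:
print(total_steps(40, 5, 10, 4))   # 500
print(total_steps(40, 10, 10, 4))  # 1000
```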
UNET Rates (0.0005 - 0.0001):
0.0005: Very strong influence, can overpower the base model. Good for exact character matching but may reduce image quality
0.0003: Balanced influence, commonly used for character Loras
0.0001: Subtle influence, maintains high image quality but character features may be less pronounced
Text Encoder (TE) Rates (0.00005 - 0.00001):
0.00005: Strong text conditioning, helps with character recognition
0.00003: Moderate text influence, good balance for most character Loras
0.00001: Light text conditioning, useful when you want minimal style transfer
Dimension Ranks (DR)
32: Standard/Default rank, good balance of detail and file size
64: Higher detail capture, larger file size
128: Very high detail, much larger file size
256: Maximum detail, extremely large file size
Network Alpha (AR)
Alpha is typically set to match the rank or to sit somewhat below or above it. In kohya-style LoRA training the learned update is scaled by alpha / rank, so alpha below the rank tames the changes while alpha above the rank amplifies them.
Common ratios:
AR = DR / 2: half the rank (e.g. DR 64, AR 32), a common conservative default; some go as low as a quarter of the DR
AR = DR: 1:1 ratio, same as the DR, the standard baseline
AR = DR × 1.5: half again more than the DR
AR = DR × 2: double the DR
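To make the ratios concrete, here is the alpha-to-rank scaling as kohya-style implementations apply it (a minimal sketch; the function name is mine):

```python
def lora_scale(alpha: float, rank: int) -> float:
    # kohya-style LoRA scales the low-rank update by alpha / rank
    return alpha / rank

rank = 64
for alpha in (rank / 2, rank, rank * 1.5, rank * 2):
    print(f"alpha={alpha:g} -> scale {lora_scale(alpha, rank):g}")
# scales come out to 0.5, 1, 1.5, 2 respectively
```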
Tagging SDXL Images
Use Civitai for tagging small datasets; it's free and hassle-free.
Include a negative prompt when your dataset contains 3D or otherwise conflicting art styles for a character LoRA, since we don't want to bake these features into the LoRA. You can do this very easily by adding a combination like the following to your current txt tags:
[Negative Prompt][3d, cgi, render, plastic skin, glossy, ((3d model))]
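If you'd rather not edit every caption by hand, a throwaway script can append the tag combination to each .txt file in a dataset folder. This is my own helper sketch, not part of any tagging tool; adjust NEG_TAGS to taste:

```python
from pathlib import Path

# Assumed tag combination from above; tweak as needed.
NEG_TAGS = "3d, cgi, render, plastic skin, glossy, ((3d model))"

def append_neg_tags(dataset_dir: str) -> None:
    """Append NEG_TAGS to every caption .txt that doesn't already contain it."""
    for txt in Path(dataset_dir).glob("*.txt"):
        caption = txt.read_text(encoding="utf-8").strip()
        if NEG_TAGS not in caption:
            txt.write_text(f"{caption}, {NEG_TAGS}", encoding="utf-8")
```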
The values below are not 100% settled; they are still being figured out.
Basic Character Lora (Base Model's preference)
DR 64, AR 32
- Best for: Simple anime/cartoon characters
- File size: ~70MB
- Good balance of detail and stability
Complex Character Lora
DR 48-64, AR 24-32
- Best for: Most character types
- File size: ~100MB
- Excellent for anime/game characters
Style Lora
Example at: https://civitai.com/models/1007864
DR 128, AR 64 to 32 - seems to work best for a combination of complex features if the style is very detailed; otherwise lower ranks work too.
Learning rates can vary by optimizer:
CAME and RAWR: 0.0002 UNET and 0.00002 TE, needing about 2500 to 3000 steps
AdamW8bit and AdaFactor: between 0.0003-0.0005 UNET and 0.00003-0.00005 TE at around 1000 steps
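Since these optimizers target different total step counts, it can help to work backwards from the target to the number of dataset repeats. A quick sketch, using my own helper and inverting the steps formula from earlier:

```python
def repeats_for_target(target_steps: int, images: int, epochs: int, batch_size: int) -> int:
    # Invert total_steps = images * repeats * epochs / batch_size for repeats
    return round(target_steps * batch_size / (images * epochs))

# With 40 images, 10 epochs, batch size 4:
print(repeats_for_target(1000, 40, 10, 4))  # 10 repeats for AdamW8bit / AdaFactor
print(repeats_for_target(2500, 40, 10, 4))  # 25 repeats for CAME
```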
Prodigy
I spent the last 7 hours trying to get this optimizer to work. I'm not frustrated... just a little disappointed that I can't find good information on it. I went through all the training runs I could, from high steps (3000) down to low (1000), and I even tried different dimension sizes, only to find out... it just doesn't work that well with Pony or Illustrious.
What works?
I'd like to hear what works and doesn't work for illustrious:
Optimizer
Learning rates may change depending on the optimizer chosen.
Scheduler
Network Settings
(DR) Dimension rank 128, 96, 64, 32, 16, 4
(AR) Alpha rank 128, 96, 64, 32, 16, 4
Don't use:
Prodigy
Can use:
AdamW8Bit
Constant
0.0003 LR (TE & UNET) - Aggressive Learning for characters
0.0002 LR - Medium learning for characters (DR 128 AR 64)
AdaFactor (CivitAI Default)
Scheduler
Cosine with restart
0.0005-0.0003 LR (UNET)
0.00005-0.00003 LR (TE)
DR 128-32, AR 64-16 - I usually go with half the network dimension rank
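Pulling the "can use" settings together, this is roughly how they might look as a kohya-style argument set. The flag names follow sd-scripts conventions, the values are the ones from this guide, and the helper function itself is hypothetical:

```python
def character_lora_args(aggressive: bool = False) -> dict:
    """Sketch of the AdamW8bit + constant-scheduler recipe above."""
    lr = 0.0003 if aggressive else 0.0002  # aggressive vs medium learning
    return {
        "optimizer_type": "AdamW8bit",
        "lr_scheduler": "constant",
        "unet_lr": lr,
        "text_encoder_lr": lr,
        "network_dim": 128,
        "network_alpha": 64,  # half the dimension rank
    }

print(character_lora_args())
```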
I hope that whatever I find, and whatever is discussed below, will expand on the above.