
Large Dataset LoRA Tips and Tricks (Google Colab + SD 1.5 optimized)

Large Dataset LoRA Tips and Tricks

Likely first published earlier in 2023.

This guide is optimized for Stable Diffusion 1.5, with considerations for Civitai on-site training and Google Colab. While primarily focused on SD 1.5, updates for SDXL, Pony XL, and other versions are planned.


Introduction

If you're like us and load 300+ images into a folder and then wonder why training takes so long, you're in the right place!

This guide isn't about learning rates or schedulers; those are complex topics best left to advanced users. Instead, we'll focus on how to manage training times and maintain quality when using a 5e-4 UNet learning rate for LoRA training.

Standard Settings

For most anime-based LoRA training, a 5e-4 learning rate for UNet and a 1e-4 learning rate for the text encoder are recommended. This setup typically provides strong learning and high-quality LoRAs. However, learning rates can vary, and precise details are often a matter of preference.
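For reference, here's roughly how those two learning rates would be passed to a kohya-ss style trainer, which is what most Colab LoRA notebooks wrap. This is a minimal sketch, not a full training command; the argument names are assumptions based on common notebooks, so double-check them against the one you actually use.

```python
# Minimal sketch of the "standard settings" above as kohya-ss style arguments.
# Argument names are assumptions based on common Colab LoRA notebooks --
# verify them against the notebook/trainer you actually use.
standard_settings = {
    "unet_lr": 5e-4,                # strong UNet learning rate from this guide
    "text_encoder_lr": 1e-4,        # gentler rate for the text encoder
    "optimizer_type": "AdamW8bit",  # SD 1.5 recommendation (see Optimizers below)
}

# A notebook cell might append these as extra CLI flags for the training script.
extra_flags = " ".join(f"--{key}={value}" for key, value in standard_settings.items())
print(extra_flags)
# --unet_lr=0.0005 --text_encoder_lr=0.0001 --optimizer_type=AdamW8bit
```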

Dataset Size and Training Strategies

Colab Users

50-100 Images

  • Batch size: 1-3

  • Epochs: 7-10

A dataset this size won't strain your Colab session or rental time, but adjusting the batch size and epochs can still shorten training.

100-300 Images

  • Batch size: 2-3

  • Epochs: 5-8

As you approach 300 images, training can slow down. Reducing epochs and slightly increasing batch size can help manage time.

300-500 Images

  • Batch size: 4

  • Epochs: 5

Training with 350-400 images requires balancing batch size and epochs. This configuration is aimed at conserving Colab credits and keeping run time manageable.
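To see how a given batch size and epoch count translate into wall-clock time, estimate the total optimizer steps first. The sketch below does that arithmetic; the seconds-per-step figure is a placeholder you should replace with what your own training log reports.

```python
def estimate_training(num_images, repeats, epochs, batch_size, sec_per_step=1.5):
    """Rough step and time estimate for a LoRA run.

    sec_per_step is a placeholder -- read the real value from your training
    log, since it varies with GPU, resolution, and batch size.
    """
    steps_per_epoch = (num_images * repeats) // batch_size
    total_steps = steps_per_epoch * epochs
    hours = total_steps * sec_per_step / 3600
    return total_steps, hours

# Example: 400 images, 2 repeats, 5 epochs, batch size 4
steps, hours = estimate_training(400, repeats=2, epochs=5, batch_size=4)
print(f"{steps} steps, ~{hours:.1f} h")  # 1000 steps, ~0.4 h at the placeholder speed
```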

A Note for Colab Users

Free Colab users should pay special attention to these guidelines, and even Colab Pro sessions can disconnect before the 5-hour mark. Colab may also throw a script error around the 1.5-2 hour mark, which usually doesn't disrupt training.

500+ Images

  • Batch size: 5-6

  • Epochs: 5

For larger datasets, thoroughly research your schedulers and learning rates to optimize training time. Avoid datasets this large on Colab unless you are experienced.

500-1000 Images

  • Learning rate: 5e-4

  • Repeats: 5-8 for 800+ images (see the folder-naming sketch after this list)

  • Batch size: Maximum of 4
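If your trainer uses the kohya-style folder layout (as most Colab notebooks do), the repeat count is usually set by prefixing it to the image folder's name. A minimal sketch, assuming that convention; the paths and concept name are placeholders.

```python
from pathlib import Path

# kohya-style convention: the number prefixed to the folder name is the
# repeat count applied to the images inside it. Paths and the concept name
# below are placeholders for your own dataset.
repeats = 6          # within the 5-8 range suggested above for 800+ images
concept = "mystyle"  # hypothetical concept/trigger name

dataset_root = Path("/content/dataset")
train_folder = dataset_root / f"{repeats}_{concept}"
train_folder.mkdir(parents=True, exist_ok=True)
print(train_folder)  # /content/dataset/6_mystyle
```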

1000+ Images

We have successfully trained on over 1,000 images, with slight quality trade-offs due to fewer repeats. Increase total steps as much as possible to retain the style.

General Tips for LoRA Training on Stable Diffusion

  1. Learning Rate Management: Adjust based on dataset size to prevent overfitting.

  2. Scheduler Optimization: Experiment with different learning rate schedulers.

  3. Augmentation: Use data augmentation to increase dataset size and improve model performance.

  4. Validation: Keep a portion of the dataset for validation to monitor overfitting (a simple split is sketched after this list).

  5. Regularization Techniques: Implement dropout, weight decay, or other regularization methods.

  6. Mixed Precision Training: Use mixed precision training to speed up the process and reduce memory usage.

  7. GPU Utilization: Ensure full GPU utilization by checking for bottlenecks in data loading and preprocessing.

  8. Experimentation: Track experiments to understand the impact of different settings.
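For tip 4, the simplest way to hold out a validation slice is to move a random sample of images (and their caption files) into a separate folder before zipping the dataset, then compare generations against it as training progresses. A minimal sketch; the paths and the 10% split are placeholders.

```python
import random
import shutil
from pathlib import Path

# Hold out roughly 10% of the images as a validation set before training.
# Paths and the split fraction are placeholders -- adjust to your setup.
src = Path("/content/dataset/train")
val = Path("/content/dataset/val")
val.mkdir(parents=True, exist_ok=True)

images = [p for p in src.iterdir() if p.suffix.lower() in {".png", ".jpg", ".jpeg", ".webp"}]
random.seed(0)  # fixed seed so the split is reproducible
holdout = random.sample(images, k=max(1, len(images) // 10))

for img in holdout:
    shutil.move(str(img), val / img.name)
    caption = img.with_suffix(".txt")  # move the matching caption file, if any
    if caption.exists():
        shutil.move(str(caption), val / caption.name)
```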

Civitai Trainer Specifics

Civitai's trainer supports varied settings for different models, including:

  • SD 1.5 with Anime, Realistic, Semi-Realistic, and SD 1.5 Base models

  • SDXL

  • Pony XL

  • Custom Training Models

Image Limitations

  • Auto-Tagging on Site: Maximum of 1,000 images in the zip file without captions.

  • Captioned Off-Site: Maximum of 1,000 files in total (a quick way to check your zip is sketched below).
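Before uploading, it's worth counting the entries in your zip so you don't hit that limit. A minimal sketch; the archive path is a placeholder.

```python
import zipfile

# Count the entries in the upload zip against Civitai's 1,000-file limit.
# The archive name is a placeholder for your own file.
archive = "dataset.zip"

with zipfile.ZipFile(archive) as zf:
    files = [name for name in zf.namelist() if not name.endswith("/")]  # skip folder entries

print(f"{len(files)} files in {archive}")
if len(files) > 1000:
    print("Over the 1,000-file limit -- trim the dataset or split the upload.")
```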

Optimizers

  • SDXL and Pony XL: Use AdaFactor and Prodigy

  • SD 1.5: Use AdamW8Bit

Regardless of the model or optimizer, a learning rate of 5e-4 is generally effective across all setups.
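Put together, the optimizer and learning-rate recommendations in this guide boil down to the summary below. Treat it as a recap of the text above rather than hard settings; the exact optimizer names your trainer expects may be spelled slightly differently.

```python
# Recap of the recommendations above; exact optimizer spellings may differ
# between trainers, so check your own trainer's options.
recommended = {
    "SD 1.5":  {"optimizer": "AdamW8bit",           "learning_rate": 5e-4},
    "SDXL":    {"optimizer": "AdaFactor / Prodigy", "learning_rate": 5e-4},
    "Pony XL": {"optimizer": "AdaFactor / Prodigy", "learning_rate": 5e-4},
}

for model, cfg in recommended.items():
    print(f"{model}: {cfg['optimizer']} @ {cfg['learning_rate']}")
```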

Disclaimer

I'm not a top-tier LoRA trainer writing the definitive guide; these are just tips I've picked up to manage my own training sessions. We're happy to help guide people through Colab training notebooks if you want to create your own LoRAs.

How to Support Us

Feel free to add any tips or tricks you've discovered!
