
SDXL Lora Training Guide [2024 April] [Colab] UPDATED !


UPDATED: April 2024 walkthrough



Notebook: https://github.com/MushroomFleet/unsorted-projects/blob/main/240406_V100_hollowstrawberry_Lora_Trainer_XL.ipynb
Click the "Open in Colab" button.

We will be using my forked notebook, originally developed by Hollowstrawberry.
You must specify a Project name, e.g.:

myLora

Then, in Google Drive, create a "Loras" folder, and inside it a folder named "myLora" to match your project name. Inside that, create the "dataset" folder, which will hold the image/text pairs.
Google Drive dataset path:

content/drive/MyDrive/Loras/myLora/dataset/
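If you prefer to set this up from a Colab cell, here is a minimal sketch (assuming the standard Colab Drive mount at /content/drive; the project name is the example from above):

```python
# Minimal sketch: create the expected folder layout from a Colab cell.
# Assumes Google Drive mounts at /content/drive (the Colab default).
from google.colab import drive
from pathlib import Path

drive.mount("/content/drive")

project = "myLora"  # must match the Project name set in the notebook
dataset_dir = Path("/content/drive/MyDrive/Loras") / project / "dataset"
dataset_dir.mkdir(parents=True, exist_ok=True)
print("Upload your image/text pairs to:", dataset_dir)
```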

A dataset consists of matching pairs: each image has a text file with the same filename,
and the text file contains the caption for the image it is paired with.

e.g. image1.png, image1.txt, image2.png, image2.txt
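As a quick sanity check before training, a sketch like this (paths and extensions are illustrative, using the example project from above) can flag images that are missing their caption file:

```python
# Quick sanity check (illustrative): confirm every image has a caption
# file with the same stem, and report any orphans.
from pathlib import Path

dataset = Path("/content/drive/MyDrive/Loras/myLora/dataset")
image_exts = {".png", ".jpg", ".jpeg", ".webp"}

for img in sorted(dataset.iterdir()):
    if img.suffix.lower() not in image_exts:
        continue
    caption = img.with_suffix(".txt")
    if not caption.exists():
        print("Missing caption for:", img.name)
```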

If you have subsets, you can enable "recursive" under Buckets and Latents Caching; this will read the contents of all the folders in your dataset path.
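For illustration, this is roughly what a recursive scan sees (an approximation of the behaviour, not the trainer's own code; the path is the example one from above):

```python
# Illustrative: with "recursive" enabled the trainer walks subfolders too.
# This mimics that by globbing the whole dataset tree for PNG/caption pairs.
from pathlib import Path

dataset = Path("/content/drive/MyDrive/Loras/myLora/dataset")
for img in sorted(dataset.rglob("*.png")):
    caption = img.with_suffix(".txt")
    status = "ok" if caption.exists() else "MISSING caption"
    print(f"{img.relative_to(dataset)}: {status}")
```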

I have preset the fork to work well with SDXL using Prodigy at batch size 4. This runs well on a V100 instance, which should be accessible on the free tier. Be aware that a large number of images takes a lot of time, and you will be kicked for inactivity on the free tier. Add your project as described, then adjust the image repeats according to the instructions. This is the bare minimum to start a training run.
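For background, part of why Prodigy is convenient is that it adapts its own step size, so the learning rate is set to 1.0 and left alone. A hedged sketch using the prodigyopt package (the placeholder network stands in for the LoRA modules the trainer actually builds, and the flag values are illustrative, not the notebook's exact settings):

```python
# pip install prodigyopt
import torch
from prodigyopt import Prodigy

network = torch.nn.Linear(64, 64)  # placeholder for the LoRA modules
optimizer = Prodigy(
    network.parameters(),
    lr=1.0,                  # Prodigy estimates the effective step size itself
    weight_decay=0.01,       # illustrative value
    decouple=True,           # decoupled (AdamW-style) weight decay
    use_bias_correction=True,
    safeguard_warmup=True,   # commonly recommended when a warmup is used
)
```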

Personally, I train on an A100 at batch size 30, so my exact settings may not be suitable for a T4. You should always adjust the learning rate with your batch size: if you lower the batch size, lower the learning rate and increase the number of epochs. The LR adjustment depends on what you are training, so tune it to your taste. A solid starting point is provided with the April fork.
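As a rough illustration of that adjustment (a rule-of-thumb sketch, not the notebook's code; the numbers are placeholders):

```python
# Rule of thumb (an assumption, not the notebook's logic): when changing
# batch size, scale the learning rate roughly linearly with it, and add
# epochs so the run still sees a similar number of update steps.
reference_lr = 1e-4     # example LR known to work at the reference batch
reference_batch = 4
new_batch = 2

scaled_lr = reference_lr * new_batch / reference_batch
print(f"Suggested starting LR for batch {new_batch}: {scaled_lr:.1e}")
```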

Be sure to watch the video, as I run through the notebook explaining some details I may have missed here. There are many guides for and against different "recipes" for training LoRAs. This notebook lets you try them all to find your best approach. I like to use:

LoRA (LoCon), 32, 16, (16, 1)

However, Prodigy will set Alpha = Dim, as indicated in the trainer.
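For reference, here is one hedged reading of those numbers mapped onto kohya-style/LyCORIS argument names (assumed conventions, not copied from the notebook):

```python
# Assumed mapping of "LoCon, 32, 16, (16, 1)" onto kohya/LyCORIS-style
# arguments; names follow common sd-scripts conventions, not the notebook.
network_settings = {
    "network_module": "lycoris.kohya",  # LoCon comes via the LyCORIS module
    "network_dim": 32,        # linear dimension (rank)
    "network_alpha": 16,      # linear alpha; with Prodigy, alpha is forced to dim
    "network_args": ["conv_dim=16", "conv_alpha=1", "algo=locon"],
}
```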

There are more settings further down the notebook, such as multi-folder support with the ability to vary repeats per subset. More on this in the next video.
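As a preview of the idea (illustrative only: folder names and repeat counts are made up, and the notebook exposes this through its own settings rather than a raw config), per-subset repeats look roughly like a kohya-style dataset config:

```python
# Illustrative: per-subset repeat counts rendered as kohya-style config
# entries. Folders and values are hypothetical examples.
subsets = {
    "closeups": 10,     # repeat rarer images more often
    "fullbody": 4,
    "screenshots": 2,
}

base = "/content/drive/MyDrive/Loras/myLora/dataset"
for folder, repeats in subsets.items():
    print("[[datasets.subsets]]")
    print(f"image_dir = '{base}/{folder}'")
    print(f"num_repeats = {repeats}")
    print()
```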

I have left my settings for Multi-res Noise and Min SNR Gamma in the notebook.
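For the curious, the idea behind Min SNR Gamma is a per-timestep loss weight; a sketch of the standard formulation (my illustration of the technique, not the trainer's exact code):

```python
# Min-SNR-gamma weighting (standard epsilon-prediction form): each
# timestep's loss is weighted by min(SNR, gamma) / SNR, so very easy
# (high-SNR) timesteps don't dominate training.
import torch

def min_snr_weights(snr: torch.Tensor, gamma: float = 5.0) -> torch.Tensor:
    """Return per-timestep loss weights given the noise schedule's SNR."""
    return torch.clamp(snr, max=gamma) / snr

snr = torch.tensor([0.1, 1.0, 5.0, 50.0])
print(min_snr_weights(snr))  # high-SNR steps get down-weighted
```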

Unless you know what you are doing, you should not need to change anything.

Questions? Leave me a comment.
