How to Make a LoRA on Colab

Batch crop to 1024x1024 and upscale (I use 4x_NMKD-UltraYandere_300k) under the Extras tab in WebUI (batch from directory), upload to Drive, run through the Dataset Maker (https://colab.research.google.com/github/hollowstrawberry/kohya-colab/blob/main/Dataset_Maker.ipynb), send to the XL Trainer (https://colab.research.google.com/github/hollowstrawberry/kohya-colab/blob/main/Lora_Trainer_XL.ipynb), and about 10 hours later (overall), ta-da!

If you want to make similar LoRAs and have the means to pay for Colab Pro/credits, it's as easy as:

Dataset Maker Settings:

project name - name your project (you can run this step before uploading and it will create the required path on your Drive, or you can create the path and upload the dataset ahead of time using the same path structure)

skip to step 4 - Tag your images

method Anime tags (Photo captions does get you results, but for generation I've found the list style of Anime tags more effective for creative results)

tag threshold 0.25

blacklist tags things you don't want tagged (e.g. loli, child, shota, etc.)

caption min 25

caption max 350

global activation tags I just use the name of the LoRA to keep it simple, or none if I'm training a style

remove tags at the bottom - after running step 4 you can check your most common tags and remove the ones you don't want at this step; sometimes tags you didn't think of, or that are off topic, will show up, and you don't want them (e.g. mosaic censoring, mole under eye, etc.)
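To make the threshold, blacklist, and activation-tag settings concrete, here's a minimal sketch of the kind of post-processing they imply (my own illustration, not the notebook's actual code; the tag scores and the `mylora` activation tag are made up):

```python
# Hypothetical sketch of Dataset Maker-style caption filtering.
# Thresholds and tag lists mirror the settings described above.
THRESHOLD = 0.25                                  # tag threshold
BLACKLIST = {"loli", "child", "shota"}            # blacklist tags
REMOVE_LATER = {"mosaic censoring", "mole under eye"}  # "remove tags" step
ACTIVATION_TAG = "mylora"                         # hypothetical project name

def build_caption(scores: dict[str, float]) -> str:
    """Keep tags above the threshold, drop unwanted ones, prepend the activation tag."""
    kept = [tag for tag, score in sorted(scores.items(), key=lambda kv: -kv[1])
            if score >= THRESHOLD and tag not in BLACKLIST | REMOVE_LATER]
    return ", ".join([ACTIVATION_TAG] + kept)

# Example: low-confidence and unwanted tags are filtered out.
print(build_caption({"1girl": 0.98, "solo": 0.9, "mole under eye": 0.4, "hat": 0.1}))
```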

Train your LoRA - you can click the link in the notebook, but you'll end up needing to go to the XL Trainer from there; otherwise it'll just bring you to Hollowstrawberry's regular SD LoRA trainer

XL Trainer Settings:

Name of your project

Training model: Pony Diffusion V6 XL

load diffusers x

shuffle tags x

activation tags 1 (if you used one in the Dataset Maker; I only use 1 or none)

num repeats 1 for a lot of images, 2 for under 500 images, but usually just 1

epochs 8 (6-8 has treated me really well; if I only have a handful of pictures I may run it as 2700-3200 steps instead, but usually 8 epochs is fine)

save every n epochs 1

keep only last n epochs 1

unet lr 1e-4

text encoder lr 0.5e-2 (or 0 if it's a style without txt files and activation tags)

lr scheduler constant

lr scheduler number doesn't matter because of "constant"

warmup 0.05

min snr gamma 5

lora type LoRA

network dim 22-26

network alpha 11-13 (half of the dim)

conv doesn't matter with the LoRA type, so don't change it

train batch size 6-8

cross attention sdpa

mixed precision bf16 (I connect to A100 because of the Colab Pro thing)

cache latents x

cache latents to drive x

optimizer Prodigy

optimizer args: decouple=True weight_decay=0.01 betas=[0.9,0.999] d_coef=2 use_bias_correction=True safeguard_warmup=True

recommended values for prodigy x (this overrides the optimizer args line above, but if you haven't changed them from these values it makes no difference)
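If you ever want to run kohya's sd-scripts locally instead of through the Colab notebook, the settings above map roughly to a command like this. This is a sketch, not a verified invocation: the paths are placeholders and you should check the exact flag names against `sdxl_train_network.py --help`.

```shell
# Rough local equivalent of the XL Trainer settings above (hypothetical paths).
accelerate launch sdxl_train_network.py \
  --pretrained_model_name_or_path "ponyDiffusionV6XL.safetensors" \
  --train_data_dir "dataset/" --output_name "my_lora" \
  --network_module networks.lora --network_dim 24 --network_alpha 12 \
  --unet_lr 1e-4 --text_encoder_lr 0.5e-2 \
  --lr_scheduler constant --min_snr_gamma 5 \
  --train_batch_size 8 --max_train_epochs 8 --save_every_n_epochs 1 \
  --shuffle_caption --keep_tokens 1 \
  --mixed_precision bf16 --sdpa --cache_latents --cache_latents_to_disk \
  --optimizer_type Prodigy \
  --optimizer_args "decouple=True" "weight_decay=0.01" "betas=[0.9,0.999]" \
    "d_coef=2" "use_bias_correction=True" "safeguard_warmup=True"
```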

Run and download final file from drive
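A quick way to decide between epochs and a fixed step count (my own back-of-envelope math, not something from the notebook): the trainer takes roughly one optimizer step per batch, so a tiny dataset at 8 epochs barely trains at all, which is why I switch to 2700-3200 steps for a handful of pictures.

```python
# Back-of-envelope: optimizer steps per epoch ~= images * num_repeats / batch_size.
# The dataset sizes here are illustrative, not from the guide.
def total_steps(images: int, repeats: int, batch_size: int, epochs: int) -> int:
    steps_per_epoch = (images * repeats) // batch_size
    return steps_per_epoch * epochs

print(total_steps(400, 1, 8, 8))  # a decent dataset at 8 epochs: 400 steps
print(total_steps(30, 1, 8, 8))   # a handful of pictures: only 24 steps,
                                  # far below the 2700-3200 target
```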

There may be other settings that work; most of my LoRAs are done in under 30 minutes. Then I test them, and if I think they'd be good to upload, I upload them.

I should probably figure out how to make money doing this, but for now, enjoy that it's free. And now that you know how to do it, if you don't like what I upload, you can make something you do like.
