
Large Dataset LoRA Tips and Tricks

Are you one of us nerds who puts AT LEAST 300 images in a folder and wonders why training takes SO LONG?

First off, I'm not here to supervise your LEARNING rates or schedulers; those are for the big kids to explain - I'm still learning those.

THIS IS MERELY about how, on LoRA training GUIs and notebooks with a rough 5e-4 Unet learning rate, you can mitigate LONG training times and STILL get a reasonable quality LoRA without compromising learning capabilities.

Ok so what's the deal?

Usually MOST anime-based tips and tricks for LoRA training recommend a 5e-4 Unet learning rate and a 1e-4 text encoder learning rate. This gives STRONG learning and very good quality LoRAs - now, there ARE CLEARLY MANY WAYS OF DOING THE LEARNING RATES!

However, if you're stuck like me on sticking to exact numbers because MATH IS HARD, let me help you!
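Quick sanity check on the notation, since it trips people up: 5e-4 and 1e-4 are just scientific notation for plain decimals. A tiny Python sketch (nothing trainer-specific, just arithmetic):

```python
# Scientific notation vs. plain decimals for the rates mentioned above.
unet_lr = 5e-4          # same as 0.0005
text_encoder_lr = 1e-4  # same as 0.0001

print(unet_lr == 0.0005)          # True
print(text_encoder_lr == 0.0001)  # True
```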

While you CAN get away with a dataset of 50-100 images OR LESS, get a DAMN good LoRA, and still beat up the competition - some of us just have too much data and too many ideas.

Dataset Size Comparison

Here's a small guide to batch size and epochs by dataset size - I'm not giving exact step counts, since 90% of kohya-based LoRA scripts use bucketed training anyway.
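If you still want a rough feel for where your step count will land, here's a small Python sketch of how kohya-style scripts count steps as far as I understand it (ignoring gradient accumulation and bucket rounding, so treat it as an estimate, not gospel):

```python
import math

def estimate_steps(num_images: int, repeats: int, epochs: int, batch_size: int) -> int:
    """Rough kohya-style step count: each image is seen `repeats` times per epoch."""
    steps_per_epoch = math.ceil(num_images * repeats / batch_size)
    return steps_per_epoch * epochs

# Quick example: 200 images, 10 repeats, 8 epochs, batch size 3 (all example numbers)
print(estimate_steps(num_images=200, repeats=10, epochs=8, batch_size=3))  # 5336
```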

50-100

This WON'T break your Colab or rental timing, but if you're looking to shorten things, pull your batch size up to about 3 and your epochs to no less than 7 but no more than 10.
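For example - assuming 10 repeats, which is just my placeholder number, use whatever your notebook defaults to - a 100-image set at batch size 3 and 8 epochs lands around here:

```python
import math

# 100 images, 10 repeats (assumed), batch size 3, 8 epochs
steps_per_epoch = math.ceil(100 * 10 / 3)   # 334
total_steps = steps_per_epoch * 8           # 2672
print(total_steps)
```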

Remember: we're going off a specific learning rate that I've been taught. It doesn't mean I'm right - it's a preference.

100-300

SIGHS. Here's where it gets funny - if you leave it at a batch size of 2-3 and 10 epochs and you use Holo's notebook or other scripts, IT MAY yell that you have too many steps. This is less of a problem under 300 images - but when you get closer to 300 and leave everything at default, it gets really slow.

Drop your epochs to a MAXIMUM of 8.

Drag your batch size up to 3. You can GET AWAY with 4 - and if you KNOW how to manage loss and learning quality, you can jump it a bit more, but not by a lot.

I wouldn't do fewer than 5-6 epochs.
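To see why the defaults feel slow near the top of this range, here's a rough comparison for ~300 images, again assuming 10 repeats (swap in your own numbers):

```python
import math

images, repeats = 300, 10  # repeats are an assumption, adjust to your notebook

# Default-ish: batch size 2, 10 epochs
default_steps = math.ceil(images * repeats / 2) * 10   # 15000 steps

# Adjusted: batch size 4, 8 epochs
adjusted_steps = math.ceil(images * repeats / 4) * 8   # 6000 steps

print(default_steps, adjusted_steps)
```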

300-500

I've never tested BEYOND about 400-500 images, so past that it's guesswork. I'm currently doing one with about 350-400 pics.

I dropped it THIS time to about 5 epochs (I'm conserving Colab credits until I can figure out how to get Vast and/or RunPod to work properly with bmaltais' or Derrian's GUI).

My batch size on this? I cranked it to 4, because it doesn't really matter if the style loses a little bit - it's for a LoRA binding test later on.
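Running that through the same step math - 375 pics, 5 epochs, batch size 4, and let's assume 8 repeats purely for illustration:

```python
import math

images, repeats, epochs, batch = 375, 8, 5, 4  # repeats are assumed for illustration
total_steps = math.ceil(images * repeats / batch) * epochs
print(total_steps)  # 3750
```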

COLAB USERS:

If you're training on FREE Colab, this guide is likely FAR more important, because even on Colab Pro you can get disconnects before the 5-hour mark.

BE WARNED: usually around the 1.5-2 hour mark Colab gets REALLY DUMB and throws a script error for no apparent reason. As far as we're aware this makes NO difference to training, and it usually doesn't disconnect (it'd be nice if it did, so you could fix your issues and not waste credits).
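If you want a rough sense of whether a run will fit inside a Colab session, multiply your estimated steps by your seconds-per-step (read the real number off the progress bar once training starts) - the per-step time below is purely an assumption:

```python
# Rough wall-clock estimate - seconds per step varies a LOT by GPU, resolution and settings.
total_steps = 6000        # from your own step estimate
seconds_per_step = 1.5    # ASSUMED number, read the real one off the training progress bar

hours = total_steps * seconds_per_step / 3600
print(f"~{hours:.1f} hours")   # ~2.5 hours - under a ~5 hour Colab window
```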

500 and Beyond.

YOU MEME NUTBALLS.

Yeah, I know one of you :P - we talked recently in the model sharing area. Mitigating a 6-8 hour training run on Colab?

Drop your epochs to AT MOST 5.

If your style doesn't require really THICC, strong effects? Batch size of 5-6 maximum.
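Plugging that into the same step math (600 images and 8 repeats below are just example numbers, not a recommendation):

```python
import math

images, repeats, epochs, batch = 600, 8, 5, 6  # images/repeats are example assumptions
total_steps = math.ceil(images * repeats / batch) * epochs
print(total_steps)  # 4000
```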

Frankly, at this point you'd want to LITERALLY research your schedulers and your learning rate, because you're only going to fix timing SO much with a larger batch size.
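One rule of thumb you'll see floating around when people bump the batch size - NOT part of this guide's recipe, so research it before trusting it - is scaling the learning rate roughly in proportion to the batch size:

```python
# Linear LR scaling rule of thumb - a starting point to research, not a recommendation.
base_lr = 5e-4
base_batch = 2
new_batch = 6

scaled_lr = base_lr * (new_batch / base_batch)
print(scaled_lr)  # ~0.0015
```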

To be QUITE FRANK: do not do large datasets on Colab unless you ARE AWARE of what you're doing, LOL.

UPDATE:

For 500-1000 images you can STILL do 5e-4. UNLESS you know your math and settings, do not fudge your Unet/text encoder settings. If you're on Holo's Colab, finagle your repeats - 5-8 is usually fine if you have 800+ images. Finagle your batch size to at MOST 4 (don't go beyond that).
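Same math for the 800+ case with that repeats range (the epoch count here is an assumption, use your own):

```python
import math

images, batch, epochs = 800, 4, 5  # epochs assumed for illustration
for repeats in (5, 8):
    steps = math.ceil(images * repeats / batch) * epochs
    print(repeats, steps)  # 5 -> 5000 steps, 8 -> 8000 steps
```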

Added:

Successfully did over 1000 images - it WORKS and it's solid, but it's a little weak because of fewer repeats. I'd recommend pushing your steps up as much as possible. I hadn't trained the text encoder, which didn't help, because the LoRA just wasn't strong enough to retain its style as much. This is OK though, it's all about preference!
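If you want to go the other way - start from a step budget and back out the repeats - the same arithmetic works in reverse (the target below is an arbitrary assumed number, not a recommendation):

```python
import math

images, epochs, batch = 1000, 5, 4
target_steps = 8000  # ASSUMED budget - pick whatever your time/credits allow

repeats = target_steps * batch / (images * epochs)
print(math.floor(repeats))  # 6 -> about 6 repeats gets you near that budget
```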

Civit Trainer:

Batch size 4 is your baseline minimum, and we've noticed that if you upload a ZIP file, it can handle a TON of data (your browser will hate you). Try not to fudge the numbers too much - honestly, for ME it's sometimes more about your repeats + steps + epochs than anything else.

SM Crystal was 800+ images and we did 7 epochs, 10 repeats I think? I don't know anymore.

But we did it on Colab and f*cked up our numbers, and it came out like utter crap. XD - So even the most seasoned of nerds still have things to learn!

Disclaimer

I am not a guide-level LoRA trainer; these are tricks I've picked up to work around my own issues and to give you a rough clue of what you're doing.

As we go, we're able to help guide people through Colab training notebooks if you want to get into making your own LoRAs.

We do suggest that, in time, people learn bmaltais' or Derrian Distro's GUI scripts, because there is so much more to do beyond the basics.

That doesn't mean our way of doing things is always perfect.

If you have ANY tips or tricks you'd like to add feel free!

How to Support Us:

LORA/Lycoris Request Form:

https://forms.gle/hjdCsUMg1NmPkCCp9

Join our Reddit: https://www.reddit.com/r/earthndusk/

If you have requests or concerns - we're still looking for beta testers: JOIN THE DISCORD AND DEMAND THINGS OF US: https://discord.gg/5t2kYxt7An

Listen to the music that we've made that goes with our art: https://open.spotify.com/playlist/00R8x00YktB4u541imdSSf?si=b60d209385a74b38

We stream a lot of our testing on twitch: https://www.twitch.tv/duskfallcrew

EXCLUSIVE MODEL PRE RELEASES, AND OTHER GOODIES: https://ko-fi.com/DUSKFALLcrew
