This is a huge 384-dim LoRA, trained as an experiment with an adaptive learning rate optimizer (DAdaptAdam).
Note: I tried resizing this LoRA to lower dims using the kohya_ss tools, but the quality impact was too subjective to call it a win (some images degrade, others improve). A rough sketch of the resize call is below.
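If you want to try the resize yourself, it looks roughly like this. This is only a sketch assuming the kohya_ss sd-scripts repo with its networks/resize_lora.py script; the file names and target rank are placeholders, and flag names can differ between versions.

```python
# Sketch: shrink a LoRA to a lower rank with kohya_ss sd-scripts.
# Assumes sd-scripts is checked out with its dependencies installed;
# file names and the target rank are placeholders.
import subprocess

subprocess.run(
    [
        "python", "networks/resize_lora.py",
        "--model", "urushisato_dim384.safetensors",    # hypothetical input LoRA
        "--save_to", "urushisato_dim128.safetensors",  # hypothetical resized output
        "--new_rank", "128",                           # target dim to shrink to
        "--device", "cuda",
    ],
    check=True,
)
```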
training settings (a rough equivalent kohya_ss command is sketched after this list)
3137 training images
512x512 training resolution
16 epochs, 6,384 steps, batch size 8
network dim & alpha 384
DAdaptAdam optimizer, constant scheduler
learning rate 1.0
bucketing & random crop
useful optional tokens
urushisato: tagged on all images, works as a general style boost
ova: tagged on all screencaps taken from the animation
background: tagged on any image with no human subject
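For example, putting urushisato, ova near the front of a prompt should lean toward the screencap look, while urushisato, background leans toward pure scenery; these are just illustrative combinations, not tested recipes.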