Hey! I was finally able to train a LoRA that isn't undersized! Thanks to a recent kohya update, I can now use fp16 on Colab instead of being stuck with bf16. It worked, so now I can do the same thing I did with NoobAI. The LoRA file is still small because I kept my usual settings of alpha 8 and dim 16. It might be slightly over-trained IMO, since 2510 steps is a lot.
It's a LoRA of zackary911's style.
trigger word: @zackary911
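For anyone curious, the settings above roughly correspond to a kohya sd-scripts run like the sketch below. This is just a hedged example, not my exact command: it assumes `train_network.py` from kohya-ss/sd-scripts, and the paths and base model are placeholders you'd swap for your own.

```shell
# Sketch of a kohya-ss/sd-scripts LoRA run matching the settings mentioned:
# dim 16, alpha 8, fp16 mixed precision, 2510 steps.
# Paths and the base model are placeholders, not real values.
accelerate launch train_network.py \
  --pretrained_model_name_or_path="/path/to/base_model.safetensors" \
  --train_data_dir="/path/to/dataset" \
  --output_dir="/path/to/output" \
  --network_module=networks.lora \
  --network_dim=16 \
  --network_alpha=8 \
  --mixed_precision="fp16" \
  --save_precision="fp16" \
  --max_train_steps=2510
```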


