"ALX LoRA" is the story-showcase of my journey on training a Stable Diffusion model for my own character. There will be a lot of trial and error.
For the record, I have only a general overview of what Stable Diffusion does and how certain settings affect the outcome, picked up from tutorials and my own prompting experience.
For prompting, I use Draw Things on an M1 MacBook Pro.
For the first attempt, I ended up with a model that was heavily overtrained. I can't remember the exact settings, but I set up an SD 1.5 realistic training run with around 50-60 steps per image, paired with bad prompts.
In the beginning, I didn't know why the pictures looked the way they did, a.k.a. deep-fried. It turned out to be the high number of training steps combined with a large learning rate (LR).
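To see how quickly those settings blow up, here is a minimal sketch of the step arithmetic, assuming a kohya-style trainer where "steps per image" means repeats per epoch. The dataset size of 20 images is an illustrative guess, not the actual number from this run.

```python
# Rough step arithmetic for a LoRA training run (dataset size is a guess).

def total_steps(num_images: int, repeats_per_image: int, epochs: int, batch_size: int = 1) -> int:
    """Total optimizer steps seen over the whole run."""
    return num_images * repeats_per_image * epochs // batch_size

steps = total_steps(num_images=20, repeats_per_image=55, epochs=16)
print(steps)  # 17600 -- far beyond the few thousand steps commonly quoted for a face LoRA
```

At that step count, a large LR doesn't just converge the LoRA, it burns it in, which is exactly the deep-fried look described above.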
For prompting, I used the epiCRealism model with the LoRA of my own face loaded on top.
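I do this in Draw Things, but for readers who script it, a rough equivalent with Hugging Face diffusers might look like the sketch below. The file paths are placeholders, not the actual checkpoints I used.

```python
# A rough diffusers equivalent of the Draw Things setup (file paths are placeholders).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "epicrealism.safetensors",   # base SD 1.5 checkpoint (placeholder path)
    torch_dtype=torch.float16,
).to("mps")                      # "mps" for Apple Silicon, "cuda" elsewhere

# Apply the face LoRA on top of the base model.
pipe.load_lora_weights("alx_lora.safetensors")  # placeholder path

image = pipe("portrait photo of alx person, natural light").images[0]
image.save("out.png")
```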
"We don't make mistakes, just happy little accidents." — Bob Ross
True to that quote, I accidentally downscaled one of the images, and the app automatically in-painted the missing canvas.
One of the photos used for training had some artifacts on the cheeks, and it seems they transferred over, even with negative prompts. The deep-frying is also heavier on this one. I think I used epoch 14 of 16, even though the overtraining had already set in by epoch 3.
I saw that the eyes looked awful in most of the results. Later on, I found out that this can be fixed using the epiCNegative Helpers or by enabling a VAE.
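Both fixes map directly onto pipeline settings: a negative prompt (or a negative embedding like epiCNegative) and an external VAE for decoding. Continuing the diffusers sketch above, it might look like this; sd-vae-ft-mse is a widely used community VAE for SD 1.5, not necessarily the one Draw Things enables.

```python
# Continuing the sketch above: swap in a fine-tuned VAE and add a negative prompt.
import torch
from diffusers import AutoencoderKL

# stabilityai/sd-vae-ft-mse is a common SD 1.5 VAE choice (assumption, not the app's default).
pipe.vae = AutoencoderKL.from_pretrained(
    "stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16
).to("mps")

image = pipe(
    "portrait photo of alx person, natural light",
    negative_prompt="deformed eyes, blurry, lowres, artifacts",
).images[0]
image.save("out_fixed.png")
```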
These are the best results.