
Abe - v1.0 Showcase


I learn more about creating LoRAs with each one I make.

Lack of closeups

Abe taught me how the presence (or absence) of specific kinds of training data limits a LoRA's abilities. For example, the dataset had no closeup examples, so the model struggles to generate closeups.

Distilling a LoRA

When you train a LoRA, you can have the process periodically save checkpoints. The workflow I use saves 10. I've struggled to choose the best one: the first checkpoint is usually very limited, while the last is "overfitted" in that it adheres so closely to the training data that it can bleed through when the LoRA is used with other base models.

I've started thinking of the checkpoints as stages of a distillation process. The first checkpoint doesn't carry much of the "essence" of what we've distilled, while the last has a much higher concentration.

The first checkpoint can be mixed with other LoRAs and base models pretty easily, without the "essence" or flavor dominating the overall composition. The last checkpoint is so concentrated that it can overpower and bleed into the final result.
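The "concentration" idea maps loosely onto how a LoRA is applied at inference: the low-rank update is added to the base weights scaled by a strength factor, so a late, heavily trained checkpoint at low strength behaves a bit like an earlier checkpoint at full strength. A minimal numpy sketch of that merge (the shapes and names here are illustrative, not taken from any real model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical base-model weight matrix and a rank-4 LoRA update.
base = rng.standard_normal((16, 16))
lora_A = rng.standard_normal((16, 4))  # "down" projection learned by the LoRA
lora_B = rng.standard_normal((4, 16))  # "up" projection learned by the LoRA

def apply_lora(base, A, B, strength):
    """Merge a LoRA into base weights: W' = W + strength * (A @ B)."""
    return base + strength * (A @ B)

# Full strength: the checkpoint's "essence" at full concentration.
merged_full = apply_lora(base, lora_A, lora_B, 1.0)
# Diluted: the same checkpoint pulled back toward the base model.
merged_weak = apply_lora(base, lora_A, lora_B, 0.3)

# Lower strength keeps the merged weights closer to the base model.
assert np.linalg.norm(merged_weak - base) < np.linalg.norm(merged_full - base)
```

This is the same knob most generation UIs expose as the LoRA weight/strength slider, which is one way to tame an overconcentrated checkpoint instead of stepping back to an earlier one.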

For this LoRA, I chose the seventh checkpoint: it keeps a bit of flexibility while retaining a strong likeness.