| Stats | 3,925 · 3,315 |
| Reviews | 332 |
| Published | Sep 16, 2024 |
| Trigger Words | woman |
| Hash | AutoV2 5B7919B792 |
Important: This Lora should be used with a NEGATIVE weight.
Also: The model is aimed at diversifying the look of women only. I plan to do a separate model for men later (especially men with beards).
Also, a cool fact: the Lora is only 4 MB, because it is limited to 2 blocks trained at a high rank.
So, What's a "Sameface"?
It's a common problem in generative AI, where the model generates people with near-identical appearances. Unlike in SD1.5/SDXL, this is not just the result of a diverse dataset converging on a single token like "woman". In Flux, Dev/Schnell were distilled from the Pro version, and that distillation cost diversity.
Why use Negative weight?
The Lora was trained on the most generic photos of women that Flux generates.
If you then apply this Lora with a NEGATIVE weight, it acts as a sort of negative prompt, but without needing an actual negative prompt!
With this trick, the Lora essentially tells Flux to generate anything except the typical Flux face. As a result, we get distinctive, unique faces every time!
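The negative-weight trick can be illustrated with the underlying LoRA arithmetic. This is a minimal numpy sketch, not the actual model weights: a LoRA adds a low-rank update `B @ A` to a frozen weight `W`, scaled by the user-chosen strength `s`, so a negative `s` pushes the weights *away* from the trained "generic face" direction instead of toward it.

```python
import numpy as np

# Toy dimensions for illustration only; real Flux layers are much larger.
rng = np.random.default_rng(0)
d, rank = 8, 2

W = rng.normal(size=(d, d))     # frozen base weight
A = rng.normal(size=(rank, d))  # LoRA down-projection
B = rng.normal(size=(d, rank))  # LoRA up-projection
delta = B @ A                   # low-rank direction learned from generic faces

def effective_weight(W, delta, scale):
    # W_eff = W + scale * (B @ A); scale is the Lora weight you set in the UI
    return W + scale * delta

W_pos = effective_weight(W, delta, +0.7)  # pulls outputs toward the trained look
W_neg = effective_weight(W, delta, -0.7)  # pushes outputs away from it

# The two settings move W in exactly opposite directions:
assert np.allclose(W_pos - W, -(W_neg - W))
```

The same mechanism explains why overly large negative values cause artifacts: the update keeps moving the weights along one direction, well past the region the base model was trained for.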
Why Lora and not tricks like lower Guidance?
People want diverse faces while keeping guidance high, in order to maintain prompt adherence and image quality; that is why intervening at the latent-space level, rather than lowering guidance, is the better option.
How does it differ from other character Loras?
While other Loras introduce their own "samefaces", this Lora only tells Flux what should NOT be in the generation, not what to add. As a result, it diversifies people's appearance rather than narrowing it even further.
How was the Lora actually made?
At first I was disappointed because nothing worked, and I was about to give up on the project. But then I tried training the Lora on just 2 layers, and suddenly, during testing, I was met with beautiful faces with an authentic look, far removed from the stock Flux appearance.
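For readers who want to reproduce the "2 layers only" approach, here is a hedged sketch of what such a restriction can look like in an AI-Toolkit training config. The exact keys and especially the block indices below are assumptions for illustration, not the settings actually used for this model; check the AI-Toolkit documentation for the current config schema.

```yaml
# Hypothetical AI-Toolkit network config: train a high-rank LoRA,
# but only on a small subset of transformer blocks.
network:
  type: "lora"
  linear: 64          # high rank keeps capacity despite few layers
  linear_alpha: 64
  network_kwargs:
    only_if_contains:              # restrict which modules get LoRA weights
      - "transformer_blocks.7"     # block indices are placeholders, not the
      - "transformer_blocks.20"    # ones used for this Lora
```

Limiting the LoRA to a couple of blocks is also what keeps the file around 4 MB instead of hundreds of megabytes.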
What's next?
I have a lot to experiment with. Different learning rates, layers, and ranks need to be tested to improve quality. Merging different Lora variants might fix the artifacts that appear at higher weights. I'll make sure to release a better version if there is enough interest in this Lora.
I'm really looking forward to your feedback, as this model is highly experimental. Any feedback is appreciated and will hopefully help me improve the model.
Special thanks to Ostris for AI-Toolkit.