
SDXL - Majestic Plastic Bag

Updated: Feb 1, 2024
Tags: concept, plastic bag
Verified: SafeTensor
Type: LoRA
Published: Jan 31, 2024
Base Model: SDXL 1.0
Trigger Words: drmbgg, plastic bag
Hash (AutoV2): 30E178E934
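
If you want to use this outside the on-site generator, here is a minimal sketch of loading the LoRA on top of SDXL 1.0 with diffusers and prompting with the trigger words. The local filename is just a placeholder for whatever you named the downloaded SafeTensor file.

```python
# Minimal sketch: SDXL 1.0 base + this LoRA via diffusers.
# "majestic_plastic_bag.safetensors" is a placeholder filename.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("majestic_plastic_bag.safetensors")

# Trigger words from the listing above: "drmbgg" and "plastic bag".
image = pipe(
    prompt="drmbgg, a plastic bag floating in the wind, overcast day",
    num_inference_steps=30,
).images[0]
image.save("plastic_bag.png")
```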

A homage to the real star of American Beauty. I already love how the automatic tagging system on this site has trouble figuring out wtf is going on.

Wall of text ahead, no tl;dr - be warned.

First, this was just a test of what happens during training: what kind of dataset is needed, picture quality (in this case: terrible), step count, what makes a model flexible or rigid, etc.

After some time, feeding in the output I got from the base version, it got better and better, until I was left with a model that produces the perfect floating plastic bag... unacceptable.

20 images - same background - low-quality captures - 650 steps = super rigid, but also pretty close to the training data. It will add a brick wall, leaves and a concrete floor to everything, even when not prompted to. Interestingly, the car model/age it produces could fit the time period the movie was made in. I'm not familiar with everyday old American cars, though.

20 images + 10 higher-quality AI-generated pictures - 1250 steps = more flexible, but it was starting to stray from the intended idea. Still biased towards brick-walled street scenes on an overcast day.

Blabla... this went on for quite a while.

One of the reasons I don't train a model with pictures that actually show what I would probably prompt for later is that it has the potential to kill the randomness. If I had trained my last model "Armored" with just one single picture that actually shows "a heavily armored cat", it would pretty much always gravitate towards that picture whenever you wanted an armored cat, making everything a lot less flexible. Then again, the AI not knowing exactly what you want can also be beneficial in some cases.

I saw that in the first model I posted here (Square Hammer). Only one picture had a moon in it, one had bats... no caption was given for those pictures that would actually say "picture of a moon" or "picture of bats", yet the model absolutely loves to add those things. A lot of that can be negated by putting those words into the negative prompt, but it's somewhat inconvenient to have to do that.
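
As a sketch of what that looks like in practice (reusing the `pipe` object from the loading example above; the prompt itself is a hypothetical placeholder, since the Square Hammer trigger words aren't listed here):

```python
# Suppressing concepts a LoRA over-learned from its dataset
# (moons, bats) by putting them in the negative prompt.
image = pipe(
    prompt="your subject here",      # hypothetical placeholder prompt
    negative_prompt="moon, bats",    # keep the dataset's stowaways out
    num_inference_steps=30,
).images[0]
```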

Say you train a model on a concept like "xxx that looks like a jellybean" and add just one picture of a woman with a jellybean-like appearance to a dataset of only 15 images: 14 jellybean-looking pictures of something the AI doesn't know what to do with, and 1 picture of a very well-known thing, like a woman. The model will end up super biased towards pictures of a woman. Quantity and variety of the pictures do matter. So does the number of steps; more pictures usually means more steps.
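
Just as a rough back-of-the-envelope view of the two runs described earlier - nothing more than total steps divided by image count:

```python
# Rough arithmetic from the runs above: how many optimization steps
# each training image effectively gets on average.
experiments = {
    "20 low-quality captures, 650 steps": (20, 650),
    "20 captures + 10 AI pictures, 1250 steps": (30, 1250),
}
for name, (images, steps) in experiments.items():
    print(f"{name}: {steps / images:.1f} steps per image")
# -> 32.5 and 41.7 steps per image, respectively
```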

The loss rate is another thing. There are so many opinions on whether it's important or not. I would say it is, but again, it depends on what you want. Purely from my own observation (see the sketch after this list):

  • 15 pictures, good captions, low loss rate (maybe around 0.144) - just a few steps needed and everything is set, but also super rigid even after just 600 steps.

  • 15 pictures, no captions, a concept known to the model, medium loss rate (around 0.444) - maybe 2000 steps needed, more flexible but tends to stray away from the concept.

  • 15 pictures, no captions, a concept the model has absolutely no idea about, high loss rate (around 0.945) - ludicrous amounts of steps needed, or it doesn't even matter how many; success unlikely... but the training will still pick something up. Maybe just a faint little thing that leads to something you didn't expect.
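
Expressed as a crude rule of thumb (the thresholds below are just the loss values from my runs above, not universal constants):

```python
# Crude rule of thumb distilled from the three cases above; the loss
# thresholds come from those observations and are not universal.
def judge_run(smoothed_loss: float) -> str:
    if smoothed_loss < 0.2:   # e.g. ~0.144: good captions, learns fast
        return "set after a few hundred steps, but super rigid"
    if smoothed_loss < 0.6:   # e.g. ~0.444: no captions, known concept
        return "more flexible, needs ~2000 steps, may drift off-concept"
    return "unknown concept, success unlikely at any step count"  # e.g. ~0.945

for loss in (0.144, 0.444, 0.945):
    print(f"{loss:.3f} -> {judge_run(loss)}")
```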

I have to say, though, that I rely on SDXL's ability to interpret some things on its own. Unexpected results are, in my opinion, the best thing. Of course, if you're really going for something specific, like a character or an idea that has to be exactly the way you want it, that's a different story and most of the time easier to do.

A lot of this is based on speculation and observation; there are so many factors that can make or break something. If you research a bit, you'll find that even the people who made the AI model and everything around it don't fully know how their own creation works or what it can really do (talking about the whole construct of AI, not a specific model you can download).

Well, enjoy your randomly floating plastic bag.