Type | Embedding (Textual Inversion)
Stats | 704
Reviews | 77
Published | May 26, 2023
Base Model | SD 1.5
Training | Steps: 8,000
Trigger Words | mrblng02-8000
Hash | AutoV2 0A358797CB
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Update: 26 May 2023
Continued the TI training. Here’s version 2.80 (i.e. 8000 steps).
Depending on where you put mrblng02-8000 in your prompt, the effect can be overwhelming or underwhelming. Despite the training images all being landscapes, this version has become better at objects/still lifes.
I had to come up with some new prompting to get the kinds of looks I wanted. Mostly I’m using the TI to add detail/pattern/texture of a type I like, and at 8,000 steps I feel it can apply balanced patterns to the objects in an image, as well as give the image a patterned overall structure.
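Purely as an illustration - these aren’t the showcase prompts, and “earlier tokens count for more” is a general Stable Diffusion rule of thumb rather than something specific to this TI - the placement difference looks something like this:

```
mrblng02-8000, a mountain valley at dawn, winding river, mist
  (trigger word up front: stronger marbled structure, the pattern tends to drive the composition)

a mountain valley at dawn, winding river, mist, highly detailed, mrblng02-8000
  (trigger word at the end: usually subtler, the marbling shows up more as texture and detail)
```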
It’s a strange journey with Stable Diffusion. I thought you had to use AND (composable diffusion) to make TIs and LoRAs influence structure, but I’m sometimes seeing it without the hassle of AND... in things like the placement of trees or feathers or whatever, as well as the actual patterns on those things.
The journey continues: I’ve trained this TI up to 12,000 steps, but haven’t released that version because I’m struggling to get usable prompts from it. Either I get pretty marbled-paper-pattern squares, or no apparent effect at all. It may be that 8,000 steps is the limit for the 86 landscape training images I’ve used. I still intend to add objects/still lifes to the training set and then retrain the TI to see what difference that makes.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
This is a TI embedding that puts marbled-paper patterns into your image generations. The results tend to be illustrative rather than photorealistic - I assume that’s because the source material is illustrative.
Wikipedia Paper Marbling:
https://en.wikipedia.org/wiki/Paper_marbling
When I was young I used to have great fun making marbled paper out of all kinds of stuff. I’d let it dry and then draw in extra lines to bring out stuff that I could almost see. Like when you can imagine scenes in clouds... stalking tigers or leaping fish... or spaceships!
I wondered if Stable Diffusion could do something similar so I started trying to make this TI months ago... but gave up due to ignorance and frustration.
Then konyconi made the MarblingAI LoRA and I got enthusiastic again:
https://civitai.com/models/55080/marblingai
Thanks to @konyconi for all the marvellous LoRAs!
This TI is marbling, but different from konyconi’s model.
I’ve uploaded two TIs, from step 1500 and step 4000 of the training:
mrblng02-1500
mrblng02-4000
They’re v2.15 and v2.40. (Version 1 was too inconsistent to release.)
V2.15 is quite simple and punchy. V2.40 is often subtler, but can produce more detailed pictures. I was expecting the marbling to get overwhelming the higher I went with the step-count, but that’s not what happened. Go figure.
The showcases are landscapes, since I was trying to get a particular style, but I’ve added a subsidiary gallery for each version to show that at least some objects are possible.
This TI is a bit niche, but I hope the showcase images will spark interest for somebody.
Training was mostly on the base Stable Diffusion v1-5-pruned.ckpt [e1441589a6], but also with avalonTruvision_v31.safetensors [f17ac2a0b7]. I think. Lack of sleep made me lose track a bit.
Most of the showcase generations were done with Avalon TRUvision. This is my go-to model for just about everything. It is excellent at photoreal people, but it is so much richer than that. Highly recommended, and thanks to @avalon for producing it.
https://civitai.com/models/13020
Some models need a higher weighting: with ReV Animated, for example, I sometimes had to use (mrblng02-1500:1.1) or even higher when the effect was too subtle. Thanks to @s6yx, by the way - ReV Animated is great to play with for all manner of stuff.
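In AUTOMATIC1111-style UIs you just drop the embedding file into the embeddings folder and use the trigger word (with the (mrblng02-1500:1.1) syntax for weighting). If you use the diffusers library instead, a minimal sketch - assuming a reasonably recent diffusers version, the embedding file downloaded locally, and an SD 1.5-based checkpoint; the paths and model name below are placeholders - would look something like this:

```python
# Minimal sketch: use the embedding with the diffusers library.
# Assumes a recent diffusers install; paths and model name are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",      # or any SD 1.5-based checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Load the textual inversion embedding and bind it to its trigger word.
pipe.load_textual_inversion("./mrblng02-4000.pt", token="mrblng02-4000")

prompt = "mrblng02-4000, rolling hills under a stormy sky, highly detailed"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.0).images[0]
image.save("marbled_landscape.png")
```

Note that the (mrblng02-1500:1.1) weighting syntax is an AUTOMATIC1111 feature; plain diffusers won’t parse it, so you’d need a prompt-weighting helper such as compel to get the same effect there.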
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
The training here was a two-step process (rough code sketches of both steps follow the list):
(1) I found/made some hi-res scans of pre-20th-century marbled endpapers and sampled 512px squares of the different types of shapes. I trained a few TIs on those. That gave me a TI that mostly just wanted to draw more marbled patterns, but they looked authentic.
(2) I then used that TI to generate hundreds of images with txt2img and img2img, across a wide range of prompts. I discarded all the pure patterns, selected the good/interesting pictures from the rest, and trained a new TI on those synthetic images.
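For anyone wanting to reproduce something like step (1), a rough sketch of the tile-cutting part might look like the following. The folder names and the fixed grid are assumptions - I sampled the squares by the types of shapes they contained, so a grid like this would still need manual curation afterwards:

```python
# Rough sketch of step (1): cut 512px candidate tiles out of hi-res scans.
# Folder names, file format and stride are assumptions, not exact settings.
from pathlib import Path
from PIL import Image

SRC = Path("scans")            # hi-res scans of marbled endpapers
DST = Path("tiles")            # 512px squares to curate for TI training
DST.mkdir(exist_ok=True)
TILE, STRIDE = 512, 384        # overlapping tiles catch more shape variety

for scan in sorted(SRC.glob("*.png")):
    img = Image.open(scan).convert("RGB")
    w, h = img.size
    for top in range(0, h - TILE + 1, STRIDE):
        for left in range(0, w - TILE + 1, STRIDE):
            tile = img.crop((left, top, left + TILE, top + TILE))
            tile.save(DST / f"{scan.stem}_{top}_{left}.png")
```

Step (2) is essentially batch generation plus manual curation. A hedged sketch with the diffusers library - the intermediate TI name, the prompt list and all settings are made up for illustration:

```python
# Rough sketch of step (2): batch-generate synthetic candidates with the
# intermediate TI, then pick the good ones by hand. Names/settings are made up.
import torch
from pathlib import Path
from diffusers import StableDiffusionPipeline

OUT = Path("candidates")
OUT.mkdir(exist_ok=True)

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("./mrblng01.pt", token="mrblng01")   # hypothetical step-1 TI

prompts = [
    "mrblng01, mountain range at sunset, dramatic clouds",
    "mrblng01, river delta seen from above, winding channels",
    "mrblng01, forest clearing in morning mist",
]

for p_idx, prompt in enumerate(prompts):
    for seed in range(20):     # many seeds per prompt; keep only the good ones
        g = torch.Generator("cuda").manual_seed(seed)
        image = pipe(prompt, generator=g, num_inference_steps=30).images[0]
        image.save(OUT / f"p{p_idx:02d}_s{seed:03d}.png")
```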
I’ve only used landscape/terrain images in step (2). That means that this MarblingTI “likes” to produce landscapes rather than objects. It will often paint landscapes onto objects rather than just use marbling patterns. It’s a feature, not a bug :-)
However, much to my surprise (as with every bleeping thing in Stable Diffusion), it can do some objects quite well - and a lot of objects very badly... in the course of training, it seems to have “forgotten” a bunch of words.
I’m intending to create a set of object-related images using step (1) above, then see what happens in step (2). Might take a while.
Hopefully I can then combine both datasets to train a 3rd TI that is “happy” with both landscapes and objects. We’ll see. Got to find the time to do all this!