Stats: 4,048
Published: Jan 7, 2023
Trigger Words: rpgicondiff
Hash: AutoV2 8F016ED2C5
Overall info:
It was made specifically for generating hand-painted object icons (but I did some research and testing, and you can get a whole variety of really interesting stylized images from this model if you know what you are doing - like landscapes, interiors, or even portraits).
Note that the model might be a bit overtrained - that means there is a small tradeoff of style over the objects in your prompt (but if you put a proper amount of work into your generations, you should be able to achieve satisfying and consistent results).
Sample results are mostly cherry-picked (but none of them were modified or improved in any way).
Good results might require more work than just typing a prompt. (VERY IMPORTANT - LOOK BELOW↓)
Tips 'n' Tricks for using it:
The best balance between prompt, style, and detail can be achieved with a bit of "prompt editing" and experimentation. Just keep the trigger word in this syntax: [rpgicondiff:4], where the number can be higher or lower depending on a variety of parameters. In short, the higher the number, the less style you get, but the object will probably be more accurate.
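For example (a hypothetical prompt of my own - this assumes the AUTOMATIC1111 WebUI, where the prompt-editing syntax [word:N] adds the word to the prompt only after step N):

a hand-painted icon of a health potion, ornate glass bottle, [rpgicondiff:4]

At 20 steps, the trigger word stays inactive for the first 4 steps (so the object's composition forms first) and then shapes the style for the remaining 16.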
I do not recommend using just the raw trigger word because (due to the slight overtraining) it can make the generations too similar to the model's fine-tuning images, so they will not reflect the objects from your prompt. The best results (from my experience and experiments) come from prompt editing. Still, sometimes it generates quite decent images with just the trigger word, so give it at least a try.
Parameters:
I got the best results using the Euler a and Euler samplers, 20 or 50-70 steps, and a CFG scale of 7-15. (I provide additional non-cherry-picked sheets from my experiments with various parameters so you can see for yourself.)
You can add the negative prompt "character, human, person, people, male, female, man, woman" if you want to increase the chance of keeping characters out of your images.
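A hypothetical starting configuration that combines the tips above (the prompt is just an illustration; the values are the defaults suggested in this section):

Prompt: a hand-painted icon of a steel longsword, [rpgicondiff:4]
Negative prompt: character, human, person, people, male, female, man, woman
Sampler: Euler a | Steps: 20 | CFG scale: 7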
Workflow:
As a starting point, I recommend CFG 7, 20 steps, and a prompt without the trigger word. Find an object and a seed that you are satisfied with (one that reflects the object you want to generate), then reuse that seed with the edited trigger word, e.g. [rpgicondiff:4], and experiment with it.
Increasing the step count to 50-70 and the CFG scale to 10 might improve the detail in your generation (note that it can also modify the image).
I consider this method the most effective for this iteration of the model.
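A sketch of that workflow with a hypothetical object and seed (substitute your own):

Step 1 - no trigger word: "a hand-painted icon of a wooden shield", Euler a, 20 steps, CFG 7. Generate until a seed (say, 1234567) gives a shield you like.
Step 2 - reuse seed 1234567 with the edited trigger word: "a hand-painted icon of a wooden shield, [rpgicondiff:4]" to layer the style onto that composition.
Step 3 (optional) - keep the seed and raise the steps to 50-70 and the CFG scale to 10 for more detail, checking that the object does not drift.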
Everything is a matter of experimentation and of balancing the style against the things you want to generate.
I provide both .ckpt and .safetensors versions for download.
Here you can find a LoRA extracted from this model.