
Sexy Toons feat. Pipa

Verified: SafeTensor
Type: Checkpoint Merge
Stats: 4,752
Published: Apr 10, 2023
Base Model: SD 1.5
Hash (AutoV2): BC2D163E9E
Creator: alexds9


๐Ÿ‘— Clothes and Accessories by EDG โค๏ธ Support creator ๐Ÿซถ Discord
๐Ÿ‘€ Recommended: ๐Ÿ’› xVision ๐Ÿ—ฝHyper Realism 3 ๐Ÿ’ž Babes XL 3 ๐Ÿ’ Babes 3 ๐Ÿ’— Babes Classic XL โ™จ๏ธ Sassy Girls XL ๐ŸŒถ๏ธ Sassy Girls ๐Ÿงก Babes 2 ๐Ÿ‘‘ Game of Thrones โค๏ธโ€๐Ÿ”ฅ Divas โ„๏ธ Frozen Animation ๐Ÿ‘ Babes 1.1 ๐Ÿ’ Sexy Toons feat. Pipa ๐Ÿ’‹ Babes Kissable Lips ๐Ÿ‘ฉ๐Ÿฝโ€๐ŸŽค Noa Kirel ๐Ÿ‘ง Kimbra ๐Ÿ”ฎ Glass Orb ๐Ÿ’  Kaleidoscope ๐Ÿ‘—Holiday Outfit ๐Ÿ–ค Electra Heart
โ„น๏ธ This model's core idea was developed by Pipa by combining Galena Blend, JK, Helltaker, and QTori.

โ„น๏ธ Recipe:

  1. Started with Galena Blend v1.2.

  2. Added LoRAs with SuperMerger: JK 0.25, Helltaker 0.25, QTori 1.2.

  3. Added BeautyRaccoonMix v1 at 0.25.

  4. Added Mistoon_Anime at 0.20.

  5. Add Difference at 0.2: the LoRA-merged model from step 2, minus Galena Blend from step 1.

  6. Added Babes Kissable Lips at 0.15.

  7. Baked in the kl-f8-anime2 VAE.
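The recipe above uses two standard merge formulas: a weighted sum (steps 3, 4, 6) and Add Difference (step 5). A toy sketch of the arithmetic, with plain Python floats standing in for checkpoint tensors (the numeric values here are made up for illustration; a real merge runs the same formulas over every tensor, e.g. with SuperMerger or the WebUI checkpoint merger):

```python
def weighted_sum(a, b, alpha):
    """Weighted-sum merge: result = (1 - alpha) * a + alpha * b."""
    return {k: (1 - alpha) * a[k] + alpha * b[k] for k in a}

def add_difference(base, tuned, original, alpha):
    """Add Difference merge: result = base + alpha * (tuned - original)."""
    return {k: base[k] + alpha * (tuned[k] - original[k]) for k in base}

galena = {"w": 1.0}        # step 1: Galena Blend (toy value)
with_loras = {"w": 1.4}    # step 2: Galena + LoRAs (toy value)

# A weighted-sum step like steps 3/4/6, mixing in another model at 0.25.
current = weighted_sum(with_loras, {"w": 0.4}, 0.25)

# Step 5: add the LoRA delta (with_loras - galena) back in at strength 0.2.
merged = add_difference(current, with_loras, galena, 0.2)
print(round(merged["w"], 4))  # 0.75*1.4 + 0.25*0.4 + 0.2*(1.4-1.0) = 1.23
```

The same two formulas cover the whole recipe; only the operands and alphas change from step to step.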

๐Ÿ“Œ Are your results not 100% identical to a specific picture?

  1. Make sure to use hires-fix with an upscaler such as SwinIR_4x, 4x-UltraSharp, 4x-AnimeSharp, or RealESRGAN_x4plus_anime_6B (Upscaler Download); these are what I usually use for hires-fix.

  2. A VAE is included inside the model. For better colors, use the VAE vae-ft-mse-840000-ema-pruned: download it into the "stable-diffusion-webui/models/VAE" folder and select it in the settings.

  3. I use xformers - a small performance improvement that might change the results. It is not a must-have and can be hard to install. It can be enabled with the "--xformers" command argument when launching the WebUI.

  4. The WebUI is updated constantly, and some changes influence image generation. Often, technological progress is prioritized over backward compatibility.

  5. Hardware differences can also change results. I've heard of a group of people testing the same prompt with the same settings, and the results weren't identical.

  6. On my own system, I've seen that running a prompt as part of a batch can change the results a little.

  7. I suspect there are hidden variables inside modules we can't change that produce slightly different results due to internal state changes.

  8. Any change in image dimensions, steps, sampler, prompt, and many other settings can cause small or huge differences in results.
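One concrete reason behind points 5 and 6: floating-point addition is not associative, so different hardware kernels or batched execution can sum the same numbers in a different order and get a (very slightly) different result, even with identical seeds and settings. A minimal stdlib demonstration:

```python
# Floating-point addition is not associative: summing the same values in
# a different order (as different GPU kernels or batch sizes may do) can
# give slightly different numbers, which diffusion sampling then amplifies.
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c
right = a + (b + c)
print(left == right)        # False on IEEE-754 doubles
print(abs(left - right))    # a tiny but nonzero difference
```

A tiny numeric difference in an early denoising step can snowball into visibly different details by the final image.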

๐Ÿ“Œ Do you really want to get the exact result from the image? There are a few things that you can do, and possibly get even better results.

  1. Make single-word changes to the prompt/negative prompt, test, and push it slowly in your desired direction.

  2. If the image has too much or not enough of something, try to use emphasis. For example, too glossy? Use "(glossy:0.8)", or lower, or remove it from the prompt, or add it to the negative. Want more? Use values 1.1-1.4, then add descriptors in the same direction.

  3. Use variations: keep the same seed, and check "Extra" to the right of the seed. Set "Variation strength" to a low value like 0.05, generate a few images, and watch how big the changes are. Increase it if you want more change, and reduce it if you want less. This way you can generate a huge number of images that are very similar to the original, some of which will be even better.
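Under the hood, the WebUI's variation feature blends the noise from your main seed with noise from a second seed by spherical interpolation, with "Variation strength" as the blend factor. A rough stdlib sketch of the idea (the seeds and vector size here are arbitrary toy values, and real latents are large tensors, not short lists):

```python
import math
import random

def slerp(t, v0, v1):
    """Spherical interpolation between two noise vectors:
    t=0 returns v0, t=1 returns v1, small t stays close to v0."""
    dot = sum(a * b for a, b in zip(v0, v1))
    norm0 = math.sqrt(sum(a * a for a in v0))
    norm1 = math.sqrt(sum(b * b for b in v1))
    omega = math.acos(max(-1.0, min(1.0, dot / (norm0 * norm1))))
    so = math.sin(omega)
    return [(math.sin((1 - t) * omega) / so) * a +
            (math.sin(t * omega) / so) * b
            for a, b in zip(v0, v1)]

rng = random.Random(1234)                      # main seed (toy)
base_noise = [rng.gauss(0, 1) for _ in range(8)]
rng2 = random.Random(5678)                     # variation seed (toy)
var_noise = [rng2.gauss(0, 1) for _ in range(8)]

# Variation strength 0.05: the mixed noise stays very close to the original,
# so the generated image stays very close to the original image.
mixed = slerp(0.05, base_noise, var_noise)
```

This is why strength 0.05 gives near-duplicates while larger values drift further from the source image.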

๐Ÿ“Œ Recommendations to improve your results:

  1. A VAE is included inside the model; use a VAE for better colors and details. You can use the one that comes with the model, or download "vae-ft-mse-840000-ema-pruned" (ckpt or safetensors) from https://huggingface.co/stabilityai/sd-vae-ft-mse-original/tree/main into the "stable-diffusion-webui/models/VAE" folder. In the settings, find "SD VAE", refresh it, select "vae-ft-mse-840000-ema-pruned" (or the version included with the model), and click the "Apply settings" button at the top. The VAE that comes with the model is "vae-ft-mse-840000-ema-pruned", so you don't need both; the one you downloaded will also work very well with most other models.

  2. Use hires-fix with SwinIR_4x, 4x-UltraSharp, 4x-AnimeSharp, or RealESRGAN_x4plus_anime_6B (Upscaler Download): first pass around 512x512, second pass above 960x960, keeping the aspect ratio between the two passes the same if possible.

  3. Use negatives, but not too many. Add them when you see something you don't like.

  4. Use CFG 7.5 or lower; with heavy prompts that are long and use many emphases, you can go as low as 3.5. Generally, try to minimize the use of emphasis; you can simply put the more important things at the beginning of the prompt. If everything is important, don't use emphasis at all.

  5. Make changes cautiously: changes made at the beginning of the prompt have more influence, so every concept there can shift your results drastically.

  6. Read and use the manual.

  7. Learn from others, copy prompts from images that look good, and play with them.

  8. DPM++ 2M Karras is the sampler of choice for many people, including me. 40 steps are plenty, and I usually use 20.

  9. Discord server for help, sharing, show-offs, experiments, and challenges.
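The "(glossy:0.8)" emphasis syntax used above comes from the WebUI's prompt attention feature: a parenthesized token with a number scales that token's weight in the prompt. A simplified illustration of what the syntax means (a hypothetical toy parser; the real WebUI parser also handles nesting, bare parentheses, and square brackets):

```python
import re

# Extract explicit "(token:weight)" pairs from an A1111-style prompt.
# This toy regex only handles the simple flat form shown in the tips above.
EMPHASIS = re.compile(r"\(([^():]+):([\d.]+)\)")

def parse_emphasis(prompt):
    """Map each emphasized token to its weight; 1.0 is the neutral default."""
    return {token.strip(): float(weight)
            for token, weight in EMPHASIS.findall(prompt)}

print(parse_emphasis("masterpiece, (glossy:0.8), (detailed eyes:1.2)"))
# {'glossy': 0.8, 'detailed eyes': 1.2}
```

Weights below 1.0 de-emphasize a concept and weights of 1.1-1.4 strengthen it, which is exactly the range the tips above recommend.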