Babes Kissable Lips

Type: Checkpoint Trained
Reviews: 723
Uploaded: Apr 1, 2024
Base Model: SD 1.5
Hash (AutoV2): 8909556BDA
Verified: SafeTensor

👗 Clothes and Accessories by EDG ❤️ Support creator 🫶 Discord
👀 Recommended: 💙 iVision 💜 xVision XL 💛 xVision 🗽 Hyper Realism 3 💖 Babes 3.1 | 💝 3 | 🧡 2 | 🍑 1.1 | 💞 XL 3 | 💗 Classic ♨️ Sassy Girls XL 🌶️ Sassy Girls 👑 Game of Thrones ❤️‍🔥 Divas ❄️ Frozen Animation 🍒 Sexy Toons feat. Pipa 💋 Babes Kissable Lips 👩🏽‍🎤 Noa Kirel 👧 Kimbra 🔮 Glass Orb 💠 Kaleidoscope 👗 Holiday Outfit 🖤 Electra Heart 📸 Dynamic Shots
💋 If you like Big Lips and you can't lie, you need to check out this new model!
โ„น๏ธ The Babes Kissable Lips 2 - Pretty much everything has been changed.
โ„น๏ธ The Babes Kissable Lips 2 can make truly wonderful images.
โ„น๏ธ The Babes Kissable Lips 2 doesn't require particular trigger words, but some trigger words from my other models might work.
โ„น๏ธ The Babes Kissable Lips 1 model is based on a brand new training, that is mixed with Babes 1.1 recipe, also it has been inspired a little bit by RPG v4.
โ„น๏ธ The core of Babes Kissable Lips 1 model is different from Babes 1.1 and Exp 7/8, so it has its unique style with a preference for Big Lips(and who knows what else, you tell me).
โ„น๏ธ This model also seems to process the prompt a little bit differently than previous versions, more attentive and influenced by the prompt, so if you have skeletons in your prompt you might find them.
โ„น๏ธ This model does not produce unwanted noise/particles, that were related to freckles, so it's an improvement.
โ„น๏ธ It should be more NSFW friendly.
โ„น๏ธ It contains VAE backed in.
โ„น๏ธ Trigger words, not guaranteed, experiment yourself and report: "samdoesart style", "thepit style", "orwel style", "cherrymouse style", and other trigger words from Babes 1.1.

📌 Are your results not 100% identical to any specific picture?

  1. Make sure to use hires-fix with an upscaler, for example SwinIR_4x / 4x-UltraSharp / 4x-AnimeSharp / RealESRGAN_x4plus_anime_6B (Upscaler Download); that's what I usually use for hires-fix.

  2. Use the VAE vae-ft-mse-840000-ema-pruned for better colors. Download it into the "stable-diffusion-webui/models/VAE" folder and select it in the settings.

  3. I use xformers: a small performance improvement that might change the results. It isn't required and can be tricky to install. It can be enabled with the command-line argument "--xformers" when launching the WebUI.

  4. The WebUI is updated constantly with changes that influence image generation. Technological progress is often prioritized over backward compatibility.

  5. Hardware differences can also change results. I've heard of a group of people who tested the same prompt with the same settings, and the results weren't identical.

  6. I have seen on my own system that running a generation as part of a batch may change the results a little.

  7. I suspect there are hidden variables inside modules we can't change that produce slightly different results due to internal state changes.

  8. Any change in image dimensions, steps, sampler, prompt, and many other things can cause small or huge differences in results (a pinned-settings sketch follows this list).
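As a concrete illustration of items 3, 6, and 8, here is a hedged diffusers sketch that pins every variable you can pin: seed, size, steps, and CFG. It assumes the `pipe` object from the loading sketch above; even with all of this fixed, the hardware and library effects described in items 4, 5, and 7 can still shift results slightly.

```python
# Sketch: pin everything controllable, so the remaining drift comes only
# from hardware/library differences. Assumes `pipe` from the sketch above.
import torch

generator = torch.Generator(device="cuda").manual_seed(1234)  # fixed seed

image = pipe(
    "portrait of a woman, detailed lips",
    negative_prompt="blurry",
    width=512,
    height=512,                # any dimension change alters the result
    num_inference_steps=20,    # so does a change in step count
    guidance_scale=7.5,        # or CFG
    generator=generator,
).images[0]

# Optional speedup; as noted in item 3, it may slightly change outputs:
# pipe.enable_xformers_memory_efficient_attention()
```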

📌 Do you really want to reproduce the exact result from an image? There are a few things you can do, and you may even get better results.

  1. Make single-word changes to the prompt/negative prompt, test, and slowly push it in your desired direction.

  2. If the image has too much of something, or not enough, try emphasis. For example, too glossy? Use "(glossy:0.8)" or less, or remove it from the prompt, or add it to the negative. Want more? Use values 1.1-1.4, then add more descriptors in the same direction.

  3. Use variations: keep the same seed, and check "Extra" to the right of the seed. Set "Variation strength" to a low value like 0.05, generate a few images, and watch how big the changes are. Increase it for more change, reduce it for less. That way you can generate a huge number of images that are very similar to the original, and some of them will be even better (a sketch of this trick follows the list).
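Outside the WebUI, the "Variation strength" trick can be approximated by spherically interpolating between the initial noise of the base seed and the noise of a variation seed; my understanding is that the WebUI does something similar internally, but treat that as an assumption and tune to taste. This sketch reuses the `pipe` from the loading example above.

```python
# Sketch: approximate the WebUI's "Variation strength" by slerping between
# two seeds' initial noise tensors. Assumes `pipe` from the sketch above.
import torch

def slerp(t, a, b):
    """Spherical interpolation between two equally shaped noise tensors."""
    omega = torch.acos(((a / a.norm()) * (b / b.norm())).sum().clamp(-1, 1))
    return (torch.sin((1 - t) * omega) * a + torch.sin(t * omega) * b) / torch.sin(omega)

shape = (1, 4, 64, 64)  # latent shape of a 512x512 SD 1.5 image
base = torch.randn(shape, generator=torch.Generator().manual_seed(1234))
vari = torch.randn(shape, generator=torch.Generator().manual_seed(5678))

latents = slerp(0.05, base, vari).to("cuda", torch.float16)  # strength 0.05
image = pipe(
    "portrait of a woman",
    latents=latents,  # hand the interpolated noise to the pipeline
    num_inference_steps=20,
    guidance_scale=7.5,
).images[0]
```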

📌 Recommendations to improve your results:

  1. Use a VAE for better colors and details. You can use the VAE that comes with the model, or download the "vae-ft-mse-840000-ema-pruned" ckpt or safetensors file from https://huggingface.co/stabilityai/sd-vae-ft-mse-original/tree/main into the "stable-diffusion-webui/models/VAE" folder. In the settings, find "SD VAE", refresh it, select "vae-ft-mse-840000-ema-pruned" (or the version included with the model), and click the "Apply settings" button at the top. The VAE that comes with the model is "vae-ft-mse-840000-ema-pruned", so you don't need both; the one you downloaded will work very well with most other models too (see the VAE sketch after this list).

  2. Use hires-fix with SwinIR_4x / 4x-UltraSharp / 4x-AnimeSharp / RealESRGAN_x4plus_anime_6B (Upscaler Download): first pass around 512x512, second above 960x960, keeping the aspect ratio of the two passes the same if possible (see the two-pass sketch after this list).

  3. Use negatives, but not too many. Add them when you see something you don't like.

  4. Use CFG 7.5 or lower. With heavy prompts that are long and use a lot of emphasis, you can go as low as 3.5. Generally, try to minimize the use of emphasis; you can simply put the more important things at the beginning of the prompt. If everything is important, don't use emphasis at all.

  5. Make changes cautiously: changes made at the beginning of the prompt have more influence, so every added concept can shift your results drastically.

  6. Read and use the manual (https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features).

  7. Learn from others, copy prompts from images that look good, and play with them.

  8. DPM++ 2M Karras is the sampler of choice for many people, including me. 40 steps is plenty, and I usually use 20 (the sampler sketch after this list shows the diffusers equivalent).

  9. Join the Discord server for help, sharing, show-offs, experiments, and challenges.
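To make recommendations 1, 4, and 8 concrete outside the WebUI, here is a hedged diffusers sketch that loads the external VAE and switches to diffusers' equivalent of the DPM++ 2M Karras sampler. "stabilityai/sd-vae-ft-mse" is the diffusers-format repo of the VAE linked in item 1, and the sketch reuses the `pipe` from the loading example near the top of this page.

```python
# Sketch: external VAE + DPM++ 2M Karras + moderate CFG (items 1, 4, 8).
# Assumes `pipe` from the loading sketch near the top of this page.
import torch
from diffusers import AutoencoderKL, DPMSolverMultistepScheduler

# diffusers-format repo of vae-ft-mse-840000-ema-pruned:
pipe.vae = AutoencoderKL.from_pretrained(
    "stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16
).to("cuda")

# diffusers' equivalent of the WebUI's "DPM++ 2M Karras" sampler:
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

image = pipe(
    "portrait of a woman, detailed face",
    num_inference_steps=20,  # 20 steps, per item 8
    guidance_scale=7.5,      # drop toward 3.5 for long, heavy prompts
).images[0]
```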
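And here is a rough equivalent of the two-pass hires-fix from item 2. The WebUI runs the second pass through a dedicated upscaler model (SwinIR_4x and friends); plain PIL resizing below is a simplification to keep the sketch short, so expect somewhat softer results than the real thing.

```python
# Sketch: two-pass "hires fix": generate small, upscale, denoise again.
# Assumes `pipe` from the loading sketch near the top of this page.
import torch
from diffusers import StableDiffusionImg2ImgPipeline

g = torch.Generator(device="cuda").manual_seed(1234)
low = pipe(
    "portrait of a woman",
    width=512, height=512,  # first pass around 512x512, per item 2
    num_inference_steps=20,
    guidance_scale=7.5,
    generator=g,
).images[0]

# Second pass reuses the same weights through an img2img pipeline.
img2img = StableDiffusionImg2ImgPipeline(**pipe.components)
hires = img2img(
    "portrait of a woman",
    image=low.resize((1024, 1024)),  # same 1:1 ratio as the first pass
    strength=0.5,                    # akin to the WebUI's denoising strength
    num_inference_steps=20,
    guidance_scale=7.5,
).images[0]
hires.save("hires.png")
```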