
Underpainting Method | Stable Diffusion 1.5

About

This is a short write-up on a method for generating images that are a little more interesting, dramatic and dynamic than your average Txt2Img results.

One thing needs to be said up front: the results are a lot less predictable and a lot more abstract. Generally, this is worth trying if you just like producing pretty pictures, rather than needing an exact render of what's in your mind. It's just fun.

I'm using Stable Diffusion 1.5. It might also work with Stable Diffusion XL - I wouldn't know, I've never tried it.

Step 1

Grab the dustiest, oldest Stable Diffusion model in your arsenal! I personally use Stable Diffusion 1.1. We need it to have less coherence and more randomness.

Step 2

Now generate some wild and kooky images for our 'underpainting'. Use the most random and abstract keywords you know, while keeping in mind the end result you're after. The actual contents of the images absolutely don't matter; they can be ugly, anatomically inaccurate and colorful - anything goes. Making the prompt lengthy also helps: when Stable Diffusion cannot combine all the keywords in one image, it will pick them more or less randomly. So, to sum it up:

  • Abstract keywords

  • A lot of keywords

  • The result doesn't matter, but it should vaguely represent what you intend to create.

Here's my example prompt (I'm going for a handsome, water-magic-themed guy):

elegant fit shirtless man mage casting water explosion spell, magic, water butterflies, spiral composition, circular composition, water explosion, splashes, flowers made of water, circle, hyperrealistic art, fantasy concept art, fantasy art, abstract, surreal, extravagant, vivid, vibrant, detailed art, flowing water, water magic, bubbles, drops, splashes, waves, colorful, dramatic, dynamic, character concept art

Here are some of the images this produces:

(hideous, I know)
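
If you prefer scripting this step over a web UI, here is a minimal sketch using the diffusers library. The checkpoint id, output file names and batch size are my own assumptions rather than part of the original workflow; the low CFG of 4 comes from the note at the end of this article.

```python
# Rough Step 2 sketch with diffusers (assumption: the write-up itself used a web UI).
import torch
from diffusers import StableDiffusionPipeline

# An intentionally old, "dusty" checkpoint; swap in whatever you actually have.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-1",
    torch_dtype=torch.float16,
).to("cuda")

# Trimmed version of the Step 2 prompt; paste the full thing here.
prompt = (
    "elegant fit shirtless man mage casting water explosion spell, magic, "
    "water butterflies, spiral composition, circular composition, water explosion, "
    "splashes, flowers made of water, circle, hyperrealistic art, fantasy concept art"
)

# Low CFG (4, per the edit note below) keeps the outputs loose and abstract.
for i in range(4):
    image = pipe(prompt, guidance_scale=4).images[0]
    image.save(f"underpainting_{i}.png")
```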

Step 3

Generate the final image:

  • Grab your best, most consistent and coherent model (I used Airfuck's Wild Mix).

  • Go to Img2Img.

  • Input one of the 'underpainting' images.

  • Leave the Prompt blank.

  • Use the best Negative Prompt you've got.

  • Pick your favorite sampler and set denoising strength to a lowish number (I use DPM++ 2M Karras, 30 Steps and Denoising at 0.4).

  • And let Stable Diffusion make sense of the abstract nonsense image!

  • ...

  • Profit?

Surprisingly enough, the results end up very interesting and almost meaningful.
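
If you'd rather script Step 3 as well, here's a rough diffusers sketch under the same caveats: the checkpoint path and negative prompt are placeholders (the article used Airfuck's Wild Mix in a web UI), and DPMSolverMultistepScheduler with Karras sigmas stands in for the DPM++ 2M Karras sampler.

```python
# Rough Step 3 sketch: img2img over an 'underpainting', blank prompt, low denoising.
import torch
from diffusers import StableDiffusionImg2ImgPipeline, DPMSolverMultistepScheduler
from PIL import Image

# Hypothetical path; load your best, most coherent SD 1.5 checkpoint here.
pipe = StableDiffusionImg2ImgPipeline.from_single_file(
    "path/to/your-best-sd15-checkpoint.safetensors",
    torch_dtype=torch.float16,
).to("cuda")

# Diffusers' equivalent of DPM++ 2M Karras.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

init = Image.open("underpainting_0.png").convert("RGB")

result = pipe(
    prompt="",                              # leave the prompt blank
    negative_prompt="lowres, bad anatomy",  # placeholder; use your best one
    image=init,
    strength=0.4,                           # lowish denoising strength
    guidance_scale=7,                       # default CFG, per the edit note
    num_inference_steps=30,
).images[0]
result.save("final.png")
```

Note that diffusers, like most web UIs, only runs roughly strength × steps of actual denoising in img2img, so 30 steps at 0.4 strength works out to about 12 real steps.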

In short, what you're doing is creating unique starting noise for Stable Diffusion to work with. Instead of the most averaged-out, boring images that Stable Diffusion normally goes for, you get cool-looking, artsy images with dynamic poses. Even human faces are a lot less likely to be something everyone and their grandma recognizes as AI by now.

You can work on these further by running denoising multiple times, or by adding keywords to the positive prompt to lead Stable Diffusion towards the results you desire.
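
Here is a rough sketch of that follow-up pass, using the same img2img setup as the previous snippet; the steering keywords and file names are just placeholders.

```python
# Feed the result back through img2img a couple of times, optionally steering
# it with positive keywords. Same pipeline setup as the Step 3 sketch.
import torch
from diffusers import StableDiffusionImg2ImgPipeline, DPMSolverMultistepScheduler
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_single_file(
    "path/to/your-best-sd15-checkpoint.safetensors",  # hypothetical path
    torch_dtype=torch.float16,
).to("cuda")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

image = Image.open("final.png").convert("RGB")
for _ in range(2):  # run denoising a couple more times
    image = pipe(
        prompt="water magic, dramatic lighting",  # optional steering keywords
        image=image,
        strength=0.4,
        guidance_scale=7,
        num_inference_steps=30,
    ).images[0]
image.save("final_refined.png")
```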

Good luck and have fun!

Edit: This was possible to look up from the image file info, but I thought I'd clarify it in words:

  • The 'underpainting' images' CFG value was set to 4

  • The final image's CFG value is the default 7
