A lot of people have been asking me how I generate my images, so I decided to put together this guide. Not only will it help those of you who are curious, but it also saves me from having to repeat myself! So, here’s how I do it:
ReForge
First off, I use ReForge to make my images. ReForge is a fork of Forge, which itself is a fork of Automatic1111. It's essentially a better version of Automatic1111 because it generates images much faster, especially on low-end hardware. If you're currently using Automatic1111, I highly recommend switching to ReForge—it's totally worth it. Even if you have a high-end rig or graphics card, ReForge offers noticeable improvements.
Why does the program matter? Because ReForge has MultiDiffusion, which is key to how I generate my images.
Resolution
I usually generate images at a resolution of 832x1216. While there are plenty of other resolutions you can use, I find this one works best for me. Some other good options include 640x1600 and 1024x1024. You can also swap the width and height if you prefer horizontal images. All of these land near the same total pixel count (roughly one megapixel, which is what SDXL-based models are trained around), so quality should stay about the same between them.
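If you ever drive the WebUI through its API instead of the browser, resolution is just two fields in the payload. Here's a minimal sketch, assuming ReForge exposes the same /sdapi/v1 endpoints as Automatic1111 (you'd launch the WebUI with the --api flag); the prompt, step count, and output filename are just placeholders:

```python
# Minimal txt2img call, assuming an Automatic1111-style /sdapi/v1 API.
import base64
import requests

payload = {
    "prompt": "score_9, score_7_up, 1girl, solo",
    "width": 832,    # swap width/height for horizontal images
    "height": 1216,
    "steps": 25,     # placeholder; use whatever you normally run
    "seed": -1,      # -1 = random seed
}
resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
resp.raise_for_status()

# The API returns generated images as base64 strings.
with open("out.png", "wb") as f:
    f.write(base64.b64decode(resp.json()["images"][0]))
```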
Basic Prompting
My prompts are pretty straightforward—I’m not the most creative when it comes to these. I’ve found that keeping the prompts simple makes it easier to tweak things if I’m not happy with the results. There's really no need to overload them with quality tags, as they don't make a big difference in the outcome. I also use very few negative prompts, especially with certain subjects, because they can reduce the quality of the images.
Here's how I start my prompts:
score_9, score_7_up, <MODEL>, 1girl, solo, CHARACTERTAG, presenting
I stick to "score_9" and "score_7_up" because adding other score tags can degrade the output quality of the model I'm using. For instance, adding "score_8_up" might give you better-quality images, but it can make the style harder to replicate, especially with style-specific models.
You might notice I sometimes use something like (CHARACTERTAG:0.8) in my prompts. This depends on the character and the style model I'm using. For example, a tag like helen parr usually works well but tends to lean towards a specific style; I assume the training data for that tag included a lot of 3D images, or a lot of images from one particular artist. By adjusting the tag strength, I can better match my desired style. When a character tag includes the name of the series it's from, for example widowmaker \(overwatch\), I tend to use a lower value like 0.6. It all depends on how flexible the character tag is, though.
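Putting those pieces together, a weighted, series-style character tag slots into the base prompt like this (0.6 here is just an illustration; tune the value per character):

score_9, score_7_up, <MODEL>, 1girl, solo, (widowmaker \(overwatch\):0.6), presenting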
A quick heads-up: if you lower the strength too much, you might end up with incomplete images or weird distortions. Some models handle low tag strengths worse than others (SD 1.5-based ones, for example). If you run into this issue, just raise the strength a bit.
As for why I use "presenting" in my prompts, I simply like the results better. It tends to create more dynamic and interesting poses than what you'd get without it.
ADetailer
Next up is ADetailer. This tool is fantastic for adding detail to specific areas, like faces. There are various versions available on Civitai, but I prefer using this one for faces: Adetailer Face finder
ADetailer acts like a mini inpainter, adding detail without you having to do all the nitty-gritty work yourself. Based on the model you've selected, it detects the target area and inpaints it to enhance quality. I usually stick to the basic settings, as they work well for my needs, but feel free to experiment with them if you want.
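For API users, ADetailer hooks in through the alwayson_scripts field of a normal generation request. Here's a rough sketch, assuming the extension is installed; the exact "args" layout changes between ADetailer versions (check the extension's README), and "face_yolov8n.pt" is just the stock face detector, so substitute whichever face-finder model you actually downloaded:

```python
# Sketch of enabling ADetailer via the API. Treat the args layout as an
# approximation; it varies across ADetailer versions.
import requests

payload = {
    "prompt": "score_9, score_7_up, 1girl, solo",
    "width": 832,
    "height": 1216,
    "alwayson_scripts": {
        "ADetailer": {
            "args": [
                # One dict per detection pass; ad_model picks the detector.
                {"ad_model": "face_yolov8n.pt"}
            ]
        }
    },
}
requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
```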
Upscaling
For upscaling, I use an upscaler called RealSR DF2K. While other upscalers work fine, this one is my personal favorite.
My approach to upscaling is a bit different from the usual method of just using the hires fix and upscaling by 2. That’s a valid method, but I’ve found a better way to get even higher quality. This method works best with ReForge (and also with Forge, though I still recommend using ReForge).
My method is based on a guide I found about upscaling, though I do a few things differently.
Here’s what I do:
1. Generate the image you want to upscale, then click the "Reuse Seed" button (the recycle button).
2. Below the generated image, click the "Send to Image Generation" button (the painting button) to keep the original metadata.
3. Go back to the txt2img tab and click "Send to Extras" (the set square ruler/45-degree triangle button).
4. In the Extras tab, set your image to upscale. I typically upscale by 2, but you might need to go lower depending on your setup. Set Resize = 2 and choose your upscaler (I use DF2K). Hit Generate and wait a bit.
5. Once done, hit the "Send to Image Generation" button (the painting button) under the upscaled image.
6. Go to the img2img tab, scroll to Denoising Strength, and set it to 0.2. If the upscaling seems too harsh, you can lower it even more.
7. Scroll down to MultiDiffusion Integrated, enable it, and select Mixture of Diffusers. Set the tile width and height to match your original image resolution (832x1216 for me). I like a Tile Overlap of 64 and a Tile Batch Size of 4, but these can be adjusted.
8. After making sure everything is correct, including the prompt and seed, hit Generate, and you should have a beautifully upscaled image.
Here is a video for anyone who's still confused.
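For anyone who'd rather script this than click through the UI, here's a rough sketch of the same pipeline over the API. It assumes a ReForge/Automatic1111-style server on 127.0.0.1:7860 started with --api; the prompt, the upscaler name, and the MultiDiffusion details are assumptions you'll need to check against your own install:

```python
# txt2img -> Extras upscale -> low-denoise img2img, mirroring the UI steps.
import base64
import json
import requests

BASE = "http://127.0.0.1:7860"
PROMPT = "score_9, score_7_up, 1girl, solo, presenting"

# Step 1: generate the base image. The response's "info" field is a JSON
# string carrying the seed we need to reuse later.
gen = requests.post(f"{BASE}/sdapi/v1/txt2img", json={
    "prompt": PROMPT,
    "width": 832,
    "height": 1216,
    "seed": -1,  # random seed
}).json()
image_b64 = gen["images"][0]
seed = json.loads(gen["info"])["seed"]

# Step 2: upscale 2x in Extras. The upscaler name must match your Extras
# dropdown exactly; "RealSR DF2K" is an assumption based on my setup.
up = requests.post(f"{BASE}/sdapi/v1/extra-single-image", json={
    "image": image_b64,
    "upscaling_resize": 2,
    "upscaler_1": "RealSR DF2K",
}).json()

# Step 3: img2img over the upscaled image at denoising strength 0.2, with
# the original prompt and seed. Width/height are the upscaled dimensions so
# the result isn't shrunk back down. MultiDiffusion Integrated is a script,
# so it would go under "alwayson_scripts"; I don't know the exact name/args
# it registers in ReForge, so list them via GET /sdapi/v1/scripts or simply
# do this last step in the UI.
final = requests.post(f"{BASE}/sdapi/v1/img2img", json={
    "init_images": [up["image"]],
    "prompt": PROMPT,
    "seed": seed,
    "denoising_strength": 0.2,
    "width": 832 * 2,
    "height": 1216 * 2,
}).json()

with open("upscaled.png", "wb") as f:
    f.write(base64.b64decode(final["images"][0]))
```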
InPaint
Lastly, I sometimes use inpainting to remove watermarks or other unwanted elements. After generating your original image (not the upscaled one), click the "Send to Inpaint" button (the painter's palette button). Draw over whatever you want to remove and set the denoising strength to somewhere between 0.8 and 1. I use 1 for watermarks, but if the mark sits over an object you want to keep, a lower value works better. Once the inpainting is done, send the result to upscale and follow the steps mentioned above.
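This step can be scripted too. Below is a sketch using the standard Automatic1111-style img2img inpaint fields (which ReForge inherits, as far as I know); "original.png" and "mask.png" are placeholder filenames, and the mask is a black image with the removal area painted white:

```python
# Sketch of watermark removal via API inpainting.
import base64
import requests

def b64(path):
    # Read a file and return it as a base64 string for the API.
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

payload = {
    "init_images": [b64("original.png")],  # the pre-upscale image
    "mask": b64("mask.png"),               # white = area to repaint
    "prompt": "score_9, score_7_up, 1girl, solo",
    "denoising_strength": 1.0,  # 1 for watermarks; lower if the mask
                                # overlaps something you want to keep
    "mask_blur": 4,
    "inpainting_fill": 1,       # 1 = start from the original content
}
resp = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
resp.raise_for_status()
```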
And that’s pretty much my process! If you have any suggestions or need more help, feel free to leave a comment.