VERSION 2.0 OUT NOW!!!
This started out as a simple idea: going back to the drawing board, re-creating DNW from scratch, and seeing what would come from it, with the initial intention of releasing it as a 2.5 or 3rd version of DNW.
What came out of my endeavour was so far ahead, and so far removed from any of the original models that I decided it deserved to become its own separate model!
Through the process of constantly learning, my work has been evolving with it as well. This model is the culmination of all my experience with creating models.
ALL THE MAIN EXAMPLES FOLLOW MY INSTRUCTIONS BELOW 1:1. YOU SHOULD BE ABLE TO REPLICATE THEM EASILY!
(The famous people seen in the examples were previously trained into the models and are not done with LORAs. Little to no inpainting was done in the examples, and only on the eyes.)
THERE IS NOW AN INPAINT MODEL AS WELL TO FIT ALL YOUR NEEDS! ENJOY!
This model can do pretty much any style fantasy art at a very high level. The main concepts:
✔️ concept art
✔️ oil painting
✔️ photorealistic / realistic
✔️ analog style
✔️ fantasy art
(all of the above list can & should be used as triggers and weights)
It knows most D&D races and it will produce some interesting results when prompted correctly.
It’s hard to nail this model down with just a simple description. But I can tell you that I’ve tested it for some time now and it produces results equal to some of the absolute best models on this site.
I will add more information in the coming days. For now… Create, Explore, & Enjoy the Adventure 💗
There is no embedded VAE but it was designed to be used with this:
I recommend using the following negatives:
EasyNegative or EasyNegativeV2 (The all around catch all. Always good in a bind)
badhandv4_anime (absolutely your go to neg for anime hands)
bad-hands-5 (good for the more realistic images but will not work amazingly for anime)
bad-artist \ bad-artist-anime (I’ve had good and bad luck)
bad_prompt \ bad_prompt_version2 (works very well sometimes)
(I typically use (bad-hands-5) & (badhandv4_anime) at the same time to cover more bases)
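As a quick sketch: in A1111, negative embeddings are triggered simply by including their filenames in the negative prompt, and parentheses add emphasis like in any other prompt. A combined negative along the lines of what I use could be assembled like this (the exact mix is up to you):

```python
# Negative embeddings are invoked by filename in the negative prompt.
# Parentheses add emphasis, same as in a regular A1111 prompt.
embeddings = ["bad-hands-5", "badhandv4_anime"]
negative_prompt = ", ".join(f"({name})" for name in embeddings)
print(negative_prompt)  # (bad-hands-5), (badhandv4_anime)
```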
Note: There are full & pruned versions of this model available. This is a complex merge, so even the pruned version is still large.
(Here are some example settings for you to start with, but please keep in mind YMMV!)
Steps: 30 = Sweet Spot / 15-150
Sampler: DPM++ 2M Karras, DPM++ SDE Karras, Euler A, DDIM
CFG scale: 7-9.5 = Sweet Spot / 4.5-15
Size: 512x768, 768x512, 640x840, 840x640, 600x896, 896x600, (any size)
Clip skip: 1
Hires Fix: Enabled
Upscale by: 1.5-2
Hires Steps: Equal to half or the exact number of Steps (or higher)
Upscaler: Remacri Upscaler, Lollypop Upscaler, 4x_NMKD_DigArt_10k
( I use DigArt 10k almost exclusively with everything except for illustration & animation )
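If you drive A1111 through its built-in API instead of the UI, the example settings above translate into a txt2img request roughly like this. This is a sketch against the stock `/sdapi/v1/txt2img` endpoint; the prompt is a made-up example, and the values are just my suggested sweet spots:

```python
# Sketch of an /sdapi/v1/txt2img payload using the sweet-spot settings above.
# Send it with e.g. requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
payload = {
    "prompt": "fantasy art, concept art, a dwarven warrior",   # example prompt
    "negative_prompt": "EasyNegative, (bad-hands-5), (badhandv4_anime)",
    "steps": 30,                      # sweet spot (15-150 works)
    "sampler_name": "DPM++ 2M Karras",
    "cfg_scale": 7.5,                 # sweet spot is 7-9.5
    "width": 512,
    "height": 768,
    "seed": -1,                       # random seed
    "enable_hr": True,                # always use Hires Fix
    "hr_scale": 1.5,                  # upscale by 1.5-2
    "hr_upscaler": "4x_NMKD_DigArt_10k",
    "hr_second_pass_steps": 15,       # half of Steps
    "denoising_strength": 0.5,
    "override_settings": {"CLIP_stop_at_last_layers": 1},  # Clip skip: 1
}
```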
Copy generation data from the image and paste it into your respective prompts in A1111.
Copy ALL the information; this includes the resolution, hires fix, denoise, etc. Otherwise your end results may be different.
Make sure you download (or already have) any of the negative embeddings mentioned in the previous section.
Once you download them, put them into their respective folders, then refresh and/or restart A1111.
Now you should be good to go... Just hit generate, that simple!
Now.. onto how to fix eyes, faces & hands!
Blurred eyes or warped faces are not uncommon in SD and sometimes require an inpaint to fix. Most of my examples are not inpainted, but a few are.
First off when you run txt2img ALWAYS use HIRES FIX!
latent (nearest or exact) upscale / denoise 0.4-0.7 (Complete variable - depends on the type and complexity of the image. YMMV!)
Upscale by 1.5-2 but no more than that.
This will get you a pretty good base image to work with.
Afterwards… you can always send your txt2img picture to inpaint (on the bottom right, under the result area) > select the face or eyes > lower the denoise strength to 0.5-0.57 > make sure your seed is set to random.
masked content = fill / inpaint area = whole picture ====== for faces
masked content = fill / inpaint area = only masked ====== for eyes
With “whole picture” selected for a larger inpaint target, aka faces (or hands), you may need the denoise to be lower (0.53-0.56) to pick up more of the original image.
Now with “only masked” and a smaller target like eyes you get a little more freedom, so your denoise can be 0.54-0.57.
Run a batch of 4-5 (or more) for the faces… go through and pick the best one. If you’re lucky you got the eyes and face in one go; if not, choose the best face, send it back to inpaint again, select just the eyes, and rerun a new batch (changing it to only masked, with the settings described above for eyes).
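For reference, the two inpaint setups above map onto the API like this. A hedged sketch against `/sdapi/v1/img2img`: in the web UI's API, `inpainting_fill=0` corresponds to masked content "fill", and `inpaint_full_res` toggles "only masked" (True) vs "whole picture" (False); the base64 image placeholders are stand-ins for your actual picture and mask:

```python
# Sketch of the faces vs. eyes inpaint configurations as /sdapi/v1/img2img payloads.
# init_images / mask would be base64-encoded PNGs of your picture and your painted mask.
common = {
    "init_images": ["<base64 image>"],
    "mask": "<base64 mask>",
    "seed": -1,               # random seed, as recommended above
    "inpainting_fill": 0,     # masked content = fill
    "batch_size": 4,          # run a batch and pick the best result
}

faces = dict(common,
             inpaint_full_res=False,     # inpaint area = whole picture
             denoising_strength=0.55)    # 0.53-0.56 for faces (or hands)

eyes = dict(common,
            inpaint_full_res=True,       # inpaint area = only masked
            denoising_strength=0.56)     # 0.54-0.57 for eyes
```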
Finally when you get it all together send to extras:
Resize to 1.5-2
(Put these in your \A1111-Web-UI-Autoinstaller\stable-diffusion-webui\models\ESRGAN folder)
Upscaler 2 is what will get you the result you want.
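The extras step can likewise be scripted. A sketch against the `/sdapi/v1/extra-single-image` endpoint, assuming the field names of the stock web UI API; the upscaler name must match a file in your ESRGAN folder:

```python
# Sketch of an /sdapi/v1/extra-single-image payload for the final upscale.
payload = {
    "image": "<base64 image>",           # your finished, inpainted picture
    "upscaling_resize": 2,               # resize to 1.5-2
    "upscaler_1": "None",
    "upscaler_2": "4x_NMKD_DigArt_10k",  # Upscaler 2 does the work here
    "extras_upscaler_2_visibility": 1.0,
}
```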
Choosing the right upscaler:
If you're doing realism, analog, photorealistic, or realistic 2.5D illustrations or oil paintings, choose 4x_NMKD_DigArt_10k
If you want a good all around upscaler that can do nearly any subject you choose Remacri
If you’re working on toons, anime, animated, line art, illustration or cell shaded you choose Lollypop
Each of these is unique, and so are their outputs. If you want to compare them all, simply change the pull-down, run the various types, then go to your extras folder and contrast / compare the finished results. I think you will find they are all fairly similar, but they each have their strengths and weaknesses.
For the more advanced upscale: Get yourself Ultimate SD Upscale
(click the link for a basic tutorial on how to get it and how to use it.)
Passes 98% of alexds9’s model test (tiny bit wonky on the architecture section, but I’d say it passes overall). You can see the test results HERE! The prompt information is HERE!
Warning: there is an NSFW section in this testing method, so be aware!