Increasingly complex mix I made for use in my own works. Version 1.41 is capable of some of my favorite gens so far,
and yet it pays the price with a bit of instability. The model can still give great results, but higher denoising can sometimes wash out details. If you run into backgrounds washing out or details being removed, try switching back to 1.3 for your img2img/inpainting pass. The instability seems to have been only on my end, but I'm leaving the instructions just in case, and I'll be working on fixing this for 1.5. The recipe details are published below, and the result is a model that is still stylized, yet intricately detailed and able to represent cultures from around the world. When doing characters with darker skin, I've noticed I don't have to fight Stable Diffusion anywhere near as much as I used to. It's very versatile and finally feels like it's more than the sum of its parts. I use it for concept art, and I feel it fills that role perfectly. Also, yes, the model can do NSFW.
1.0: 55% MeinaMixV9 : 45% Aurora
1.1: MothMix v1.0 + 0.3 NeverEnding Dream
1.2 (unpublished): MothMix v1.1 + 0.15 GeminiX_Mix
1.21: MothMix v1.2 + 0.15 MeinaMixV9
1.3: MothMix v1.21 + 0.15 Utopia
1.4 (unpublished): MothMix 1.3 + 0.15 ReV Animated
1.41: MothMix 1.4 + 0.05 23526-1655-下午好
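The "+ 0.3" / "+ 0.15" entries above read as weighted-sum merges in the usual checkpoint-merger sense: result = (1 - alpha) * A + alpha * B per weight tensor. Assuming every step really is a plain weighted sum at the stated alpha (an assumption on my part; add-difference merges would change the numbers), you can chain the recipe to see each base model's effective share in 1.41:

```python
# Effective share of each ingredient after chaining the recipe,
# assuming every step is a plain weighted-sum merge at the listed alpha.
shares = {"MeinaMixV9": 0.55, "Aurora": 0.45}  # version 1.0

steps = [
    ("NeverEnding Dream", 0.30),   # 1.1
    ("GeminiX_Mix", 0.15),         # 1.2
    ("MeinaMixV9", 0.15),          # 1.21
    ("Utopia", 0.15),              # 1.3
    ("ReV Animated", 0.15),        # 1.4
    ("23526-1655-下午好", 0.05),    # 1.41
]

for model, alpha in steps:
    # Every existing share is scaled down by (1 - alpha) ...
    shares = {name: s * (1 - alpha) for name, s in shares.items()}
    # ... and the incoming model gets the remaining alpha.
    shares[model] = shares.get(model, 0.0) + alpha

print({name: round(s, 3) for name, s in shares.items()})
```

Under that assumption, MeinaMixV9 ends up the largest single contributor (roughly 0.29), and nothing falls below a 5% share.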
In the interest of honesty, I will disclose that many of the pictures here have been cherry-picked, hand-edited, and re-generated. That's because the majority are working pieces of concept art for a story I'm writing. I don't want to give the impression that this model can't give good first-generation images (it can), but when using it you should expect to hi-res fix and/or inpaint at least once. None of the pictures posted here use a LoRA for style, but as part of my typical prompt they do use EasyNegative and BadHandV4, usually at reduced weights (0.6 and 0.4 respectively). Recently I started switching to BadDream exclusively, as I find it can sometimes give a bit more stability. The ones that do use LoRAs use them only for character-specific features (Moth Girls, Obsidian Skin, Black Sclera, etc.). If you're still having issues with dark-skinned characters, look into DuskFallCrew's Dark Fantasy LoRA; it's made for creating dark-skinned characters in anime models. All images were post-process upscaled with 4x_NMKD-Superscale-SP_178000_G to give a painted look. If you're wondering why all my pictures have extremely high CFG scales, it's because I use Dynamic Thresholding.
If you're doing anything where you want a very specific result from prompts alone, I highly recommend it. That said, in my testing, high CFG + Dynamic Thresholding didn't show much improvement over a plain low CFG (~7), so your mileage may vary. The parameters I use for all images are:
Mimic CFG Scale - 7 (rarely 3)
Mimic/CFG Scale Scheduler - Half Cosine Up
Min. Value of Mimic/CFG Scale Scheduler - 3
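For context, Dynamic Thresholding samples at the high CFG scale while constraining each step's output to roughly the dynamic range it would have had at the mimicked scale, and the scheduler ramps the mimic scale between the minimum (3 above) and the target over the course of sampling. A simplified plain-Python sketch of both ideas — illustrative only, not the extension's actual code, and the exact "Half Cosine Up" curve is my assumption:

```python
import math

def half_cosine_up(step, total_steps, lo, hi):
    """Ramp a scale from lo to hi along the rising half of a cosine.

    My reading of the 'Half Cosine Up' scheduler: the mimic scale starts
    at the configured minimum and rises toward the target by the end.
    """
    t = step / max(total_steps - 1, 1)
    return lo + (hi - lo) * (1.0 - math.cos(t * math.pi / 2))

def mimic_clamp(high_cfg, mimic_cfg, percentile=1.0):
    """Clamp high-CFG latent values (flattened to a list) to the
    magnitude range seen at the mimicked lower CFG scale -- a toy
    version of the clamping idea behind Dynamic Thresholding."""
    mags = sorted(abs(v) for v in mimic_cfg)
    limit = mags[min(int(percentile * (len(mags) - 1)), len(mags) - 1)]
    return [max(-limit, min(limit, v)) for v in high_cfg]
```

With a 30-step run and the settings above, `half_cosine_up(0, 30, 3, 7)` starts at the minimum of 3 and reaches 7 on the final step.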
Below are recommended generation settings based on what I typically use, along with the legal terms inherited from the component models, which thankfully share the same license. This mix uses CreativeML OpenRAIL-M, as its predecessors do, and since Meina disallows use of her model on generation services or paid services without her consent, this model is likewise barred from use in those contexts.
Enable Quantization in K samplers.
Hires.fix in some form is needed for high-quality images. You can either use the built-in txt2img hires fix, which I'd recommend running at 1.5-2x scale with 0.3 denoising for 15 steps, or do it manually, as I do, by running the best finished images through img2img.
Sampler: DPM++ 2M Karras: 25-35 steps.
CFG Scale: 7+. See note on Dynamic Thresholding above.
-Default: 512x512 txt2img -> 1024x1024 img2img, 0.3-0.65 Denoising
-512x768, 512x1024 for Portrait
-768x432, 1024x576 for Landscape (16:9)
Hires.fix: 4x_NMKD-Superscale-SP_178000_G or 4x_fatal_Anime_500000_G, with 15 steps at ~0.3 denoising.
Clip Skip: 1 or 2.
Negatives: ' (worst quality:2, low quality:2), (zombie, sketch, interlocked fingers, comic), '
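All of the suggested resolutions above keep both sides divisible by 8, which SD1.5 latents require; 768x432, for instance, is just 16:9 with the short side at 432. A small hypothetical helper (not part of the original workflow) that snaps an aspect ratio to such dimensions:

```python
def sd_dims(short_side, aspect_w, aspect_h, multiple=8):
    """Return (width, height) for a target aspect ratio, with the short
    side fixed and both sides rounded down to a multiple of 8, since
    SD1.5 latents need dimensions divisible by 8.

    Hypothetical convenience helper, not from the original post.
    """
    if aspect_w >= aspect_h:
        w, h = short_side * aspect_w / aspect_h, short_side
    else:
        w, h = short_side, short_side * aspect_h / aspect_w
    return (int(w) // multiple * multiple, int(h) // multiple * multiple)
```

For example, `sd_dims(432, 16, 9)` gives the 768x432 landscape size listed above, and `sd_dims(512, 2, 3)` gives the 512x768 portrait size.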
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce or share illegal or harmful outputs or content.
2. The authors claim no rights over the outputs you generate; you are free to use them, and you are accountable for their use, which must not go against the provisions set in the license.
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware that you have to include the same use restrictions as in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully).
Use of this model is entirely at the discretion of the user, who is free to choose whether or not to create NSFW content. It is important to note that the model itself does not contain explicit or inappropriate imagery that can be accessed with a single click. The purpose of sharing this model is not to showcase obscene material in a public forum, but to provide a tool for users to employ as they see fit. The decision to engage with SFW or NSFW content lies with the user and their own personal preferences.