Chef's Matrix Chair

Verified: SafeTensor
Type: Checkpoint Merge
Reviews: 425
Published: Aug 13, 2023
Base Model: SD 1.5
Usage Tips: Clip Skip: 1
Hash (AutoV2): ED28E8C57A

Greetings chefs!

I am Promptnanimous and Chef’s Matrix Chair is my first mix published anywhere.

I have been meticulously testing this mix for a month to find its secrets, and it is now time to share them with you! I hope you have fun with it and please share your discoveries with us!

If you enjoy my guide and my model, please consider following me, sharing the model with your friends, and buying me a ko-fi. I will be posting additional models, guides, and sample images in the future!

Samples for Chef’s Matrix Chair

My sample images are (almost) all made with txt2img and hires. fix, without any LoRA/LyCORIS/LoHa or ControlNet. This is to give you the most accurate impression of the model’s capabilities.

Below are this model’s attributes based on my observations after generating over 2k images with it across a variety of mediums, subjects & styles. You might find that it works better or worse than what I’ve listed here. Either way I hope you’ll let me know so I can learn and adjust!

Strengths:

  • People

  • Skin detail (with some help from specific prompting - check the example images, and try select negative TIs such as the CyberRealistic Negative in combination with the SkinPerfection Negative v1.5)

  • Decades (90s, 70s, etc)

  • Artists’ styles

  • Style blending

  • Semi-realism

  • Mediums (photo, cinestill 800, poster, hand-drawn illustration, cartoon, CGI, etc)

  • Angles (from above, full body, portrait, etc)

  • Beautiful women (some of you won’t be bored)

Does Ok:

  • Realism

  • Danbooru tokens

  • Stickers (with proper prompting - see example images)

  • Facial feature diversity

  • Poses (you can always use controlnet for better results for this)

  • Style of some movies and TV shows (see known examples in sample images)

  • Anime

  • Interiors & furniture (sometimes good, sometimes not)

Weaknesses:

  • Hands (you can find a prompt you like, then splash in the good hands beta lycoris)

  • Holding objects

  • Animals (seems hit or miss, something like “dog” doesn’t work but “great dane” does)

  • True photorealism - there are several wonderful models that do photorealism much better. This one can do some cool things but I won't claim it's the best!

  • Compatibility with some anime / drawn loras (it’s not completely incompatible but you might get some issues)

  • Sometimes people will have weird lines or marks on their faces depending on the prompt; I like to use adetailer to help get rid of that

  • NSFW acts (you may need to use a specialty lora or embedding but in any case YMMV)

Make sure to see the recommended resources, and look for the example images to see how I have used them.

If you are having trouble getting results similar to mine, please try the following:

  1. Make sure your settings are identical to what is in the metadata for the image. This includes ensuring you are using the same VAE, Clip Skip, Upscaler, Denoising Strength, and Token Merging settings.

  2. Make sure you aren’t accidentally using any extra extensions / add-ons. Ensure controlnet isn’t active, etc.

  3. If you are trying to keep a “style” while changing some of the details like the setting or character, try not to alter the order of the prompt tokens too much. You can get similar looking images with different character and setting details while preserving style through minimal editing of the prompt.

  4. If the face is the primary difference, you may have chosen an image where I used adetailer, so make sure to enable it (or install it first if you haven’t) and then copy the additional adetailer settings, which should be in the image metadata.

  5. Note that I use xformers, which makes my gens non-deterministic in some small details even when using the same seed. If your images are nearly identical except for very small details, this is the explanation. Nothing can be done about it; that’s just the way xformers works.

  6. If none of the above helps, send me a message and I’ll do my best to help.
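
For step 1 above, the settings are stored as plain text in the image metadata, ending in a single comma-separated settings line. As a rough illustration (my own simplified sketch, not the webui’s actual parser - real metadata can contain commas inside quoted values such as lora hashes), that line can be split into a dict like this:

```python
def parse_a1111_settings(parameters_text):
    """Parse the final 'Steps: 40, Sampler: UniPC, ...' line from
    Automatic1111-style image metadata into a dict of strings.
    Simplified: assumes no commas inside values."""
    last_line = parameters_text.strip().splitlines()[-1]
    settings = {}
    for part in last_line.split(", "):
        if ": " in part:
            key, value = part.split(": ", 1)
            settings[key] = value
    return settings


sample = (
    "a photo of a chef\n"
    "Negative prompt: lowres, bad anatomy\n"
    "Steps: 40, Sampler: UniPC, CFG scale: 7, Seed: 12345, "
    "Size: 768x512, Clip skip: 1"
)
print(parse_a1111_settings(sample))
```

Comparing two of these dicts side by side makes it much faster to spot the one setting you forgot to match.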

Frequently used generation settings

For a good balance of speed and quality, I use these settings for iterating on new prompt ideas quickly:

  • VAE: vae-ft-mse-840000-ema-pruned.safetensors

  • Clip Skip: 1

  • Sampler: UniPC

  • Steps: 40

  • Height: 512

  • Width: 768

  • CFG: anywhere from 6 to 8

  • Hires. Fix: Yes (though this is optionally “no” if you want to go faster while sacrificing clarity of detail)

  • Hires Steps: 20

  • Denoising Strength: 0.45 - 0.55 (depends on how impatient I am - set lower for slightly faster)

  • Upscale by: 1.5

  • Upscaler: Latent (bicubic antialiased) OR 4x_fatal_anime_500000_G OR 4x_foolhardy_Remacri

  • Token merging ratio: 0.5
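
To see what hires. fix actually renders with these numbers, here is a small helper (my own illustrative sketch, not webui code; exact rounding may differ between UI versions) that scales each side and snaps to a multiple of 8:

```python
def hires_dimensions(width, height, scale, multiple=8):
    """Approximate hires. fix output size: scale each side and
    round to the nearest multiple of 8 (SD latent granularity)."""
    snap = lambda side: int(round(side * scale / multiple)) * multiple
    return snap(width), snap(height)


# 768x512 upscaled by 1.5 renders at 1152x768
print(hires_dimensions(768, 512, 1.5))
```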

For maximum quality, but slower (I use this after finding a good batch of results with the settings above):

  • VAE: vae-ft-mse-840000-ema-pruned.safetensors

  • Clip Skip: 1

  • Sampler: DPM++ SDE Karras

  • Steps: 25

  • Height: 512

  • Width: 768

  • CFG: anywhere from 6 to 8 (some prompts can go higher for more stunning effect without unwanted artifacts)

  • Hires. Fix: Yes

  • Hires Steps: 20

  • Denoising Strength: 0.45-0.55 (closer to 0.45 can give a “softer” look, 0.55 is “sharper”)

  • Upscale by: 2

  • Upscaler: Latent (bicubic antialiased) OR 4x_fatal_anime_500000_G OR 4x_foolhardy_Remacri

  • Token merging ratio: 0.5

Recommended Areas for Experimentation

I did not spend much time experimenting with the below items. You might find some interesting or surprising results by trying some new things.

  • VAEs other than vae-ft-mse-840000-ema-pruned.safetensors

  • Samplers other than UniPC, DPM++ SDE Karras and DPM++ 2M Karras

  • Character and Action LORAs

  • Wider CFG scale (good results here will also depend on your choice of sampler and steps)

  • Clip skip 2 (or anything other than 1)

Random Tips

As mentioned above, I use different settings for iterating versus generating batches of images at high quality; there is a tradeoff between speed and quality. Slow experimentation is painful, but I don’t mind waiting for batches to generate if I expect more than 50% of them to be what I’m looking for.

I will also sometimes add in the good hands beta lycoris, and sometimes the detail tweaker lora if I want that. Since loras slow down generation speed, I try not to use them while iterating unless I am testing out the capabilities of a specific lora.

Negative TIs are generally not necessary for good gens with this model, but I choose to use some of them quite often, and they can make some really excellent stuff. I love using the CyberRealistic Negative in combination with the SkinPerfection Negative v1.5 when pushing gens towards photorealism with people in them. There are also several other negative TIs that I use in various combinations including verybadimagenegative v1.3, bad-hands-5, aid28, badv5, deformityv6, bad_pictures, bad-picture-chill-75v, and perhaps a few more I left off by mistake.

If you see some negative TIs that contain the characters "en_" these are from a set of custom negative TIs that have not been released yet. If there is enough demand for them I will try to convince the creator to publish them, or I might do it on his behalf.

Use “greyscale” in the negative with different attentions to control color.

Use “symmetry” in the negative for some more interesting results. I like to set attention to 1.3.

Use “plump” in the neg or pos to control the weight of your subject.
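
Putting the three tips above together, a negative prompt using A1111’s (token:weight) attention syntax might look like this (the weights here are just examples):

```
Negative prompt: (greyscale:1.3), (symmetry:1.3), (plump:1.1)
```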

Facial features - if you are getting gens where the faces all look the same, chances are you are using one or more tokens that influence how the face looks without realizing it. There’s not a lot you can do about this except spend time finding out what’s “locking in” the facial features, and then perhaps delay those tokens using prompt editing - something like [token:0.3]
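
In A1111’s prompt-editing syntax, a fractional number like the 0.3 in [token:0.3] is a share of total sampling steps, while a larger number is an absolute step. A rough sketch of that conversion (my own illustrative helper, not the webui’s actual code):

```python
def switch_step(when, total_steps):
    """Step at which an edited token activates: values below 1 are
    treated as a fraction of total steps, larger values as an
    absolute step index."""
    if when < 1:
        return int(round(when * total_steps))
    return int(when)


# [token:0.3] with 40 sampling steps kicks in around step 12
print(switch_step(0.3, 40))
```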

The above is also true for loras (except the bit about prompt editing; that won’t work on them). Sometimes a lora can influence aspects of your results beyond what it is intended to do. If you are getting unintended qualities in your gens, and prompting in the negative isn’t helping, it might be the lora you’re using. Depending on the lora, there may be nothing you can do to “fix” the unwanted characteristics.

Try to stick with a lower number of tokens in your prompts. It’s not a requirement, but it can help. You can get really cool results with a lot of tokens too; it is just more challenging to balance, and your results can change dramatically & in unintended ways the more tokens you have, due to the chunking logic that sends tokens to the text encoder in batches of 75.
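
The chunking behavior mentioned above can be sketched in a few lines (simplified: the real webui also pads each chunk and adds BOS/EOS tokens before encoding):

```python
def chunk_tokens(token_ids, chunk_size=75):
    """Split a flat list of token ids into consecutive chunks of at
    most chunk_size, mirroring how long prompts are fed to the text
    encoder 75 tokens at a time."""
    return [token_ids[i:i + chunk_size]
            for i in range(0, len(token_ids), chunk_size)]


# A 160-token prompt becomes chunks of 75, 75, and 10 tokens,
# which is why adding one token near a boundary can shift meaning.
print([len(c) for c in chunk_tokens(list(range(160)))])
```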

About Me

I have used Stable Diffusion v1.5 models and Automatic1111 daily for about 8 months, creating over 50k images in that time and doing my best to learn prompting techniques & settings through quick iteration.

My niche area of focus is in trying to get the best results out of models using only txt2img & hires. fix without other techniques such as img2img & inpainting. I enjoy the simplicity and efficiency that can be achieved by finding great settings that result in quality images. I also try to avoid loras since they slow down generation, but I will use them to get a specific style, or the fixer loras like good hands beta and detail tweaker.

If you enjoyed my guide and my model, please consider following me, sharing the model with your friends, and buying me a ko-fi.

I will be posting additional models, guides, and sample images in the future!