Outfit To Outfit - ControlNet (outfit2outfit)

Updated: Apr 27, 2024
Tags: clothing
Type: ControlNet
Reviews: 891
Published: Apr 26, 2024
Base Model: SD 1.5
Training Steps: 582,000
Hash (AutoV2): 6AEC1DE323
Creator: EmmyJ_

This is a ControlNet model! It requires the ControlNet extension!

Model Details

This model aims to let users modify what a subject is wearing in a given image while keeping the subject, background, and pose consistent.

I've produced good results in txt2img, img2img, and inpainting, both with images generated by Stable Diffusion and with photos I've taken myself.

Installation

Place the .safetensors file in ControlNet's 'models' directory. To use the model, select 'outfitToOutfit' under ControlNet Model and 'none' under Preprocessor.
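For an AUTOMATIC1111-style setup with the sd-webui-controlnet extension, the file placement looks roughly like this. The paths are assumptions about a typical install, not part of this model's instructions; adjust them to your own layout:

```shell
# Assumed layout of a typical AUTOMATIC1111 install with the
# sd-webui-controlnet extension; adjust WEBUI to your checkout.
WEBUI="stable-diffusion-webui"
MODELS="$WEBUI/extensions/sd-webui-controlnet/models"
mkdir -p "$MODELS"
# Stand-in for the file you downloaded (use your real download here).
touch outfitToOutfit.safetensors
cp outfitToOutfit.safetensors "$MODELS/"
```

After restarting the web UI (or refreshing the model list in the ControlNet panel), 'outfitToOutfit' should appear in the ControlNet Model dropdown.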

Tips for use

  • Images with a clearly defined subject tend to work better.

  • This model tends to work best at lower resolutions (close to 512px).

    • If you run into trouble at higher resolutions, try a first pass at a lower resolution, then use img2img (or txt2img with Hires. fix) at a lower denoising strength to upscale, while continuing to feed your original input image to this ControlNet model.

  • I recommend starting with CFG 2 or 3 when using ControlNet weight 1.

    • Higher CFG values when combined with high ControlNet weight can lead to burnt looking images.

    • Experiment with ControlNet Control Weight values of 0.4, 0.45, 0.5, 0.6, 0.8, and 1.

      • Lower weights allow more change; higher weights keep the output closer to the input.

      • Weights below 0.5 seem to lean more on the Stable Diffusion model, whereas 0.5 and above weight the ControlNet model more heavily.

  • When using img2img or inpainting, I recommend starting with a denoising strength of 1.

    • Then experiment with a denoising strength of 0.75.

  • When inpainting, I recommend trying "latent nothing" under Masked content.

  • Consider lowering the model's weight when generating higher resolution images.

    • The higher the resolution of the output image, the harder it tends to be to alter its content relative to the input image.

  • If the output isn't changing enough from the input, try increasing the weight of your prompts or decreasing the ControlNet unit's Control Weight.

  • This model can work well in combination with other ControlNet models, such as OpenPose.
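The starting points above can be summarized in a small sketch. The function below is purely hypothetical (not part of any tool or API); the numbers are the ones suggested in the tips:

```python
# Hypothetical helper that bundles the suggested starting settings for
# the outfitToOutfit ControlNet unit. Nothing here is a real API; it
# only restates the tips above as a single starting configuration.
def suggested_settings(width, height, mode="txt2img"):
    """Return a starting configuration dict for a given mode/resolution."""
    settings = {
        "preprocessor": "none",    # this model takes the raw input image
        "control_weight": 1.0,     # lower values (0.4-0.8) allow more change
        "cfg_scale": 2,            # CFG 2-3 pairs well with weight 1
    }
    if mode in ("img2img", "inpainting"):
        settings["denoising_strength"] = 1.0   # then experiment with 0.75
    if mode == "inpainting":
        settings["masked_content"] = "latent nothing"
    if max(width, height) > 512:
        # Higher resolutions resist change; start with a lower weight.
        settings["control_weight"] = 0.8
    return settings

print(suggested_settings(512, 512))
print(suggested_settings(768, 768, mode="inpainting"))
```

From there, adjust CFG, Control Weight, and denoising strength per the tips above depending on how much the output should diverge from the input.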