[FLUX | SDXL] Auto clothes inpainting

Updated: Dec 4, 2024

Tags: clothing

Type: Workflows

Published: Dec 3, 2024

Base Model: Flux.1 D

Hash: AutoV2 03F25EF569

Creator: Nopha_

The FLUX.1 [dev] Model is licensed by Black Forest Labs, Inc. under the FLUX.1 [dev] Non-Commercial License. Copyright Black Forest Labs, Inc.

IN NO EVENT SHALL BLACK FOREST LABS, INC. BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH USE OF THIS MODEL.

Hey everyone! Newbie ComfyUI user here. I struggled to find a good inpainting workflow for automatically masking and changing clothes, so after a lot of trial and error, here’s what I came up with. It's not perfect, but it works surprisingly well for me, and hopefully it’ll be useful to you too.

This workflow focuses on making image editing a bit more streamlined. It uses automatic segmentation to identify and mask elements like clothing and fashion accessories. Then it uses ControlNet to maintain image structure and a custom inpainting technique (based on Fooocus inpaint) to seamlessly replace or modify parts of the image (in the SDXL version).

Here’s a breakdown of the process:

  • Automatic Masking: Uses semantic segmentation to automatically create masks for clothes and fashion elements.

  • Image Preparation: Crops and prepares the image for editing.

  • Structure Preservation: Employs ControlNet to maintain image structure (SDXL version only; Flux didn't need it in my testing).

  • Fooocus-based Inpainting: Applies inpainting techniques adapted from Fooocus (SDXL).

  • Final Assembly: Stitches the edited image back together.
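To make the masking and final-assembly steps concrete, here's a minimal sketch in plain Python. The class IDs are assumptions for illustration only; the real id2label mapping for segformer_b2_clothes lives in the model's config.json, and in the actual workflow the label map comes from the segmentation node, not a hand-written array.

```python
# Assumed clothing class IDs (check the model's config.json id2label for the real ones).
CLOTHING_IDS = {4, 5, 6, 7}  # e.g. upper-clothes, skirt, pants, dress

def clothes_mask(seg):
    """Turn a 2D per-pixel label map into a binary mask (1 = clothing pixel)."""
    return [[1 if label in CLOTHING_IDS else 0 for label in row] for row in seg]

def composite(original, edited, mask):
    """Final assembly: take the edited pixel where the mask is set,
    keep the original pixel elsewhere (per-pixel stitch-back)."""
    return [[e if m else o
             for o, e, m in zip(orow, erow, mrow)]
            for orow, erow, mrow in zip(original, edited, mask)]

# Toy 3x3 example: class 4 ("upper-clothes") in the centre pixel only.
seg      = [[0, 0, 0], [0, 4, 0], [0, 0, 0]]
original = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
edited   = [[99, 99, 99], [99, 99, 99], [99, 99, 99]]

mask = clothes_mask(seg)
out = composite(original, edited, mask)
# Only the masked centre pixel is replaced; the rest of the image is untouched.
```

The same idea scales up: the segmentation node produces the label map, the mask selects which pixels the inpainting model is allowed to change, and the stitch-back keeps everything outside the mask pixel-identical to the source image.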

I hope this helps anyone facing similar challenges. Feel free to modify and improve it!

Workflows:

This page contains three workflow variations:

  1. SDXL: The primary workflow. Uses ControlNet for structure and Fooocus-based inpainting (in my opinion, it offers the best balance of speed and quality).

  2. Flux Fill: A workflow that uses the new Flux Fill model. Did not require ControlNet in my testing.

  3. Flux Fill GGUF: Similar to Flux Fill but utilizes the GGUF model format for potential performance benefits.

Getting Started:

You'll need to install the following custom nodes and models:

1. Custom Nodes:

The necessary nodes can be found through the ComfyUI Manager. However, some users have reported installation issues with the fashion masking nodes. Here's a guide:

  • Nodes Repository: https://github.com/StartHua/Comfyui_segformer_b2_clothes

  • Installation:

    1. Install the nodes via ComfyUI Manager.

    2. Navigate to your ComfyUI custom nodes directory: \ComfyUI\custom_nodes\Comfyui_segformer_b2_clothes

    3. Open a command prompt in that directory (on Windows, you can type cmd in the folder's address bar and press Enter).

    4. Run the following command: pip install -r requirements.txt

2. Segmentation Models:

You'll need the model files from Hugging Face (links below). These links only contain the files needed to run the nodes, not the nodes themselves. Download the model.safetensors, preprocessor_config.json, and config.json files and place them in the following directories:
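If you'd rather script the download, here's a stdlib-only sketch. The repo path is an assumption (the segformer_b2_clothes checkpoint commonly used with these nodes lives at mattmdjaga/segformer_b2_clothes); swap in the repo from the links on this page if yours differs, and point target_dir at the directory the nodes expect.

```python
import os
import urllib.request

# Assumed Hugging Face repo; replace with the repo from the links on this page if different.
BASE_URL = "https://huggingface.co/mattmdjaga/segformer_b2_clothes/resolve/main/"
FILES = ["model.safetensors", "preprocessor_config.json", "config.json"]

def fetch_segmentation_files(target_dir):
    """Download the three required files into target_dir."""
    os.makedirs(target_dir, exist_ok=True)
    for name in FILES:
        urllib.request.urlretrieve(BASE_URL + name, os.path.join(target_dir, name))
```

Calling fetch_segmentation_files("path/to/your/models/dir") grabs all three files in one go; manual download from the Hugging Face web UI works just as well.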

3. Fooocus Inpaint Models:

Feel free to ask if you have any questions. Happy inpainting!