Updated: Dec 4, 2024
The FLUX.1 [dev] Model is licensed by Black Forest Labs, Inc. under the FLUX.1 [dev] Non-Commercial License. Copyright Black Forest Labs, Inc.
IN NO EVENT SHALL BLACK FOREST LABS, INC. BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH USE OF THIS MODEL.
Hey everyone! Newbie ComfyUI user here. I struggled to find a good inpainting workflow for automatically masking and changing clothes, so after a lot of trial and error, here’s what I came up with. It's not perfect, but it works surprisingly well for me, and hopefully, it’ll be useful to you too.
This workflow focuses on making image editing a bit more streamlined. It uses automatic segmentation to identify and mask elements like clothing and fashion accessories. In the SDXL version, it then uses ControlNet to maintain image structure and an inpainting technique adapted from Fooocus inpaint to seamlessly replace or modify parts of the image.
Here’s a breakdown of the process:
Automatic Masking: Uses semantic segmentation to automatically create masks for clothes and fashion elements.
Image Preparation: Crops and prepares the image for editing.
Structure Preservation: Employs ControlNet to maintain image structure (SDXL version only; Flux didn't need it in my testing).
Fooocus-based Inpainting: Applies inpainting techniques adapted from Fooocus (SDXL).
Final Assembly: Stitches the edited image back together.
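To make the "Automatic Masking" step concrete, here's a minimal sketch of the post-processing it boils down to, assuming you already have a SegFormer-style per-pixel label map (in mattmdjaga/segformer_b2_clothes, ids 4–7 cover Upper-clothes, Skirt, Pants, and Dress; check the repo's config.json for the full label map). The function names are mine for illustration, not part of the workflow nodes:

```python
# Sketch of turning a segmentation label map into an inpainting mask.
# Label ids are from mattmdjaga/segformer_b2_clothes (an assumption --
# verify against the model's config.json).
import numpy as np

def labels_to_mask(label_map, target_ids):
    """Binary mask (255 = region to inpaint) for the chosen clothing classes."""
    return np.isin(label_map, list(target_ids)).astype(np.uint8) * 255

def expand_mask(mask, pixels):
    """Naive plus-shaped dilation, mirroring the workflow's 'mask expansion' knob."""
    out = mask > 0
    for _ in range(pixels):
        grown = out.copy()
        grown[1:, :] |= out[:-1, :]   # grow downward
        grown[:-1, :] |= out[1:, :]   # grow upward
        grown[:, 1:] |= out[:, :-1]   # grow right
        grown[:, :-1] |= out[:, 1:]   # grow left
        out = grown
    return out.astype(np.uint8) * 255

# Tiny synthetic example: pretend the model found a shirt in the middle.
label_map = np.zeros((6, 6), dtype=np.int64)
label_map[2:4, 2:4] = 4  # 4 = "Upper-clothes"
mask = labels_to_mask(label_map, {4, 5, 6, 7})
expanded = expand_mask(mask, 1)
print(mask[2, 2], mask[0, 0])      # 255 0
print(expanded[1, 2])              # 255 (mask grew by one pixel)
```

In the actual workflow, the segmentation node produces the label map and the expanded mask is what gets fed to the inpainting stage.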
I hope this helps anyone facing similar challenges. Feel free to modify and improve it!
Workflows:
This page contains three workflow variations:
SDXL: The primary workflow. Uses ControlNet for structure and Fooocus-based inpainting (in my opinion, it offers the best balance of speed and quality).
Flux Fill: A workflow that uses the new Flux Fill model. Does not require ControlNet in my testing.
Flux Fill GGUF: Similar to Flux Fill but utilizes the GGUF model format for potential performance benefits.
Getting Started:
You'll need to install the following custom nodes and models:
1. Custom Nodes:
The necessary nodes can be found through the ComfyUI Manager. However, some users have reported installation issues with the fashion masking nodes. Here's a guide:
Nodes Repository: https://github.com/StartHua/Comfyui_segformer_b2_clothes
Installation:
Install the nodes via ComfyUI Manager.
Navigate to your ComfyUI custom nodes directory: \ComfyUI\custom_nodes\Comfyui_segformer_b2_clothes
Open a command prompt in that directory (you can type cmd in the folder's address bar and press Enter).
Run the following command: pip install -r requirements.txt
2. Segmentation Models:
You'll need the model files from Hugging Face (links below). These links only contain the files needed to run the nodes, not the nodes themselves. Download the model.safetensors, preprocessor_config.json, and config.json files and place them in the following directories:
Segformer B2 Clothes:
Hugging Face Link: https://huggingface.co/mattmdjaga/segformer_b2_clothes
Place files in: \ComfyUI\models\segformer_b2_clothes
Segformer B3 Fashion:
Hugging Face Link: https://huggingface.co/sayeed99/segformer-b3-fashion
Place files in: \ComfyUI\models\segformer_b3_fashion
(The workflow includes a switch to select between these two segmentation models. They have different strengths and weaknesses, so try both if one doesn't work well. Remember to adjust the mask expansion as needed.)
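If you'd rather script the downloads than click through Hugging Face, the file-to-folder mapping above can be written out using Hugging Face's standard "resolve/main" URL pattern. This is just a stdlib-only sketch that prints what goes where (feed the URLs to curl, wget, or huggingface_hub); the file names are the ones listed above:

```python
# Print the download URL -> destination mapping for both segmentation models.
# URLs use Hugging Face's standard resolve/main pattern; destinations match
# the folders given above (relative to your ComfyUI install).
repos = {
    "mattmdjaga/segformer_b2_clothes": "ComfyUI/models/segformer_b2_clothes",
    "sayeed99/segformer-b3-fashion": "ComfyUI/models/segformer_b3_fashion",
}
files = ["model.safetensors", "config.json", "preprocessor_config.json"]

plan = [
    (f"https://huggingface.co/{repo}/resolve/main/{fname}", f"{dest}/{fname}")
    for repo, dest in repos.items()
    for fname in files
]
for url, dest in plan:
    print(url, "->", dest)
```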
3. Fooocus Inpaint Models:
Hugging Face Link: https://huggingface.co/lllyasviel/fooocus_inpaint (you need inpaint_v26.fooocus.patch, fooocus_lama.safetensors, and fooocus_inpaint_head.pth from this repository; place them in ComfyUI\models\inpaint)
Feel free to ask if you have any questions. Happy inpainting!