Foot/Shoe Segmentation - ADetailer

Verified: PickleTensor
Type: Detection
Stats: 92 · Reviews: 0
Published: Apr 1, 2026
Base Model: Other
Training: 100 epochs
Hash (AutoV2): FB8D3FE304

Pitpe11

Foot and Shoe Segmentation for ADetailer

Credits to mnemic for the article and to Anzhc for the ADetailer model (see those for more information).

Installation:

  1. Download the model

  2. Place the model into your ADetailer model folder

    • For WebUIs like Auto1111, Forge, and SD.next, it goes in models/adetailer

    • For ComfyUI, it goes in models/ultralytics/segm

  3. Start it up (do a full restart if it was already open) and select the model from the ADetailer tab/node. (Tip for the WebUI: under Settings → ADetailer you can raise the number of usable models to 15, which is very useful if you want to use more than 2 ADetailer models at once.)

  4. Make sure the detection thresholds, inpaint denoising strength, and other settings are appropriate for your images

  5. Generate the stompers!
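The placement step above can be sketched in a few lines of Python. The filename and install roots here are assumptions for illustration; adjust them to your actual download name and install location.

```python
import shutil
from pathlib import Path

# Install roots from the steps above (relative to the UI's install directory).
TARGETS = {
    "webui": Path("models/adetailer"),           # Auto1111 / Forge / SD.next
    "comfyui": Path("models/ultralytics/segm"),  # ComfyUI
}

def install(model_path: Path, ui: str) -> Path:
    """Copy the downloaded .pt file into the model folder for the chosen UI."""
    dest_dir = TARGETS[ui]
    dest_dir.mkdir(parents=True, exist_ok=True)  # create the folder if missing
    dest = dest_dir / model_path.name
    shutil.copy2(model_path, dest)
    return dest

# Example (hypothetical filename):
# install(Path("foot_shoe_seg.pt"), "comfyui")
```

After copying, restart the UI so the model list is rescanned.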

Notes: This should work on both realistic and anime-styled images. The model was trained on about 1.2k images with YOLO11n as the base. It contains 2 classes, feet and shoes, so in Comfy you can technically set it up as 2 detailer nodes, each using a different class with different settings; I don't think that's possible with the ADetailer extension on the WebUI. Since this dataset is about 1/10th the size of the ones for the other detailer models I trained, it's not strong enough to detect every single type of shoe. Feet should do well in most cases, since feet mostly look the same (though it might occasionally mistake hands for feet, but oh well).
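The two-class setup described above can be sketched as a simple routing step: split detections by class and apply per-class confidence thresholds, much like feeding each class to its own detailer node in Comfy. The class-index mapping here (0 = feet, 1 = shoes) is an assumption; check the model's actual class names before relying on it.

```python
# Assumed class-id mapping -- verify against the model's own names.
CLASS_NAMES = {0: "feet", 1: "shoes"}

def split_by_class(detections, thresholds):
    """Route detections into per-class groups with per-class cutoffs.

    detections: list of (class_id, confidence, mask) tuples.
    thresholds: per-class confidence cutoffs, e.g. {"feet": 0.3, "shoes": 0.5}.
    Returns {class_name: [(confidence, mask), ...]}.
    """
    groups = {name: [] for name in CLASS_NAMES.values()}
    for cls_id, conf, mask in detections:
        name = CLASS_NAMES.get(cls_id)
        if name is not None and conf >= thresholds[name]:
            groups[name].append((conf, mask))
    return groups
```

Each group can then be passed to its own inpaint pass with different denoise/prompt settings.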

Extra: Yes, the .pt file will likely show up as "infected" on various scanners. The full explanation is lengthy, so please download only from Civitai and use only the official extensions and nodes. If you encounter the model elsewhere, treat it as potentially infected. There is no way to export this model to .safetensors; that's just not how it works. There are other options, but most of you wouldn't be able to use them without messy workarounds, if it's possible at all.

Going by the training metrics, this model shouldn't perform well in theory, so I'll count this release as an early test before I add more images to the dataset. Please tell me what you think. Thanks! 😊