ComfyUI Workflow for Segmented Style Transfers

Type: Workflows
Published: Jun 18, 2024
Base Model: SDXL 1.0
Hash: AutoV2 1F5FEA4423

If you're looking for a more efficient way to change outfits in ComfyUI, this workflow is worth exploring. It combines the IPAdapter, Grounding DINO, and Segment Anything models to transfer styles onto precisely segmented regions of an image.

Workflow Overview

The workflow consists of three main groups:

  1. Basic Workflow: Sets up the foundation for the entire process, using an inpainting checkpoint alongside an SDXL checkpoint of your choice.

  2. IPAdapter: Transfers style from a reference image to the target image, pairing the CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors vision encoder with the ip-adapter-plus_sdxl_vit-h.safetensors IPAdapter model.

  3. Segmentation: Uses Grounding DINO to locate the object described by a textual prompt, with Segment Anything producing the precise mask (a standalone sketch of this step follows the list).
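
To make the segmentation step concrete, here is a minimal standalone sketch of the same text-prompt-to-mask technique in plain Python, assuming the Hugging Face transformers port of Grounding DINO and Meta's segment_anything package. The model IDs, file names, and input image are illustrative assumptions, not values read from the workflow itself.

```python
# Text-prompted masking, roughly what the workflow's segmentation group does:
# Grounding DINO turns a phrase into a bounding box, SAM turns the box into a mask.
# Assumes: pip install torch transformers segment_anything, plus a downloaded
# SAM checkpoint.
import numpy as np
import torch
from PIL import Image
from segment_anything import SamPredictor, sam_model_registry
from transformers import AutoModelForZeroShotObjectDetection, AutoProcessor

image = Image.open("person.png").convert("RGB")  # hypothetical input
prompt = "a dress."  # Grounding DINO expects lowercase phrases ending in "."

# 1) Grounding DINO: text prompt -> bounding boxes
processor = AutoProcessor.from_pretrained("IDEA-Research/grounding-dino-base")
dino = AutoModelForZeroShotObjectDetection.from_pretrained(
    "IDEA-Research/grounding-dino-base")
inputs = processor(images=image, text=prompt, return_tensors="pt")
with torch.no_grad():
    outputs = dino(**inputs)
detections = processor.post_process_grounded_object_detection(
    outputs, inputs.input_ids, target_sizes=[image.size[::-1]])[0]
best = detections["scores"].argmax()
box = detections["boxes"][best].numpy()  # highest-scoring box, XYXY pixels

# 2) Segment Anything: bounding box -> pixel-precise mask
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)
predictor.set_image(np.array(image))
masks, scores, _ = predictor.predict(box=box, multimask_output=False)
print(f"mask covers {int(masks[0].sum())} pixels")
```

Inside ComfyUI this chain is typically handled by a single custom node (for example from the comfyui_segment_anything pack), so the script above is only meant to show what happens under the hood.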

How to Use This Workflow

This workflow is suitable for creating virtual try-on experiences, batch processing images, or experimenting with different styles and objects. To get started, set up the nodes as described above, then feed in your target image and a style reference; adjust the settings and parameters until you get the result you want. For batch work, the workflow can also be driven headlessly through ComfyUI's HTTP API, as sketched below.
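
The sketch below shows one way to queue the workflow once per input image via ComfyUI's local HTTP API (POST /prompt on the default port 8188). It assumes the workflow has been exported in API format ("Save (API Format)" with dev mode enabled); the JSON file name and the LoadImage node id are hypothetical, so check your own export for the real ids.

```python
# Batch-queue a ComfyUI workflow: patch the input image per file, POST to /prompt.
# Assumes a local ComfyUI server and a workflow exported in API format.
import json
import urllib.request
from pathlib import Path

COMFY_URL = "http://127.0.0.1:8188"  # default ComfyUI address

def queue_workflow(workflow: dict) -> str:
    """Submit an API-format workflow and return its prompt_id."""
    data = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(
        f"{COMFY_URL}/prompt", data=data,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["prompt_id"]

workflow = json.loads(Path("segmented_style_transfer_api.json").read_text())
for image_path in sorted(Path("inputs").glob("*.png")):
    # "10" is a hypothetical LoadImage node id; the image must already sit
    # in ComfyUI's input folder, since LoadImage takes a file name, not a path.
    workflow["10"]["inputs"]["image"] = image_path.name
    print(f"queued {image_path.name} -> {queue_workflow(workflow)}")
```

Results land in ComfyUI's output folder as usual, and the status of each prompt_id can be checked via the /history endpoint.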

Additional Resources

If you're interested in learning more about this workflow, check out the video tutorial on Prompting Pixels.
