🔽DOWNLOAD🔽
https://civitai.com/models/1386234/comfyui-image-workflows
Overview
The archive contains the following workflows:
T2I_V14: is for creating new images with every possible feature (pose control, style transfer, refiner, etc.).
BasicT2I_V14: is a streamlined text-to-image experience without the advanced IP-Adapter, OpenPose, or Refiner steps, making it faster and simpler to use.
Upscaler_V14: is not for creating new images, but for improving and enlarging images you already have.
VPred_V14: is specifically for models that require v-prediction sampling, which is a different way the model interprets noise during generation.
Requirements
Most of the requirements can be downloaded directly in the ComfyUI Manager.
🟥T2I_V14
🟨BasicT2I_V14
🟩Upscaler_V14
🟦VPred_V14
Custom Nodes:
🟥🟨🟩🟦 ComfyUI-Manager (by Comfy-Org)
https://github.com/Comfy-Org/ComfyUI-Manager
🟥🟨🟩🟦 ComfyUI-Impact-Pack (by ltdrdata)
https://github.com/ltdrdata/ComfyUI-Impact-Pack
🟥🟨🟩🟦 ComfyUI-Impact-Subpack (by ltdrdata)
https://github.com/ltdrdata/ComfyUI-Impact-Subpack
🟥🟨🟩🟦 ComfyUI-Easy-Use (by yolain)
https://github.com/yolain/ComfyUI-Easy-Use
🟥🟨🟩🟦 rgthree-comfy (by rgthree)
https://github.com/rgthree/rgthree-comfy
🟥🟨🟩🟦 comfy-image-saver (by giriss) ⚠ Conflicting with ComfyUI_PRNodes
https://github.com/giriss/comfy-image-saver
🟥🟨🟩 ComfyUI Essentials (by cubiq)
https://github.com/cubiq/ComfyUI_essentials
🟥🟨🟩 ComfyUI_UltimateSDUpscale (by ssitu)
https://github.com/ssitu/ComfyUI_UltimateSDUpscale
🟥🟩 pysssss Custom Scripts (by pythongosssss)
https://github.com/pythongosssss/ComfyUI-Custom-Scripts
🟥 ComfyUI_IPAdapter_plus (by cubiq)
https://github.com/cubiq/ComfyUI_IPAdapter_plus
🟥 ComfyUI_Comfyroll_CustomNodes (by Suzie1)
https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes
🟥 ComfyUI's ControlNet Auxiliary Preprocessors (by Fannovel16)
https://github.com/Fannovel16/comfyui_controlnet_aux
🟥 z-tipo-extension (by KohakuBlueleaf)
https://github.com/KohakuBlueleaf/z-tipo-extension
🟥 comfyui-comfycouple (by asagi4)
https://github.com/Danand/ComfyUI-ComfyCouple
🟩 ComfyUI-WD14-Tagger (by pythongosssss)
https://github.com/pythongosssss/ComfyUI-WD14-Tagger
Models Checklist:
You only need to download the models for the features you plan to use, e.g. if you don't use the refiner, you don't need to download the sd_xl_refiner_1.0 model.
Checkpoints:
🟥🟨🟩🟦 Any SDXL/Pony/Illustrious/NoobAI model 📂/ComfyUI/models/checkpoints
https://civitai.com/models/1203050/fabricated-xl
https://civitai.com/models/827184/wai-nsfw-illustrious-sdxl
https://civitai.com/models/989367/wai-shuffle-noob
https://civitai.com/models/140272/hassaku-xl-illustrious
🟥 SDXL Refiner model 📂/ComfyUI/models/checkpoints
https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/blob/main/sd_xl_refiner_1.0.safetensors
ControlNet:
🟥🟨🟩 diffusers_xl_canny_full.safetensors 📂/ComfyUI/models/controlnet
https://huggingface.co/lllyasviel/sd_control_collection/blob/main/diffusers_xl_canny_full.safetensors
🟥 OpenPoseXL2.safetensors 📂/ComfyUI/models/controlnet/SDXL
https://huggingface.co/thibaud/controlnet-openpose-sdxl-1.0/blob/main/OpenPoseXL2.safetensors
IP-Adapter:
🟥 ip-adapter-plus_sdxl_vit-h.safetensors 📂/ComfyUI/models/ipadapter
https://huggingface.co/h94/IP-Adapter/blob/main/sdxl_models/ip-adapter-plus_sdxl_vit-h.safetensors
🟥 ip-adapter-faceid-plusv2_sdxl.bin 📂/ComfyUI/models/ipadapter
https://huggingface.co/h94/IP-Adapter-FaceID/blob/main/ip-adapter-faceid-plusv2_sdxl.bin
LoRA:
🟥 ip-adapter-faceid-plusv2_sdxl_lora.safetensors 📂/ComfyUI/models/loras/ipadapter
https://huggingface.co/h94/IP-Adapter-FaceID/blob/main/ip-adapter-faceid-plusv2_sdxl_lora.safetensors
CLIP Vision:
🟥 CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors 📂/ComfyUI/models/clip_vision
https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K/tree/main
🟥 clip_vision_g.safetensors 📂/ComfyUI/models/clip_vision
https://huggingface.co/stabilityai/control-lora/blob/main/revision/clip_vision_g.safetensors
Upscale Models:
🟥🟨🟩 4x_foolhardy_Remacri.pth (or any other 4x ESRGAN model) 📂/ComfyUI/models/upscale_models
https://huggingface.co/FacehugmanIII/4x_foolhardy_Remacri
Detectors (YOLO/SEG):
Place detector models in /ComfyUI/models/ultralytics/bbox or /segm. These workflows require detectors for:
🟥🟨🟩🟦 Hands (hand_yolov8s.pt) 📂/ComfyUI/models/ultralytics/bbox
https://huggingface.co/Bingsu/adetailer/blob/main/hand_yolov8s.pt
🟥🟨🟩🟦 Faces (face_yolov8m.pt) 📂/ComfyUI/models/ultralytics/bbox
https://huggingface.co/Bingsu/adetailer/blob/main/face_yolov8m.pt
🟥🟨🟩🟦 Eyes (Eyeful_v2-Paired.pt) 📂/ComfyUI/models/ultralytics/bbox
https://civitai.com/models/178518/eyeful-or-robust-eye-detection-for-adetailer-comfyui
🟥🟨🟩🟦 NSFW (ntd11_anime_nsfw_segm_v4_all.pt) 📂/ComfyUI/models/ultralytics/segm
https://civitai.com/models/1313556/anime-nsfw-detectionadetailer-all-in-one
Recommendations

Enable Autocompletion in the settings tab under pysssss. It's also recommended to press Manage Custom Words, load the default tag list, and press Save.

Also disable Link Visibility for better viewing clarity, since the graph can get quite cluttered.
T2I_V14

1. Control Center
The purpose of the Control Center is to centralize the most common settings and provide master on/off switches for the workflow's major features, making it easy to manage without navigating through the entire workflow.
Fast Groups Bypasser (rgthree): This is the core of the control system. Each toggle (e.g., Enable HiResFix) is linked to a group of nodes. Setting a toggle to "no" effectively removes that entire group from the generation process.
Seed (rgthree): A master seed control for all generation steps. (-1 means random)
Ckpt Select: Select the main SDXL model used for generation.
Width & Height: The base resolution of the generated images.
Recommended Values: [1:1] 1024x1024, [3:4] 896x1152, [5:8] 832x1216, [9:16] 768x1344, [9:21] 640x1536
Scheduler & Sampler: Select the main scheduler and sampler for the generation process.
Steps Base: The steps used for the base image sampler.
Steps Refiner: The steps used for the refinement process if enabled in the Fast Groups Bypasser node.
CFG Value: Guidance setting of how closely the generation follows your prompt.
POSITIVE & NEGATIVE: This is where you write your text prompt. You can use certain syntax to manually include embeddings, LoRAs, and wildcards. Alternatively, click "Click to add LoRA" or "Click to add Wildcard" at the bottom of the node to choose from a list of the available ones.
anime //normal tag
(anime) //equals to a weight of 1.1
((anime)) //equals to a weight of 1.21
(anime:0.5) //equals to a weight of 0.5 (keyword:factor)
[anime:cartoon:0.5] //prompt scheduling [keyword1:keyword2:factor] switches tag at 50%
embedding:Cool_Embedding
(embedding:Cool_Embedding:1.2) //change weight (same as for normal tags)
<lora:Cool_LoRA> //unspecified LoRA weight (default 1.0)
<lora:Cool_LoRA:0.75> //specified LoRA weight 0.75
<lora:Cool_LoRA.safetensors:0.75> //also possible to include the file extension
__coolWildcard__ //use wildcard
__other/otherWildcard__ //wildcard in a sub folder
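As a quick sanity check on the weight syntax above: each pair of parentheses multiplies a tag's attention weight by 1.1, so nesting compounds multiplicatively. A minimal sketch in plain Python (independent of ComfyUI, just illustrating the arithmetic):

```python
def nested_weight(depth: int, base: float = 1.1) -> float:
    """Effective weight of a tag wrapped in `depth` pairs of parentheses."""
    return base ** depth

print(nested_weight(1))             # (anime)   -> 1.1
print(round(nested_weight(2), 2))   # ((anime)) -> 1.21
```

This is why ((anime)) equals 1.21 rather than 1.2, and why an explicit (anime:0.5) is usually preferable once you need anything other than a small nudge.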
2. Refiner
The Refiner performs a second diffusion pass on the image using a dedicated refiner model. This step doesn't change the composition but enhances fine details, textures, and overall image sharpness. Enable/Disable the Refiner in the Fast Groups Bypasser node and adjust the number of steps in the Steps Refiner node.
3. HiRes Fix
The HiRes Fix performs an initial, controlled upscale of the generated image (base+refiner) before the more intensive detailing passes. This adds resolution and detail without straying from the original composition. Enable/Disable the HiRes Fix in the Fast Groups Bypasser node.
4. Detailer Chain/Pipeline
Uses specialized object detection models to find and redraw specific parts of the image with extreme detail. This is the core of the workflow's refinement process, tackling common problem areas sequentially. The output of one detailer becomes the input for the next. The Denoise strength controls how much freedom the model has to change the detected area. It's a value from 0.0 (no change) to 1.0 (total redraw). The bbox threshold is the confidence level the detection model must have before it acts.
HandDetailer➔BodyDetailer➔NSFWDetailer➔FaceDetailer➔EyesDetailer
You might also want to change the detailing prompt by expanding the Edit DetailerPipe node, e.g. replacing {hand, perfect hands|hand, good correct hands} with something that aligns better with your goal.
Enable/Disable the Detailers in the Fast Groups Bypasser node.
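Conceptually, the bbox threshold simply filters the detector's candidate boxes by confidence before any redrawing happens. A rough sketch with made-up detection data (the tuple layout here is illustrative, not the Impact Pack's actual data structure):

```python
# Hypothetical detections: (label, confidence) pairs from a YOLO-style model.
detections = [("hand", 0.82), ("hand", 0.27), ("face", 0.45)]

def keep_detections(dets, bbox_threshold=0.3):
    """Only boxes at or above the confidence threshold get a detailing pass."""
    return [d for d in dets if d[1] >= bbox_threshold]

print(keep_detections(detections))  # the 0.27 hand is skipped at threshold 0.3
```

Lowering bbox_threshold makes the detailer act on weaker candidates (more fixes, more false positives); the denoise value then decides how aggressively each surviving region is redrawn.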
5. Color Match
Corrects any color shifts that may have occurred during the numerous detailing and upscaling passes. This ensures the final image retains the intended color palette of the initial generation. Enable/Disable the Color Match in the Fast Groups Bypasser node.
6. Upscaler
Performs the final, large-scale upscaling with the help of a ControlNet after all detailing passes are complete, resulting in a high-resolution final image.
7. ComfyCouple
Applies different prompts to different regions of the image. This is useful for creating scenes with multiple, distinct characters or for specifying a character in one area and a background in another. It takes two positive conditioning inputs. In this workflow, it's set to "horizontal" with a division at 0.5. This means the main POSITIVE prompt applies to the left 50% of the image, and the 2nd Prompt (ComfyCouple) prompt applies to the right 50%.
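The horizontal split at 0.5 can be pictured as two complementary column masks over the image: one region for the main POSITIVE prompt, one for the 2nd Prompt. A toy illustration (plain Python, not the node's actual implementation):

```python
def couple_masks(width, ratio=0.5):
    """Partition image columns at `ratio`: columns left of the split take the
    main POSITIVE prompt, the rest take the 2nd Prompt (ComfyCouple)."""
    split = int(width * ratio)
    left = [col < split for col in range(width)]
    right = [col >= split for col in range(width)]
    return left, right

left, right = couple_masks(8)   # 8 columns, split at 0.5
print(sum(left), sum(right))    # 4 columns each
```

Because the two regions are complementary, every pixel is conditioned by exactly one of the two prompts; shared elements (style, quality tags) should therefore appear in both.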
8. OpenPose
Applies precise and complex character poses using a reference image, overriding the natural posing the model might otherwise choose.
9. IP-Adapter Style & Composition
Transfers the overall aesthetic including color palette, lighting, mood, and compositional elements from a reference image to the generated image.
10. IP-Adapter FaceID
Accurately transfers the facial identity from a reference portrait to the generated character. This is more precise than using a standard IP-Adapter for faces.
11. Clip Vision
Allows the model to "see" and understand an image in a way that's similar to how it understands text.
BasicT2I_V14

1. Control Center
The purpose of the Control Center is to centralize the most common settings and provide master on/off switches for the workflow's major features, making it easy to manage without navigating through the entire workflow.
Fast Groups Bypasser (rgthree): This is the core of the control system. Each toggle (e.g., Enable HiResFix) is linked to a group of nodes. Setting a toggle to "no" effectively removes that entire group from the generation process.
Seed (rgthree): A master seed control for all generation steps. (-1 means random)
Ckpt Select: Select the main SDXL model used for generation.
Width & Height: The base resolution of the generated images.
Recommended Values: [1:1] 1024x1024, [3:4] 896x1152, [5:8] 832x1216, [9:16] 768x1344, [9:21] 640x1536
Scheduler & Sampler: Select the main scheduler and sampler for the generation process.
Steps Base: The steps used for the base image sampler.
CFG Value: Guidance setting of how closely the generation follows your prompt.
POSITIVE & NEGATIVE: This is where you write your text prompt. You can use certain syntax to manually include embeddings, LoRAs, and wildcards. Alternatively, click "Click to add LoRA" or "Click to add Wildcard" at the bottom of the node to choose from a list of the available ones.
anime //normal tag
(anime) //equals to a weight of 1.1
((anime)) //equals to a weight of 1.21
(anime:0.5) //equals to a weight of 0.5 (keyword:factor)
[anime:cartoon:0.5] //prompt scheduling [keyword1:keyword2:factor] switches tag at 50%
embedding:Cool_Embedding
(embedding:Cool_Embedding:1.2) //change weight (same as for normal tags)
<lora:Cool_LoRA> //unspecified LoRA weight (default 1.0)
<lora:Cool_LoRA:0.75> //specified LoRA weight 0.75
<lora:Cool_LoRA.safetensors:0.75> //also possible to include the file extension
__coolWildcard__ //use wildcard
__other/otherWildcard__ //wildcard in a sub folder
2. HiRes Fix
The HiRes Fix performs an initial, controlled upscale of the generated image (base) before the more intensive detailing passes. This adds resolution and detail without straying from the original composition. Enable/Disable the HiRes Fix in the Fast Groups Bypasser node.
3. Detailer Chain/Pipeline
Uses specialized object detection models to find and redraw specific parts of the image with extreme detail. This is the core of the workflow's refinement process, tackling common problem areas sequentially. The output of one detailer becomes the input for the next. The Denoise strength controls how much freedom the model has to change the detected area. It's a value from 0.0 (no change) to 1.0 (total redraw). The bbox threshold is the confidence level the detection model must have before it acts.
HandDetailer➔BodyDetailer➔NSFWDetailer➔FaceDetailer➔EyesDetailer
You might also want to change the detailing prompt by expanding the Edit DetailerPipe node, e.g. replacing {hand, perfect hands|hand, good correct hands} with something that aligns better with your goal.
Enable/Disable the Detailers in the Fast Groups Bypasser node.
4. Color Match
Corrects any color shifts that may have occurred during the numerous detailing and upscaling passes. This ensures the final image retains the intended color palette of the initial generation. Enable/Disable the Color Match in the Fast Groups Bypasser node.
5. Upscaler
Performs the final, large-scale upscaling with the help of a ControlNet after all detailing passes are complete, resulting in a high-resolution final image.
VPred_V14

1. Control Center
The purpose of the Control Center is to centralize the most common settings and provide master on/off switches for the workflow's major features, making it easy to manage without navigating through the entire workflow.
Fast Groups Bypasser (rgthree): This is the core of the control system. Each toggle (e.g., Enable HiRes) is linked to a group of nodes. Setting a toggle to "no" effectively removes that entire group from the generation process.
Seed (rgthree): A master seed control for all generation steps. (-1 means random)
Ckpt Select: Select the main SDXL model used for generation.
Width & Height: The base resolution of the generated images.
Recommended Values: [1:1] 1024x1024, [3:4] 896x1152, [5:8] 832x1216, [9:16] 768x1344, [9:21] 640x1536
Steps Base: The steps used for the base image sampler.
Steps Hires: The steps used for the hires sampler.
CFG Value: Guidance setting of how closely the generation follows your prompt.
POSITIVE & NEGATIVE: This is where you write your text prompt. You can use certain syntax to manually include embeddings, LoRAs, and wildcards. Alternatively, click "Click to add LoRA" or "Click to add Wildcard" at the bottom of the node to choose from a list of the available ones.
anime //normal tag
(anime) //equals to a weight of 1.1
((anime)) //equals to a weight of 1.21
(anime:0.5) //equals to a weight of 0.5 (keyword:factor)
[anime:cartoon:0.5] //prompt scheduling [keyword1:keyword2:factor] switches tag at 50%
embedding:Cool_Embedding
(embedding:Cool_Embedding:1.2) //change weight (same as for normal tags)
<lora:Cool_LoRA> //unspecified LoRA weight (default 1.0)
<lora:Cool_LoRA:0.75> //specified LoRA weight 0.75
<lora:Cool_LoRA.safetensors:0.75> //also possible to include the file extension
__coolWildcard__ //use wildcard
__other/otherWildcard__ //wildcard in a sub folder
2. HiRes
The HiRes performs an initial, controlled upscale of the generated image (base) before the more intensive detailing passes. This adds resolution and detail without straying from the original composition. Enable/Disable the HiRes in the Fast Groups Bypasser node.
3. Detailer Chain/Pipeline
Uses specialized object detection models to find and redraw specific parts of the image with extreme detail. This is the core of the workflow's refinement process, tackling common problem areas sequentially. The output of one detailer becomes the input for the next. The Denoise strength controls how much freedom the model has to change the detected area. It's a value from 0.0 (no change) to 1.0 (total redraw). The bbox threshold is the confidence level the detection model must have before it acts.
HandDetailer➔BodyDetailer➔NSFWDetailer➔FaceDetailer➔EyesDetailer
You might also want to change the detailing prompt by expanding the Edit DetailerPipe node, e.g. replacing {hand, perfect hands|hand, good correct hands} with something that aligns better with your goal.
Enable/Disable the Detailers in the Fast Groups Bypasser node.
Upscaler_V14

1. Control Center
The purpose of the Control Center is to centralize the most common settings and provide master on/off switches for the workflow's major features, making it easy to manage without navigating through the entire workflow.
Fast Groups Bypasser (rgthree): This is the core of the control system. Each toggle (e.g., Enable Face ADetailer) is linked to a group of nodes. Setting a toggle to "no" effectively removes that entire group from the generation process.
Seed (rgthree): A master seed control for all generation steps. (-1 means random)
Ckpt Select: Select the main SDXL model used for generation.
Scheduler & Sampler: Select the main scheduler and sampler for the generation process.
CFG Value: Guidance setting of how closely the generation follows your prompt.
2. Upscaler
Performs an upscaling pass with the help of a ControlNet, resulting in a high-resolution image.
3. Detailer Chain/Pipeline
Uses specialized object detection models to find and redraw specific parts of the image with extreme detail. This is the core of the workflow's refinement process, tackling common problem areas sequentially. The output of one detailer becomes the input for the next. The Denoise strength controls how much freedom the model has to change the detected area. It's a value from 0.0 (no change) to 1.0 (total redraw). The bbox threshold is the confidence level the detection model must have before it acts.
HandDetailer➔BodyDetailer➔NSFWDetailer➔FaceDetailer➔EyesDetailer
You might also want to change the detailing prompt by expanding the Edit DetailerPipe node, e.g. replacing {hand, perfect hands|hand, good correct hands} with something that aligns better with your goal.
Enable/Disable the Detailers in the Fast Groups Bypasser node.
4. Color Match
Corrects any color shifts that may have occurred during the numerous detailing and upscaling passes. This ensures the final image retains the intended color palette of the initial generation. Enable/Disable the Color Match in the Fast Groups Bypasser node.
FAQ❔
Q1: How do I install all the required Custom Nodes?
A: The easiest way is to use the ComfyUI-Manager. After installing the Manager, you can use its Install Missing Custom Nodes feature, which will automatically find and install most of the nodes required by these workflows.
Q2: A model download link is broken. What should I do?
A: If a Hugging Face or Civitai link is down, try searching for the model filename directly on the respective sites (e.g., search for "4x_foolhardy_Remacri.pth" on the Hugging Face Hub). There are often alternative links provided.
Q3: How do I use a different LoRA?
A: In the POSITIVE or NEGATIVE prompt nodes, you can either manually type <lora:YourLoraName.safetensors:1.0> or, more easily, click the Click to add LoRA text at the bottom of the node. This will open a list of all your installed LoRAs, and you can click to add one with the correct syntax.
Q4: What are wildcards and how do I use them?
A: Wildcards are files that contain lists of words or phrases. When you use a wildcard in your prompt (e.g., __haircolor__), the workflow randomly selects one line from the corresponding haircolor.txt file for each generation. This is a powerful way to create a lot of variation automatically.
Installation: Place your wildcard .txt files in the ComfyUI/custom_nodes/ComfyUI-Impact-Pack/wildcards folder. You can create subdirectories for organization.
Usage: In the prompt node, type the filename surrounded by double underscores. You can also use the "Click to add Wildcard" helper at the bottom of the prompt node.
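Under the hood, resolving a wildcard amounts to "pick a random line from the matching .txt file". A simplified sketch (the inline word lists and the regex are illustrative; the Impact Pack's real resolver reads the wildcards folder and also handles nesting, weights, and options):

```python
import random
import re

# Illustrative wildcard contents; normally each list would be read from e.g.
# ComfyUI/custom_nodes/ComfyUI-Impact-Pack/wildcards/haircolor.txt
WILDCARDS = {"haircolor": ["blonde hair", "black hair", "red hair"]}

def resolve(prompt: str, rng=random) -> str:
    """Replace each __name__ token with a random line from its word list."""
    return re.sub(
        r"__([\w/]+)__",                              # also matches sub/folder names
        lambda m: rng.choice(WILDCARDS[m.group(1)]),
        prompt,
    )

random.seed(0)
print(resolve("1girl, __haircolor__, smiling"))
```

Each generation re-rolls every wildcard independently, which is why a fixed seed in the sampler alone does not make wildcard prompts reproducible.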
Q5: The Detailer nodes have Denoise and bbox threshold settings. What do they do?
A: Denoise: It controls how much the detailer can change the detected area. A low value (e.g., 0.2) makes subtle fixes, while a high value (e.g., 0.5) gives the model more freedom to redraw the area completely. Start low and increase if the details aren't fixed.
Bbox Threshold: This is the model's confidence score. A value of 0.3 means the model will only act if it's at least 30% sure it has correctly identified a hand, face, etc. If the detailer isn't activating, you can try lowering this value slightly.