🔽DOWNLOAD🔽
https://civitai.com/models/1386234/comfyui-image-workflows
Overview
The archive contains the following workflows:
T2I_V19: is for creating new images with every possible feature (pose control, style transfer, refiner, etc.).
BasicT2I_V19: is a streamlined text-to-image experience without the advanced IP-Adapter, OpenPose, or Refiner steps, making it faster and simpler to use.
Upscaler_V19: is not for creating new images, but for improving and enlarging images you already have.
VPred_V19: is specifically for models that require v-prediction sampling, which is a different way the model interprets noise during generation.
Detailer_V19: is not for creating new images, but for detailing, repairing, and editing (inpaint/outpaint) ones you already have.
Requirements
Most of the requirements can be downloaded directly in the ComfyUI Manager.
🟥 T2I_V19
🟨 BasicT2I_V19
🟩 Upscaler_V19
🟦 VPred_V19
🟪 Detailer_V19
Custom Nodes:
🟥🟨🟩🟦🟪 ComfyUI-Manager (by Comfy-Org)
https://github.com/Comfy-Org/ComfyUI-Manager
🟥🟨🟩🟦🟪 ComfyUI-Impact-Pack (by ltdrdata)
https://github.com/ltdrdata/ComfyUI-Impact-Pack
🟥🟨🟩🟦🟪 ComfyUI-Impact-Subpack (by ltdrdata)
https://github.com/ltdrdata/ComfyUI-Impact-Subpack
🟥🟨🟩🟦🟪 ComfyUI-Easy-Use (by yolain)
https://github.com/yolain/ComfyUI-Easy-Use
🟥🟨🟩🟦🟪 rgthree-comfy (by rgthree)
https://github.com/rgthree/rgthree-comfy
🟥🟨🟩🟦🟪 ComfyUI-Image-Saver (by alexopus)
https://github.com/alexopus/ComfyUI-Image-Saver
🟥🟨🟩🟪 ComfyUI Essentials (by cubiq)
https://github.com/cubiq/ComfyUI_essentials
🟥🟨🟩 ComfyUI_UltimateSDUpscale (by ssitu)
https://github.com/ssitu/ComfyUI_UltimateSDUpscale
🟥🟩 pysssss Custom Scripts (by pythongosssss)
https://github.com/pythongosssss/ComfyUI-Custom-Scripts
🟥🟪 ComfyUI-FBCNN (by Miosp)
https://github.com/Miosp/ComfyUI-FBCNN
🟥🟪 ComfyUI's ControlNet Auxiliary Preprocessors (by Fannovel16)
https://github.com/Fannovel16/comfyui_controlnet_aux
🟥 ComfyUI_IPAdapter_plus (by cubiq)
https://github.com/cubiq/ComfyUI_IPAdapter_plus
🟥 ComfyUI_Comfyroll_CustomNodes (by Suzie1)
https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes
🟥 z-tipo-extension (by KohakuBlueleaf)
https://github.com/KohakuBlueleaf/z-tipo-extension
🟥 ComfyUI-ppm (by pamparamm)
https://github.com/pamparamm/ComfyUI-ppm
🟩 ComfyUI-WD14-Tagger (by pythongosssss)
https://github.com/pythongosssss/ComfyUI-WD14-Tagger
Models Checklist:
You only need to download the models for the features you plan to use; e.g., if you don't use the refiner, you don't need to download the sd_xl_refiner_1.0 model.
Checkpoints:
🟥🟨🟩🟦🟪 Any SDXL/Pony/Illustrious/NoobAI model 📁 /ComfyUI/models/checkpoints
https://civitai.com/models/1203050/fabricated-xl
https://civitai.com/models/827184/wai-nsfw-illustrious-sdxl
https://civitai.com/models/989367/wai-shuffle-noob
https://civitai.com/models/140272/hassaku-xl-illustrious
🟥 SDXL Refiner model 📁 /ComfyUI/models/checkpoints/SDXL
https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/blob/main/sd_xl_refiner_1.0.safetensors
VAE:
🟥🟨🟦🟪 Any VAE model 📁 /ComfyUI/models/vae/SDXL
https://huggingface.co/stabilityai/sdxl-vae/blob/main/sdxl_vae.safetensors
ControlNet:
🟥🟨🟩🟪 control-lora-canny-rank256.safetensors 📁 /ComfyUI/models/controlnet/SDXL
https://huggingface.co/stabilityai/control-lora/blob/main/control-LoRAs-rank256/control-lora-canny-rank256.safetensors
control-lora-depth-rank256.safetensors 📁 /ComfyUI/models/controlnet/SDXL
https://huggingface.co/stabilityai/control-lora/blob/main/control-LoRAs-rank256/control-lora-depth-rank256.safetensors
🟥 noobaiXLControlnet_openposeModel.safetensors 📁 /ComfyUI/models/controlnet
https://civitai.com/models/962537?modelVersionId=1077649 (for NoobAI or Illustrious)
or
🟥 OpenPoseXL2.safetensors 📁 /ComfyUI/models/controlnet/SDXL
https://huggingface.co/thibaud/controlnet-openpose-sdxl-1.0/blob/main/OpenPoseXL2.safetensors (for SDXL)
🟪 noobaiInpainting_v10 📁 /ComfyUI/models/controlnet/
https://civitai.com/models/1376234/noobai-inpainting-controlnet
IP-Adapter:
🟥 noobIPAMARK1_mark1.safetensors 📁 /ComfyUI/models/ipadapter
https://civitai.com/models/1000401/noob-ipa-mark1 (for NoobAI or Illustrious)
or
🟥 ip-adapter-plus_sdxl_vit-h.safetensors 📁 /ComfyUI/models/ipadapter
https://huggingface.co/h94/IP-Adapter/blob/main/sdxl_models/ip-adapter-plus_sdxl_vit-h.safetensors (for SDXL)
🟥 ip-adapter-faceid-plusv2_sdxl.bin 📁 /ComfyUI/models/ipadapter
https://huggingface.co/h94/IP-Adapter-FaceID/blob/main/ip-adapter-faceid-plusv2_sdxl.bin
CLIP Vision:
🟥 CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors 📁 /ComfyUI/models/clip_vision
https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K/tree/main
🟥 clip_vision_g.safetensors 📁 /ComfyUI/models/clip_vision
https://huggingface.co/stabilityai/control-lora/blob/main/revision/clip_vision_g.safetensors
Upscale Models:
🟥🟨🟩🟪 4x_foolhardy_Remacri.pth (or any other 4x ESRGAN model) 📁 /ComfyUI/models/upscale_models
https://huggingface.co/FacehugmanIII/4x_foolhardy_Remacri
Detectors (YOLO/SEG):
Place detector models in ComfyUI/models/ultralytics/bbox or /segm. These workflows require detectors for:
🟥🟨🟩🟦🟪 Hands (hand_yolov9c.pt) 📁 /ComfyUI/models/ultralytics/bbox
https://huggingface.co/Bingsu/adetailer/blob/main/hand_yolov9c.pt
🟥🟨🟩🟦🟪 Faces (face_yolov9c.pt) 📁 /ComfyUI/models/ultralytics/bbox
https://huggingface.co/Bingsu/adetailer/blob/main/face_yolov9c.pt
🟥🟨🟩🟦🟪 Eyes (Eyeful_v2-Paired.pt) 📁 /ComfyUI/models/ultralytics/bbox
https://civitai.com/models/178518/eyeful-or-robust-eye-detection-for-adetailer-comfyui
🟥🟨🟩🟦🟪 NSFW (ntd11_anime_nsfw_segm_v4_all.pt) 📁 /ComfyUI/models/ultralytics/segm
https://civitai.com/models/1313556/anime-nsfw-detectionadetailer-all-in-one
🟪 Adetailer for Text / Speech bubbles / Watermarks 📁 /ComfyUI/models/ultralytics/segm
https://civitai.com/models/753616/adetailer-for-text-speech-bubbles-watermarks
Recommendations

Enable Autocompletion in the settings tab under pysssss. It's also recommended to press Manage Custom Words, load the default tag list, and press Save.

Also disable Link Visibility for better viewing clarity, since the graph can get quite cluttered.
Prompt Syntax
anime //normal tag
(anime) //equals a weight of 1.1
((anime)) //equals a weight of 1.21 (1.1 × 1.1)
(anime:0.5) //equals a weight of 0.5 (keyword:factor)
[anime:cartoon:0.5] //prompt scheduling [keyword1:keyword2:factor], switches tags at 50% of the steps
embedding:Cool_Embedding //use an embedding
(embedding:Cool_Embedding:1.2) //change weight (same as for normal tags)
<lora:Cool_LoRA> //unspecified LoRA weight (defaults to 1.0)
<lora:Cool_LoRA:0.75> //LoRA weight set to 0.75
<lora:Cool_LoRA.safetensors:0.75> //the file extension may also be included
__coolWildcard__ //use a wildcard
__other/otherWildcard__ //wildcard in a subfolder
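The nested-parentheses weighting above is just repeated multiplication. A small sketch of the arithmetic, assuming the standard 1.1 base factor used by ComfyUI-style prompt weighting:

```python
# Sketch: each pair of parentheses multiplies the tag's weight by 1.1
# (assumption: the conventional base factor; an explicit (tag:w) overrides this).
def nested_weight(depth: int, base: float = 1.1) -> float:
    """Effective weight of a tag wrapped in `depth` pairs of parentheses."""
    return round(base ** depth, 4)

print(nested_weight(1))  # (anime)     -> 1.1
print(nested_weight(2))  # ((anime))   -> 1.21
print(nested_weight(3))  # (((anime))) -> 1.331
```

Beyond two or three levels it is usually clearer to write the weight explicitly, e.g. (anime:1.33).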
Image Resolutions
Recommended Values: [1:1] 1024x1024, [3:4] 896x1152, [5:8] 832x1216, [9:16] 768x1344, [9:21] 640x1536, [1:1] 1536x1536, [2:3] 1024x1536, [13:24] 832x1536
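If you want to try other resolutions, these presets follow a pattern you can check yourself; a quick sketch, assuming the common SDXL convention that both dimensions should be multiples of 64 (the first five presets also target roughly one megapixel):

```python
# Sanity-check the recommended resolutions: every dimension should be a
# multiple of 64 for SDXL-family models (assumption: the usual community rule).
RESOLUTIONS = [(1024, 1024), (896, 1152), (832, 1216), (768, 1344),
               (640, 1536), (1536, 1536), (1024, 1536), (832, 1536)]

for w, h in RESOLUTIONS:
    assert w % 64 == 0 and h % 64 == 0, f"{w}x{h} is not a multiple of 64"
    print(f"{w}x{h}: {w * h / 1_000_000:.2f} MP")
```

The same check is a useful filter when picking a custom Width & Height in the Control Center.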
T2I_V19

1. Control Center
The purpose of the Control Center is to centralize the most common settings and provide master on/off switches for the workflow's major features, making it easy to manage without navigating through the entire workflow.
Fast Groups Bypasser (rgthree): This is the core of the control system. Each toggle (e.g., Enable HiResFix) is linked to a group of nodes. Setting a toggle to "no" effectively removes that entire group from the generation process.

Seed (rgthree): A master seed control for all generation steps (-1 means random).

Ckpt Select: Select the main SDXL model used for generation in the Ckpt Names node.

Width & Height: The base resolution of the generated images.

Scheduler & Sampler: Select the main scheduler and sampler for the generation process. They act as the engine that guides the noise-removal process. Different samplers can produce slightly different results in terms of style and convergence speed.

Steps Base: The steps used for the base image sampler.

Steps Refiner: The steps used for the refinement process if enabled in the Fast Groups Bypasser node.

Batch Size: Determines how many images are generated in a single run when you press "Queue Prompt".

CLIPSetLastLayer: A setting that tells the model to ignore the final layers of the text interpretation model (CLIP). This can sometimes lead to more aesthetic or creative results, especially with anime-style models.

CFG Value: A setting that controls how strongly the AI should adhere to your text prompt. Higher values mean stricter adherence.

POSITIVE & NEGATIVE: This is where you write your text prompt. You can use certain syntax to manually include embeddings, LoRAs, and wildcards. Alternatively, you can click "Click to add LoRA" and "Click to add Wildcard" at the bottom of the node to choose from a list of the available ones.

2. Refiner
The Refiner performs a second diffusion pass on the image using a dedicated refiner model. This step doesn't change the composition but enhances fine details, textures, and overall image sharpness.
Enable/Disable the Refiner in the Fast Groups Bypasser node and adjust the number of steps in the Steps Refiner node.

3. HiRes Fix
The HiRes Fix performs an initial, controlled upscale of the generated image (base + refiner) before the more intensive detailing passes. This adds resolution and detail without straying from the original composition.
Enable/Disable the HiRes Fix in the Fast Groups Bypasser node.

4. Detailer Chain/Pipeline
Uses specialized object detection models to find and redraw specific parts of the image with extreme detail. This is the core of the workflow's refinement process, tackling common problem areas sequentially. The output of one detailer becomes the input for the next. The Denoise strength controls how much freedom the model has to change the detected area. It's a value from 0.0 (no change) to 1.0 (total redraw). The bbox threshold is the confidence level the detection model must reach before it acts.
HandDetailer → BodyDetailer → NSFWDetailer → FaceDetailer → EyesDetailer
You might also want to change the detailing prompt, e.g. {hand, perfect hands|hand, good correct hands}, into something that aligns better with your goal.
Enable/Disable the Detailers in the Fast Groups Bypasser node.
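The {option A|option B} pattern in the detailing prompt picks one alternative at random per run. A rough stdlib sketch of that behavior (the actual node's resolution logic may differ, e.g. in how it handles nested groups):

```python
import random
import re

def expand_alternations(prompt: str, rng: random.Random) -> str:
    """Replace each {a|b|...} group with one randomly chosen option.
    A sketch of the dynamic-prompt alternation used in the detailer prompts."""
    def pick(match: re.Match) -> str:
        return rng.choice(match.group(1).split("|"))
    # Non-greedy, single-level: matches the innermost braces only.
    return re.sub(r"\{([^{}]*)\}", pick, prompt)

rng = random.Random()
print(expand_alternations("{hand, perfect hands|hand, good correct hands}", rng))
```

Each generation resolves the group independently, so a batch will mix both phrasings.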

5. Color Match
Corrects any color shifts that may have occurred during the numerous detailing and upscaling passes. This ensures the final image retains the intended color palette of the initial generation.
Enable/Disable the Color Match in the Fast Groups Bypasser node.

6. Upscaler
Performs the final, large-scale upscaling with the help of a ControlNet after all detailing passes are complete, resulting in a high-resolution final image.
Enable/Disable the Upscaler in the Fast Groups Bypasser node.

7. OpenPose
Applies precise and complex character poses using a reference image, overriding the natural posing the model might otherwise choose. Find the LoadImage node and upload the image you want to use as the structural basis for your generation. If the image is already preprocessed, set the Use Img Preprocessor? node to off. (Resource for poses: https://github.com/a-lgil/pose-depot)
Enable/Disable OpenPose in the Fast Groups Bypasser node.

8. Any ControlNet
A flexible system designed to let you apply any type of ControlNet to your image generation through a few simple dropdown menus. Find the LoadImage node and upload the image you want to use as the structural basis for your generation. If the image is already preprocessed, set the Use Img Preprocessor? node to off. (Resource for poses: https://github.com/a-lgil/pose-depot)
Go to the ControlNetPreprocessorSelector node, click the dropdown menu, and choose the type of control you want to apply.
Go to the ControlNetLoader node, click the dropdown, and select the model file that corresponds to your chosen preprocessor.
Enable/Disable Any ControlNet in the Fast Groups Bypasser node.

9. IP-Adapter Style & Composition
Transfers the overall aesthetic, including color palette, lighting, mood, and compositional elements, from a reference image to the generated image.
Enable/Disable Style & Composition in the Fast Groups Bypasser node.

10. IP-Adapter FaceID
Accurately transfers the facial identity from a reference portrait to the generated character. This is more precise than using a standard IP-Adapter for faces.
Enable/Disable FaceID in the Fast Groups Bypasser node.

11. Clip Vision
Allows the model to "see" and understand an image in a way that's similar to how it understands text.
Enable/Disable Clip Vision in the Fast Groups Bypasser node.

12. Compression Removal
This is a JPEG artifact/compression removal tool.
Enable/Disable Compression Removal in the Fast Groups Bypasser node.

13. Separate VAE
This group acts as a switch, allowing you to choose between using the VAE that's built into your main model (.safetensors checkpoint) or using a standalone, high-quality VAE file. This component is responsible for translating the image from the AI's internal "latent space" into a visible image (pixels).
Enable/Disable Separate VAE in the Fast Groups Bypasser node.

14. Regional Prompting
Regional prompting allows for detailed control over image generation by applying different text prompts to specific areas of an image.
The process begins by defining the different regions of your image using a simple, color-coded image. The Load Image node is used to import an image with three distinct colors: red, green, and blue (if you open the image in the MaskEditor and select the paint brush, you can also adjust the areas manually). Each color corresponds to a specific area that will receive its own unique prompt. The POSITIVE & NEGATIVE nodes act as a global prompt. I would also recommend applying a ControlNet for better control of the composition.
Enable/Disable Regional Prompting in the Fast Groups Bypasser node.
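If you don't have a color-coded image handy, you can generate a starting point programmatically. A minimal stdlib-only sketch that writes red/green/blue vertical thirds as a plain-text PPM file (the thirds layout, the 300x300 size, and the regions.ppm filename are arbitrary illustration choices; convert the file to PNG, or repaint the areas in the MaskEditor, before loading it):

```python
# Sketch: a three-region color map (red | green | blue vertical thirds)
# written as a plain-text PPM image, so no image editor or extra library
# is needed to produce a first draft of the region image.
WIDTH, HEIGHT = 300, 300
COLORS = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]  # red, green, blue

# Build one row of pixels: each x-column gets the color of its third.
row = " ".join(
    " ".join(map(str, COLORS[min(x * len(COLORS) // WIDTH, len(COLORS) - 1)]))
    for x in range(WIDTH)
)

with open("regions.ppm", "w") as f:
    f.write(f"P3\n{WIDTH} {HEIGHT}\n255\n")  # PPM header: format, size, max value
    f.write("\n".join([row] * HEIGHT) + "\n")
```

Any image with three clearly separated solid red/green/blue areas works the same way; the layout just has to roughly match where you want each regional prompt to apply.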

15. Using VPred Model

Enable this when loading a v-prediction (VPred) checkpoint in this workflow; make sure to also select the sampler/scheduler accordingly.
Enable/Disable VPred Model? in the Fast Groups Bypasser node.

16. Background Remover

This node isolates the main subject of an image by removing its background, which is useful for creating transparent PNGs or compositing subjects onto new backdrops. The rem_mode dropdown allows you to select from different background removal models, such as BEN2. You can also choose to add a solid color background and refine the foreground edges for a cleaner cutout. It works better with images that have sharp or well-defined edges.
Enable/Disable Remove Background in the Fast Groups Bypasser node.
BasicT2I_V19

1. Control Center
The purpose of the Control Center is to centralize the most common settings and provide master on/off switches for the workflow's major features, making it easy to manage without navigating through the entire workflow.
Fast Groups Bypasser (rgthree): This is the core of the control system. Each toggle (e.g., Enable HiResFix) is linked to a group of nodes. Setting a toggle to "no" effectively removes that entire group from the generation process.

Seed (rgthree): A master seed control for all generation steps (-1 means random).

Ckpt Select: Select the main SDXL model used for generation.

Width & Height: The base resolution of the generated images.

Scheduler & Sampler: Select the main scheduler and sampler for the generation process.

Steps Base: The steps used for the base image sampler.

Batch Size: Determines how many images are generated in a single run when you press "Queue Prompt".

CLIPSetLastLayer: Fine-tunes how the AI model interprets your text prompt.

CFG Value: Guidance setting for how closely the generation follows your prompt.

POSITIVE & NEGATIVE: This is where you write your text prompt. You can use certain syntax to manually include embeddings, LoRAs, and wildcards. Alternatively, you can click "Click to add LoRA" and "Click to add Wildcard" at the bottom of the node to choose from a list of the available ones.
2. HiRes Fix
The HiRes Fix performs an initial, controlled upscale of the generated image (base) before the more intensive detailing passes. This adds resolution and detail without straying from the original composition.
Enable/Disable the HiRes Fix in the Fast Groups Bypasser node.

3. Detailer Chain/Pipeline
Uses specialized object detection models to find and redraw specific parts of the image with extreme detail. This is the core of the workflow's refinement process, tackling common problem areas sequentially. The output of one detailer becomes the input for the next. The Denoise strength controls how much freedom the model has to change the detected area. It's a value from 0.0 (no change) to 1.0 (total redraw). The bbox threshold is the confidence level the detection model must reach before it acts.
HandDetailer → BodyDetailer → NSFWDetailer → FaceDetailer → EyesDetailer
You might also want to change the detailing prompt, e.g. {hand, perfect hands|hand, good correct hands}, into something that aligns better with your goal.
Enable/Disable the Detailers in the Fast Groups Bypasser node.

4. Color Match
Corrects any color shifts that may have occurred during the numerous detailing and upscaling passes. This ensures the final image retains the intended color palette of the initial generation.
Enable/Disable the Color Match in the Fast Groups Bypasser node.

5. Upscaler
Performs the final, large-scale upscaling with the help of a ControlNet after all detailing passes are complete, resulting in a high-resolution final image.

6. Separate VAE
This group acts as a switch, allowing you to choose between using the VAE that's built into your main model (.safetensors checkpoint) or using a standalone, high-quality VAE file.
Enable/Disable it in the Fast Groups Bypasser node.

VPred_V19

1. Control Center
The purpose of the Control Center is to centralize the most common settings and provide master on/off switches for the workflow's major features, making it easy to manage without navigating through the entire workflow.
Fast Groups Bypasser (rgthree): This is the core of the control system. Each toggle (e.g., Enable HiRes) is linked to a group of nodes. Setting a toggle to "no" effectively removes that entire group from the generation process.

Seed (rgthree): A master seed control for all generation steps (-1 means random).

Ckpt Select: Select the main SDXL model used for generation.

Width & Height: The base resolution of the generated images.

Scheduler & Sampler: Select the main scheduler and sampler for the generation process.

Steps Base: The steps used for the base image sampler.
Steps Hires: The steps used for the hires sampler.
Batch Size: Determines how many images are generated in a single run when you press "Queue Prompt".

CLIPSetLastLayer: Fine-tunes how the AI model interprets your text prompt.

CFG Value: Guidance setting for how closely the generation follows your prompt.
POSITIVE & NEGATIVE: This is where you write your text prompt. You can use certain syntax to manually include embeddings, LoRAs, and wildcards. Alternatively, you can click "Click to add LoRA" and "Click to add Wildcard" at the bottom of the node to choose from a list of the available ones.
2. HiRes
The HiRes performs an initial, controlled upscale of the generated image (base) before the more intensive detailing passes. This adds resolution and detail without straying from the original composition. Enable/Disable the HiRes in the Fast Groups Bypasser node.

3. Detailer Chain/Pipeline
Uses specialized object detection models to find and redraw specific parts of the image with extreme detail. This is the core of the workflow's refinement process, tackling common problem areas sequentially. The output of one detailer becomes the input for the next. The Denoise strength controls how much freedom the model has to change the detected area. It's a value from 0.0 (no change) to 1.0 (total redraw). The bbox threshold is the confidence level the detection model must reach before it acts.
HandDetailer → BodyDetailer → NSFWDetailer → FaceDetailer → EyesDetailer
You might also want to change the detailing prompt, e.g. {hand, perfect hands|hand, good correct hands}, into something that aligns better with your goal.
Enable/Disable the Detailers in the Fast Groups Bypasser node.

4. Separate VAE
This group acts as a switch, allowing you to choose between using the VAE that's built into your main model (.safetensors checkpoint) or using a standalone, high-quality VAE file.
Enable/Disable it in the Fast Groups Bypasser node.

Upscaler_V19

1. Control Center
The purpose of the Control Center is to centralize the most common settings and provide master on/off switches for the workflow's major features, making it easy to manage without navigating through the entire workflow.
Fast Groups Bypasser (rgthree): This is the core of the control system. Each toggle (e.g., Enable Face ADetailer) is linked to a group of nodes. Setting a toggle to "no" effectively removes that entire group from the generation process.

Seed (rgthree): A master seed control for all generation steps (-1 means random).

Ckpt Select: Select the main SDXL model used for generation.

Scheduler & Sampler: Select the main scheduler and sampler for the generation process.

CLIPSetLastLayer: Fine-tunes how the AI model interprets your text prompt.

CFG Value: Guidance setting for how closely the generation follows your prompt.

2. Upscaler
Performs an upscaling process with the help of a ControlNet, resulting in a high-resolution image.

3. Detailer Chain/Pipeline
Uses specialized object detection models to find and redraw specific parts of the image with extreme detail. This is the core of the workflow's refinement process, tackling common problem areas sequentially. The output of one detailer becomes the input for the next. The Denoise strength controls how much freedom the model has to change the detected area. It's a value from 0.0 (no change) to 1.0 (total redraw). The bbox threshold is the confidence level the detection model must reach before it acts.
HandDetailer → BodyDetailer → NSFWDetailer → FaceDetailer → EyesDetailer
You might also want to change the detailing prompt, e.g. {hand, perfect hands|hand, good correct hands}, into something that aligns better with your goal.
Enable/Disable the Detailers in the Fast Groups Bypasser node.

4. Color Match
Corrects any color shifts that may have occurred during the numerous detailing and upscaling passes. This ensures the final image retains the intended color palette of the initial generation. Enable/Disable the Color Match in the Fast Groups Bypasser node.

Detailer_V19

1. Control Center
The purpose of the Control Center is to centralize the most common settings and provide master on/off switches for the workflow's major features, making it easy to manage without navigating through the entire workflow.
Fast Groups Bypasser (rgthree): This is the core of the control system. Each toggle (e.g., Enable Face ADetailer) is linked to a group of nodes. Setting a toggle to "no" effectively removes that entire group from the generation process.

Seed (rgthree): A master seed control for all generation steps (-1 means random).

Ckpt Select: Select the main SDXL model used for generation.

Scheduler & Sampler: Select the main scheduler and sampler for the generation process.

CLIPSetLastLayer: Fine-tunes how the AI model interprets your text prompt.

CFG Value: Guidance setting for how closely the generation follows your prompt.

2. HiRes Fix
The HiRes Fix performs an initial, controlled upscale of the generated image (base) before the more intensive detailing passes. This adds resolution and detail without straying from the original composition.
Enable/Disable the HiRes Fix in the Fast Groups Bypasser node.

3. Mask ADetailer

Uses the specified mask to redraw parts of the image with detail. To edit the mask, right-click the loaded image and select Open in MaskEditor.

Enable/Disable the Mask Detailer in the Fast Groups Bypasser node.
4. Detailer Chain/Pipeline
Uses specialized object detection models to find and redraw specific parts of the image with extreme detail. This is the core of the workflow's refinement process, tackling common problem areas sequentially. The output of one detailer becomes the input for the next. The Denoise strength controls how much freedom the model has to change the detected area. It's a value from 0.0 (no change) to 1.0 (total redraw). The bbox threshold is the confidence level the detection model must reach before it acts.
HandDetailer → BodyDetailer → NSFWDetailer → FaceDetailer → EyesDetailer
You might also want to change the detailing prompt, e.g. {hand, perfect hands|hand, good correct hands}, into something that aligns better with your goal.
Enable/Disable the Detailers in the Fast Groups Bypasser node.

5. Color Match
Corrects any color shifts that may have occurred during the numerous detailing and upscaling passes. This ensures the final image retains the intended color palette of the initial generation. Enable/Disable the Color Match in the Fast Groups Bypasser node.

6. Compression Removal
This is a JPEG artifact/compression removal tool.
Enable/Disable Compression Removal in the Fast Groups Bypasser node.

7. Separate VAE
This group acts as a switch, allowing you to choose between using the VAE that's built into your main model (.safetensors checkpoint) or using a standalone, high-quality VAE file.
Enable/Disable Separate VAE in the Fast Groups Bypasser node.

8. Inpaint

Inpainting is used to repair, remove, or replace a specific part of an image. By providing a mask, you can tell the model exactly which area to regenerate. To use it, right-click the loaded image and select "Open in MaskEditor" to paint over the area you want to change. The model will then use your text prompt to fill in the masked section, allowing you to remove unwanted objects or alter details like clothing and facial expressions.
Enable/Disable Inpaint in the Fast Groups Bypasser node.
9. Outpaint

Outpainting expands the canvas of an image, generating new content beyond its original borders to create a larger scene. This process is useful for extending a scene, adjusting the composition, or adding new elements. You use a node to add padding around the original image, defining the areas to be filled. The model then generates new imagery in these extended areas based on your prompt, seamlessly blending it with the existing picture.
Enable/Disable Outpaint in the Fast Groups Bypasser node.
10. Watermark Remover

This group is designed to automatically detect and remove watermarks from an image. It uses a specialized model to detect watermarks and inpaints over them to effectively erase them, which can be useful for cleaning up images.
Enable/Disable Watermark Remover in the Fast Groups Bypasser node.
FAQ
Q1: How do I install all the required Custom Nodes?
A: The easiest way is to use the ComfyUI-Manager. After installing the Manager, you can use its Install Missing Custom Nodes feature, which will automatically find and install most of the nodes required by these workflows.
Q2: A model download link is broken. What should I do?
A: If a Hugging Face or Civitai link is down, try searching for the model filename directly on the respective sites (e.g., search for "4x_foolhardy_Remacri.pth" on the Hugging Face Hub). There are often alternative links provided.
Q3: How do I use a different LoRA?
A: In the POSITIVE or NEGATIVE prompt nodes, you can either manually type <lora:YourLoraName.safetensors:1.0> or, more easily, click the Click to add LoRA text at the bottom of the node. This will open a list of all your installed LoRAs, and you can click to add one with the correct syntax.
Q4: What are wildcards and how do I use them?
A: Wildcards are files that contain lists of words or phrases. When you use a wildcard in your prompt (e.g., __haircolor__), the workflow randomly selects one line from the corresponding haircolor.txt file for each generation. This is a powerful way to create a lot of variation automatically.
Installation: Place your wildcard .txt files in the ComfyUI/custom_nodes/ComfyUI-Impact-Pack/wildcards folder. You can create subdirectories for organization.
Usage: In the prompt node, type the filename surrounded by double underscores. You can also use the "Click to add Wildcard" helper at the bottom of the prompt node.
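Conceptually, wildcard resolution is just picking a random line from the named .txt file. A stdlib sketch of that idea (the haircolor.txt name and its contents are illustrative only; the Impact Pack's real implementation also handles subfolders, nesting, and other features):

```python
import random
from pathlib import Path

# Create a toy wildcard file (assumption: one option per line, as in the
# Impact Pack's wildcards folder).
Path("haircolor.txt").write_text("blonde hair\nblack hair\nred hair\n")

def resolve_wildcard(name: str, rng: random.Random) -> str:
    """Pick one non-empty line from <name>.txt, like __name__ in a prompt."""
    options = Path(f"{name}.txt").read_text().splitlines()
    return rng.choice([o for o in options if o.strip()])

print(resolve_wildcard("haircolor", random.Random()))
```

Each queued generation resolves the wildcard again, which is why a batch of images will vary even with a fixed prompt.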
Q5: The Detailer nodes have Denoise and bbox threshold settings. What do they do?
A: Denoise: Controls how much the detailer can change the detected area. A low value (e.g., 0.2) makes subtle fixes, while a high value (e.g., 0.5) gives the model more freedom to redraw the area completely. Start low and increase if the details aren't fixed.
Bbox Threshold: This is the detection model's confidence score. A value of 0.3 means the model will only act if it's at least 30% sure it has correctly identified a hand, face, etc. If the detailer isn't activating, you can try lowering this value slightly.
🔽DOWNLOAD🔽
https://civitai.com/models/1386234/comfyui-image-workflows
