
(Not so?) Simple-ish Illustrious Workflow


Type: Workflows
Published: Jun 6, 2025
Base Model: Illustrious
Hash: AutoV2 F23348F989

Gladas

I recommend the "Actually Simple" version if you don't want to play with extra settings and want to "keep it simple". Sometimes simple is best! (really)

  • This is the best option if you don't feel comfortable installing a ton of extra stuff and/or have not updated your ComfyUI and any associated python dependencies in a long time.

  • It uses very little in the way of custom nodes, so there are fewer things that can break.

  • It should be the second version listed above the sample images.

If you don't mind all the extra options, then please see below for any updates related to the other workflow versions.

The best practice, if you don't have a 2nd ComfyUI install for testing/breaking, is to create your own workflows from scratch. While I would like my workflow to work for everyone, that is not the reality of ComfyUI. Maybe someday there will be an onsite workflow option.

Updating anything in ComfyUI comes with the risk of breaking something. (Like how one of the eye detection models I use no longer works. There is a workaround, but it involves bypassing a security feature, which I don't recommend, via the model-whitelist.txt file usually located in the ComfyUI/user/default/ComfyUI_Impact-Subpack folder.)

Note: I use the "portable" version of ComfyUI instead of the Desktop App version. If you are using the app version, you may have to jump through more hoops to get things working.

v11b changes:

Ultimate SD Upscale:

  • Added nodes to sync the settings for mask blur and tile padding with seam fix mask blur and seam fix padding. This should help with any seam issues when using seam fix mode.

  • This is effectively just making it easier to adjust the settings. Any issues native to the node would have to be brought to the project owner of Ultimate SD Upscale.

  • Additionally tested with a lora enabled to ensure the behavior was the same.
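For anyone curious, the sync is just mirroring values from one place to the other. A rough Python sketch of the idea (the names here are mine for illustration, not the actual node/widget names):

```python
# Hypothetical sketch of what the sync nodes do: drive the seam fix
# parameters from the main USDU parameters instead of setting them
# independently, so one adjustment covers both.
def sync_usdu_settings(mask_blur: int, tile_padding: int) -> dict:
    return {
        "mask_blur": mask_blur,
        "tile_padding": tile_padding,
        # seam fix values mirror the main values
        "seam_fix_mask_blur": mask_blur,
        "seam_fix_padding": tile_padding,
    }

settings = sync_usdu_settings(mask_blur=8, tile_padding=32)
```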

Tweaked the settings on the Image Filter Adjustments nodes:

  • Adjusted the edge enhance settings to 0. You can manually adjust these to your preferred flavor.

Dropped the Math node for CFG on USDU and the Detailer nodes.

  • CFG will now be the same across the whole workflow by default.

Default settings will have the extra features disabled for ease of use.

  • Dynamic Thresholding, FreeU_V2, Detail Daemon, etc.

v11a changes:

Moved the Global Seed node closer to the 1st KSampler (SamplerCustomAdvanced) node.

Added a few more notes throughout the workflow to hopefully help anyone who is not familiar with how some nodes work.

Tweaked some of the settings on the various Image Filter Adjustments nodes and left the majority of them expanded in case people want to adjust them.

Added single ControlNet groups for both USDU groups.

  • These utilize the ControlNet Union model which can be downloaded via the Manager.

Removed the nodes that added text for the before & after results in the Image Comparer nodes.

  • These were behaving in a way that would cause the random image results to not populate in the Comparer nodes until the workflow was fully finished.

  • Kind of a shame; it was a nice QoL thing for me.

Initial example images were generated using:

Sampler: Euler Ancestral

Scheduler: Beta

Steps: 28

CFG: 6

Upscaled via 2 USDU Groups

Upscale Model: RealESRGAN_x4Plus Anime 6B (link to OpenModelDB)

Steps: 10

Denoise: 0.2

Detailers for Face and Eyes

3 cycles each

Face Detection Model: Anzhc Face seg 1024 v2 y8n.pt (link to huggingface page)

Eye Detection Model: Eyeful | Robust eye detection for Adetailer / ComfyUI - v2 Individual version. (civitai page link)

These will be the settings on the saved workflow file.

v11:

Rebuilt the workflow from scratch (again). This version isn't necessarily better than v10b, but it is faster and uses fewer nodes.

This workflow took some inspiration from one of Olivio Sarikas' workflows as mentioned in this video. Namely, the Image Filter Adjustments node from was-node-suite-comfyui.

  • I made heavy use of this node throughout the workflow. If you find your image results to be way too sharp, these nodes can be bypassed without issue (confirmed).

  • This has effectively made the Detailer nodes pointless for me, but I have kept them in the workflow just in case.

Dropped Detail Daemon Sampler from the workflow.

  • Multiple Sigmas (stateless) is still there and can be bypassed with no issues.

Fake Iterative Upscale group is not present in this version. (Yay for faster speed!)

I stopped using comfyui-art-venture with this version of the workflow.

  • After updating it to the latest version, I kept having issues with it not loading after restarting comfyui, even after a clean install of the custom nodes.

  • I switched over to ComfyUI-post-processing-nodes which has the same Color Correct node.

One of the reasons this workflow came about was that I was looking for alternative upscale methods. I looked into SUPIR and APISR. USDU, our old friend, is still in the workflow.

  • SUPIR was a total flop for me, probably because I don't really do realistic images.

  • APISR is aimed at anime upscaling, but it turns out the upscale models it uses work just fine without special nodes. (At least 4x_APISR_DAT_GAN_generator does.)

Overall, the generation speed is faster (for me it takes about 3 minutes from start to finish on a 5060 16GB) and I am happy with the results.

Custom Nodes used in this workflow as of v11:

ComfyUI-Manager

ComfyUI-Impact-Pack

ComfyUI-Impact-Subpack (the Ultralytics Provider node was moved to here)

ComfyUI-Crystools

ComfyUI_Comfyroll_CustomNodes

ComfyUI-Custom-Scripts

ComfyUI-Detail-Daemon

comfyui-lama-remover

rgthree-comfy

ComfyUI_UltimateSDUpscale

ComfyUI-KJNodes

was-node-suite-comfyui

ComfyUI-Image-Saver

ComfyMath

ComfyUI-Inspire-Pack

ComfyUI-ppm

ComfyUI-post-processing-nodes

If I missed anything or listed something not on the workflow any longer, then my bad.

You should get a warning about what is missing when you load up the workflow. Impact-Pack seems to need a manual git pull for updates. (At least for me).

If you disconnect any of the noodles or bypass something you should not have, then something will probably break and result in errors. Try loading the original workflow JSON or drop your last successfully generated image if you run into this and don't want to troubleshoot.

Feel free to remove and add to the workflow to fit what you want. If my workflow helped you in any way, then great.

Settings will need to be adjusted to fit your preferences unless you are trying to generate images like mine. The default settings are not meant for speed.

This has been tested on the models mentioned in my "Suggested Resources" below. YMMV. Try playing with the settings/prompts to find your happy place. Current settings are to my tastes. Adjust to your tastes/preferences accordingly!

Why do I use the Color Correct?

  • Upscaling with KSampler/Ultimate SD Upscale strips/alters the color from the original image (at least for me). Plus I just like to give the finished image some extra contrast.

Watermark Removal

Why do I have this in the workflow?

  • While rare, they do still happen and I don't like having to give up on a good image because of a watermark ruining it for me.

Watermark Removal in action:

Altering any of the settings in the Watermark portion of the workflow will probably break the watermark removal. The only things that should be changed there are:

  • Detection Threshold (higher = less detection, lower = more aggressive detection)

  • Watermark Detection Model (use whichever one you prefer)

  • Text in the BBOX Detection node.

  • Steps, scheduler, denoise on the Watermark Remover node can be adjusted.

  • on v10a I have had to drop the denoise down to 0.01 in some cases.

  • The gaussblur radius can be adjusted up or down on the Big lama Remover (IMG) node.

  • Anything else, I do not recommend messing with in the watermark portion of the workflow. I didn’t come up with it and cannot advise you on what all the buttons, numbers, and settings will do. Change them at your own risk.

Upscale Model:

You should be able to use whatever upscale model you like best. I primarily use ModernSpanimationV1 now.

FaceDetailer Models:

If I recall, Impact Pack includes the needed models to get you started, but if you want something else, you can find more by using ComfyUI Manager's "Model Manager" option. The two types of models needed will be "Ultralytics" and "sam".

The face model I use is no longer on civitai. Looks like the person got banned (last I checked).

VAE Model:

I usually use the normal SDXL VAE or whatever is baked into the checkpoint models.

Asking for help:

I'm just someone who uses ComfyUI and I am not a developer. If you have technical questions, I probably can't help you. I make heavy use of Google when I don't know why something breaks or want to know what something does. However, I will try to help you to the best of my abilities with any non-technical questions.

  • In the cases where you need/want help: please do not be vague.

  • Provide links to screenshots (if possible).

  • Don't be a jerk. (I am not obligated to help you. This workflow is intended for my personal use, and I am sharing it freely).

v10b changes are minor:

Replaced all the Color Match nodes with Color Correct nodes.

Will be exploring other options for the upscale portion of the workflow.

v10a changes:

Disclaimer: this version is not aimed at generating images fast.

Prompt boxes for Detailer nodes have been added to the prompting sections of the workflow.

  • These can be left blank, but adding prompts describing what is being detailed can help the results.

Re-added some of the missing notes from v9f to this version.

Fixed the Color Correct output connection at the end of the workflow.

Added another SamplerCustomAdvanced node to the Fake Iterative Upscale group.

  • The solution for white edges appearing on the bottom and right sides of the images was to change the upscale settings to increments of ".25". As a result, I added this additional node so that this group (with default settings) will upscale 1.25>1.5>1.75>2.0.
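If you want to sanity-check the increments yourself, here is the math as a quick Python sketch (using 832x1216 as an example starting size; my example numbers, and my assumption that whole-pixel intermediate sizes are what avoids the white-edge padding artifacts):

```python
# Illustrative math for the Fake Iterative Upscale chain. With
# SDXL-friendly dimensions (multiples of 64), scaling in 0.25
# increments keeps every intermediate size a whole number of pixels.
def upscale_chain(width, height, factors=(1.25, 1.5, 1.75, 2.0)):
    return [(int(width * f), int(height * f)) for f in factors]

stages = upscale_chain(832, 1216)
# 832x1216 -> 1040x1520 -> 1248x1824 -> 1456x2128 -> 1664x2432
```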

Added Multiply Sigmas (stateless) nodes to each Detail Daemon group.

I tried to offset the loss of color due to upscaling using Color Match (not Color Correct) nodes throughout the workflow.

  • This is a known issue(?) when upscaling in pixel space versus latent space. (or so I have read).

  • This has yielded better (less buggy) results than just having it at the end of the workflow.

Changed the Image Comparer nodes to show the "after" result first.

v10 changes:

Your choice of upscale model will greatly impact the time this version of the workflow takes to finish.

This version of the workflow adds 3 additional SamplerCustomAdvanced nodes to do something similar to Iterative Upscale.

  • The reason for doing it this way is that the nodes for Iterative Upscaling do not support custom samplers without doing some extra work (which I am not willing to do).

  • Default settings for this are set to result in a 2x upscale going from 1.33>1.66>2.0 from the original image size.

  • This group of nodes can be bypassed and the workflow will be similar to how it was prior to this version. Alternatively, you can also bypass the USDU groups if you are satisfied with the results from this group.

Detail Daemon nodes are now individually assigned to each node that can use them.

Added an option for IMG2IMG at the start of the flow.

  • It will try and automatically calculate the closest starting SDXL resolution based on the image size.
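In case it helps, here is roughly how a "closest SDXL starting resolution" calculation can work: snap to the standard SDXL bucket with the nearest aspect ratio. This is my own sketch, not the actual node logic; the bucket list is just the common SDXL training resolutions:

```python
# Common SDXL resolution buckets (width, height).
SDXL_BUCKETS = [
    (1024, 1024), (896, 1152), (1152, 896),
    (832, 1216), (1216, 832), (768, 1344),
    (1344, 768), (640, 1536), (1536, 640),
]

def closest_sdxl_resolution(width, height):
    # Pick the bucket whose aspect ratio is nearest to the input's.
    target = width / height
    return min(SDXL_BUCKETS, key=lambda wh: abs(wh[0] / wh[1] - target))

closest_sdxl_resolution(1080, 1920)  # portrait input -> (768, 1344)
```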

Changed the bookmarks on the workflow.

  • They are now labelled and range from 1 to 9. rgthree-comfy must be installed for these to function.

Duplicate FreeU_V2 and Dynamic Thresholding nodes were removed.

Color Match was removed.

v9f changes:

Added Detail Daemon back to the workflow.

  • The settings I have them at are what has worked well for me.

  • Just like FreeU V2, it is not a one-size-fits-all solution.

  • This will be toggled off by default.

  • I have Detail Daemon nodes placed in their own groups and attached to the KSampler and both USDU nodes individually.

  • No funky switches this time. You can just use the Fast Groups Bypasser node to toggle them off or bypass them normally.

Notes regarding USDU (Ultimate SD Upscale)

  • Based on the FAQ from the original USDU project, they recommend using a denoise setting of 0.1 to 0.2 for upscaling and up to 0.35 for enhancing the image.

  • The FAQ also covers what the other settings can do. Just please be aware the original project was made for A1111, so some things may be different.

  • The ComfyUI version has some instructions, but points to the same FAQ.

  • For technical questions regarding USDU, I suggest reaching out to one of those two project pages.

The first few example images were generated with both FreeU V2 and Detail Daemon fully enabled across the workflow.

v9e changes:

Small adjustments. Did some more cleaning up of the noodles. The remaining group names should now be fully uncovered and readable.

Added an Upscale>Downscale group before USDU1.

  • This can help the output come out better, but YMMV.

  • The default settings will have it run the image through an upscale model and then downscale it to the original image size before feeding the image into USDU.

  • Alternatively, you could change the Upscale setting to 2 and change the USDU to 1. This would basically make it behave the same as the USDU (No Upscale) node. The output does come out different if you do it this way, but feel free to test it out yourself.
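Rough size arithmetic for this group, assuming a 4x upscale model and the default 2x USDU that follows (just the bookkeeping, not actual image processing; the 832x1216 start is my example):

```python
# Sketch of the Upscale>Downscale group's effect on image size:
# model upscale, downscale back to the original, then USDU upscales.
def upscale_downscale_chain(w, h, model_scale=4, usdu_scale=2):
    upscaled = (w * model_scale, h * model_scale)  # upscale model pass
    restored = (w, h)                              # downscale to original size
    final = (w * usdu_scale, h * usdu_scale)       # fed into USDU
    return upscaled, restored, final

upscale_downscale_chain(832, 1216)
# -> ((3328, 4864), (832, 1216), (1664, 2432))
```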

Added FreeU_V2 to the Dynamic Thresholding groups.

  • I don't recommend using FreeU unless you know what you're doing or are willing to learn about it on your own.

  • It can help, but it's not a one-size-fits-all solution for every model.

Added Concat Conditionings for the Positive Prompt.

  • From the ComfyUI wiki: Imagine that you are cooking a dish, "conditioning_to" is the basic recipe, and "conditioning_from" are some additional seasonings or condiments. The ConditioningConcat class is like a tool that helps you add these seasonings to the recipe, making your dish more colorful and rich.

  • The usual Positive Prompt on the ImpactWildcardEncode node will act as the "conditioning_to" and the text node below it will act as the "conditioning_from".

  • I tried using this with other Save Image nodes other than Image Saver, but they do not capture the full prompt. Just FYI in case you decide to swap out Image Saver for something else.

v9d changes:

Layout changes primarily in the beginning of the workflow.

  • Made heavy use of rgthree's reroute nodes to clean up some of the spaghetti.

Re-added ComfyUI-ppm back to the workflow after removing it in v10.

  • I think my issues with ppm stemmed from bad Dynamic Thresholding settings.

Adjusted the Dynamic Thresholding settings to be more in line with the project page's wiki info.

If using Dynamic Thresholding (with how I have it set):

  • Set the CFG value to 2x the normal number

  • Set the minimum CFG to whatever lowest value you want, you'll need to experiment. I usually go like 1 to 2 lower than normal if the sampler allows for it.
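As a worked example of the rule of thumb above (my numbers, using CFG 6 as "normal" and splitting the difference on the 1-to-2 drop):

```python
# My reading of the rule of thumb: double the normal CFG, and set
# the minimum CFG a little below normal. Experiment with min_drop.
def dyn_threshold_settings(normal_cfg, min_drop=1.5):
    return {
        "cfg": normal_cfg * 2,             # "2x the normal number"
        "min_cfg": normal_cfg - min_drop,  # 1 to 2 lower than normal
    }

dyn_threshold_settings(6)  # -> {'cfg': 12, 'min_cfg': 4.5}
```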

Global Seed (Inspire) node controls the seed for all related nodes in the workflow now.

  • Note: if you load from queue history, this node will switch to fixed and stay that way until you change it to something else.

v10 is scrapped.

  • I never want to use Set/Get nodes ever again.

ControlNet and Detail Daemon were also removed.

Here are some screenshots showing before and after via Image Comparer nodes. Images shown were generated using the settings saved to the v9d workflow JSON. Not sure if it makes a difference, but I am using sage attention now.

"After" is on the left side of the image comparer nodes. (the one with the line down the middle).

Before and after Watermark portion of the workflow:

Before and after USDU1:

Before and after USDU2:

Before and after Color Correct:

Before and after comparison of the initial image and the final result:

v9c changes:

Fixed some of the context connections that were causing the wrong image to be sent to the end of the workflow when bypassing certain node groups.

  • This was mostly due to two context inputs on a context switch being swapped around.

  • Minor adjustments to where the context nodes are placed.

Some of the model names may look different on the workflow.

  • This is due to me having to reinstall comfy after breaking it by running update_comfyui_and_python_dependencies. Maybe I will have learned my lesson this time. (probably not)

v9b: guess I skipped this one.

v9a changes:

Minor adjustments to the left side of the workflow. It felt a little too cramped for me.

  • Brought back the token count node, just because I find it helpful on occasion.

Reconnected a couple nodes in the detailer section that probably no one uses anyway, but just for the sake of workflow integrity. Hopefully, I didn't miss any other nodes.

Added a toggle to swap the tile size dimensions on USDU below the 1st USDU group.

Added a Multiply Tile Size option under USDU for my own testing purposes.

  • This is probably similar to USDU's tile_padding setting, but I have tested it with reducing the tile_padding setting and setting the multiplier to 1.2 with some good results.

  • This is bypassed by default.

v9 tested on Better Days with Touching Grass, and this PornMaster lora.

This workflow should work with any SDXL, Pony, Illustrious, and NOOB (NAI) models.

Layout changes primarily on the left side of the workflow.

  • Most of these changes stemmed from someone mentioning lora clip weights and concat conditioning. I tried both. Didn't care for the clip weights thing and concat had some interesting results, but not consistent enough for me to want to use it in the workflow.

  • I had spent a good amount of time trying out some lora loader alternatives to ImpactWildcardEncode. Unfortunately, this caused Image Saver to no longer detect any loras being used, even when utilizing Widget to String to send the model names to Image Saver.

  • This led to trying alternatives to Image Saver, but as expected, they lacked in areas where Image Saver performs better for my needs.

  • Dropped FreeU_V2. While it can be really useful, it's not consistently good. I think there's enough extra settings in my workflow that I don't need this too.

  • Added all the Sampler Selectors from ComfyUI-ppm. At least one of them I haven't seen mentioned on their project page, but I encourage you to try them out if you like trying new samplers.

There is now a toggle for 4 sampler groups.

  • Standard Samplers

  • CFG++ Sampler Select (from ppm)

  • Dyn Samplers (from ppm)

  • PPM Samplers (from ppm)

  • Ideally, just have one toggled on. If you have multiple groups toggled on, it's going to use the sampler group that is first in numerical order.

Bookmarks 1 through 5 will take you to what I consider the important parts of the workflow.

Tweaked the tile size settings to be opposite of what the initial image resolution is. This has yielded good results for me and makes it so I no longer need to use Half Tile mode to be satisfied with the results.
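Put as a one-liner, just to illustrate what "opposite" means here (using a portrait 832x1216 start as my example):

```python
# For a portrait render, the USDU tile size is set landscape (and
# vice versa) by swapping the dimensions.
def opposite_tile_size(width, height):
    return height, width

opposite_tile_size(832, 1216)  # -> (1216, 832)
```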

Workflow file has been saved with the settings I used with Euler + Karras to generate one of the sample images.

v8e changes:

Dynamic Thresholding

  • Adjusted the connections to this node again based on (new to me) information found while looking for the best settings for this node.

  • If you are getting nightmare fuel while using this, try adjusting the threshold_percentile. Adjusting it up makes the node "clamp" down on the image more aggressively, while lowering it will make it less aggressive. I recommend using 0.9 to 0.99.

  • Adjust the minimum CFG to match whatever the lowest value is of the sampler/scheduler combo you use. I'll include a couple versions of the workflow using a standard sampler and a CFG++ sampler. If your images are cooked, try lowering this value or adjust the "CFG" value near the KSampler.

Watermark Detection & Removal flow

  • Added a SAM detector node to the flow. This has given me better results and just uses the same SAM Loader from the Detailer portion of the workflow.

  • Disclaimer: it doesn't always detect the watermarks, but it's better than nothing.

  • There is a method that involves far fewer nodes using CLIPSeg, but in my experience, it did not work well.

Dropped Lying Sigma Sampler from the Detail Daemon group.

  • Cool idea to have it, but it's too much for Illustrious IMO.

Moved all the bypass switches to the beginning of the workflow.

Switched the negative prompt node back to how it was previously instead of using ImpactWildcardEncode for it.

The settings on the workflows in the .zip file are what I used for the first two images in the samples for this version. (Since someone asked how they can make their images look like mine). You can toggle the options you don't want to use if you just want to go vanilla.

v8d changes:

Dynamic Thresholding

  • Adjusted the values so that they should work properly with "standard" samplers.

  • Changed a connection to the node. Now it should scale properly.

  • Images should be less saturated when using it now.

  • This has led to some nice improvements on image results (IMO).

Watermark Detection & Removal flow

  • Changed it so the groups can both be toggled off with one button

Disclaimer: depending on your local install of ComfyUI, this workflow may not function correctly for you due to an endless amount of factors where your ComfyUI install may differ from mine.

v8c changes:

As always - newer does not necessarily mean better.

Generation time using the default settings on the workflow: 167 seconds on a 5060ti 16GB

Generation time with extras enabled: 265 seconds

Added several Group Bypass nodes throughout the workflow:

  • These will allow you to toggle things on and off throughout the workflow before you start the generation process.

  • The main reason for this was that I initially was trying to figure out how to utilize the Context Switch from rgthree’s custom nodes.

  • The reason for that is I wanted two options at the detailer portion of the workflow and to reduce the amount of manual toggling just for this one part.

  • Option 1: use the 2 usual FaceDetailer nodes that I already had in the workflow.

  • Option 2: use a combined Detailer node that will do it all at once.

  • Everything on the smaller Group Bypass nodes throughout the workflow can be bypassed. So if you’re just wanting to use the initial KSampler, now you can easily.

Smoothed Energy Guidance (SEG) has been added for the initial image generation only:

  • Having it connected to the whole workflow slows it down too much.

  • What does it do?

  • It “influences how the model generates images, resulting in enhanced image detail, realism, and composition.”

  • It’s supposed to be more stable than Perturbed Attention Guidance, but who knows?

FreeU V2:

  • Settings have been adjusted to fit results I liked, but can easily be changed or bypassed.

Dynamic Thresholding has been added back in and is bypassed by default.

FreeU V2 and Dynamic Thresholding - Post (after) KSampler:

  • There will be a 2nd set of these two nodes specifically for everything after the 1st KSampler.

  • This was done to exclude SEG from the rest of the workflow.

Ultimate SD Upscale:

  • There is a node next to each USDU that is for CFG called Add/Subtract “b” from CFG. These particular nodes are set to add or subtract 2 from the initial CFG setting. This is supposed to affect upscaling via USDU in a positive way.

  • If adding more CFG, then less steps should be used and vice versa. To toggle Add or Subtract, just click on the “op” field on the node and select either Add or Subtract.

  • Added an “upscale with model” group between the two USDU ControlNet groups.

  • This group will upscale the output from USDU 1 and then downscale it back to the original starting image size. Then it will be upscaled again in USDU 2 and will typically come out nicer than just feeding the image directly from USDU 1.

  • I have had pretty good success with this and it does not add too much time, unless your upscale model is one of those that take forever to load.
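The Add/Subtract "b" from CFG behavior described above boils down to this bit of arithmetic (an illustrative sketch, not the node's actual code):

```python
# Offset the base CFG by a fixed amount "b" (2 by default) for the
# USDU passes; the "op" field picks the direction.
def usdu_cfg(base_cfg, b=2, op="Add"):
    return base_cfg + b if op == "Add" else base_cfg - b

usdu_cfg(6, op="Add")       # -> 8
usdu_cfg(6, op="Subtract")  # -> 4
```

Remember the note above: if adding more CFG, use fewer steps, and vice versa.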

Color Match and Color Correct can now be bypassed individually without breaking the flow.

Mahiro has been removed.

Hoping this will be the last workflow update for a while. *fingers crossed*

Image Comparison Examples with different settings and "features" enabled.

Settings listed are prior to upscaling.

Positive Prompt:

masterpiece, best quality, absurdres, detailed skin, mole, freckles, wrinkle, detailed background, chiaroscuro, 1girl, side ponytail, white hair, leaning forward in chair, cowboy shot, diagnostic equipment, dark laboratory, wires, glowing pupils, cybernetic eye, black sclera, glaring, glowing eye, glowing nails, glowing fingerpads, long hair, medium breasts, cybernetic arms, sitting, chair, cowboy shot, titanium, cyborg, wire, from side, exposed wires, reaching towards viewer, glowing parts, electricity, big breasts, perky breasts, love handles, belly, big butt, skindentation, head-mounted display on head, smoke, haze,

Negative Prompt:

bad quality, worst quality, polydactyly, shiny skin, western comics \(style\), mannequin, nude, 

Sampler: euler_ancestral

Scheduler: beta

Steps: 35

CFG: 6

Seed: 12345

Sampler: euler_ancestral

Scheduler: beta

Steps: 35

CFG: 10

Seed: 12345

With FreeU_V2, SEG, Dynamic Thresholding, and ControlNet (Tile and Canny Edge) enabled.

Sampler: euler_ancestral

Scheduler: beta

Steps: 35

CFG: 10

Seed: 12345

With FreeU_V2, SEG, Dynamic Thresholding, and ControlNet (Tile and Canny Edge) enabled and Half Tile seams fix mode enabled on USDU.

v8b changes:

Added:

FreeU v2:

  • This node has been around for a while as part of Comfy Core. I still don’t understand how the settings work, but apparently the default settings are meant for SDXL.

  • It does definitely affect the image output. If you are interested in what the settings do, then do a simple Google search or ask your preferred AI.

  • I use it with lower CFG settings than normal since enabling it seems to cause cooked images (for me).

  • It seems to push the image toward an anime output, but that could just be me.

  • This node is disabled by default.

ControlNet:

  • Added some basic ControlNet functions to the KSampler and both USDU nodes.

  • The nodes involved will require comfyui_controlnet_aux and Comfyroll Studio.

  • The KSampler ControlNet group will have a Load Image node, 3 AIO Aux Preprocessor nodes, 1 CR Multi-ControlNet Stack node, and 1 CR Apply Multi-ControlNet node.

  • The KSampler ControlNet group is bypassed by default.

  • There used to be an issue with not having an image in the Load Image node that would stop a workflow from working. AFAIK this has been fixed. If not, the fix is to put any random image there.

  • Each USDU ControlNet group is the same as the KSampler ControlNet group, but without the Load Image node.

  • If you don’t want to use ControlNet, you can just delete these groups from the workflow or bypass them.

  • I am using it for the purpose of using the TTPlanet function built into the AIO Preprocessor in conjunction with ControlNet Union (this can be found in ComfyUI-Manager under Model Manager. Just search for “union” when filtering for ControlNet models. Either non-flux version should work.)

Generation speed is still around 3 minutes for me from start to finish, even with ControlNet enabled.

Removed:

Guidance Limiter:

Didn’t feel it was worth keeping. If you liked it, you can just re-add it easily. It’s part of ComfyUI-ppm.

v8 changes:

Note: newer version does not mean it’s better, it’s just what I am using/experimenting with currently.

Adjustments were made to work with the experimental Distance sampler.

TL;DR for this sampler:

“A custom experimental sampler based on relative distances. The first few steps are slower and then the sampler accelerates (the end is made with Heun). The idea is to get a more precise start since this is when most of the work is being done.”

  • Uses a low amount of steps (4 to 10) and is recommended by the author to use 7 steps with AYS or Beta schedulers. (You can always try other schedulers too. YMMV.)

  • A complete explanation of this sampler can be found on the project page.

  • Note: this particular sampler does not seem to work with v-pred models (at least not on Lobotomized Mix).

  • Installing the Distance sampler also adds a couple cfg++ samplers that I have not tested.

Image generation from start to finish on a 5060ti 16GB takes roughly 3 minutes on the settings I used for the sample images.

Settings will need to be adjusted to fit your preferences (as always).

Using a different sampler/scheduler combo and switching USDU seams_fix_mode to “None” can speed up the process greatly.

Added:

Mahiro - “to make CFG less dumb”. As quoted here.

Guidance Limiter from the ComfyUI-ppm custom nodes which is an implementation of this.

  • As far as settings for this go, I am just leaving them at the defaults. The project page does not appear to have any related instructions for the two settings.

Boolean switches above the KSampler and USDU nodes for toggling Detail Daemon on and off.

  • These are toggled to “true” by default.

Removed:

2nd KSampler

  • This was not beneficial enough for me to keep in the workflow.

  • It seemed to make the image worse in most cases.

Perturbed Attention Guidance has been removed.

  • It was not beneficial enough for me to keep it.

  • ~30% slower generation time for a possibly better result.

CFG++Sampler Select is toggled off by default. You can toggle it on by using the Boolean Switch directly above the node. Toggling it off will switch over to using the Sampler Selector (Image Saver) node.

  • Please ensure you adjust any other parameters as needed, such as CFG, Steps, etc.

v7c changes:

Added Sampler/Scheduler Settings (JPS) node to handle sampler/scheduler selection in one place.

  • Why? I am lazy and don't want to have to change this in multiple places.

  • If this ends up causing you issues, then v7b2 is probably the better option for you.

Added Fast Groups Bypasser (rgthree) to the left side of the workflow.

  • Bypass stuff at your own discretion.

Changed 2nd KSampler to allow Hi-Res fix-like function (again).

  • You can upscale on this KSampler if you like by changing the "Upscale by" setting to anything above 1. Just keep in mind that the USDU nodes are going to upscale on top of that if you do not bypass them.
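To be clear about how the factors stack if you don't bypass USDU afterward (hypothetical numbers, just a quick sketch):

```python
# Upscale factors multiply: an "Upscale by" of 1.5 on the 2nd
# KSampler followed by a 2x USDU pass gives 3x overall.
def total_scale(ksampler_upscale, usdu_factors=(2.0, 1.0)):
    total = ksampler_upscale
    for f in usdu_factors:
        total *= f
    return total

total_scale(1.5)  # 1.5x here, then 2x in USDU -> 3.0x overall
```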

The Steps for everything from the 1st USDU node onward are set on a single node.

  • You can remove the connecting noodles from the inputs of each "Step" node if you want.

Removed one ImpactWildcardEncoder node.

  • Seemed pointless for negative prompt. I doubt people are loading loras and wildcards in their negative prompt, but I could be wrong.

v7b2 changes:

Removed the Scheduler Selector (Comfy) (Image Saver) node

  • At least one person has been having repeated issues with this node.

  • Removing this makes it so you have to pick the scheduler individually throughout the workflow, but otherwise not much of a user side change.

ComfyUI-ppm

  • If you are having issues with ComfyUI-ppm, try deleting it from your comfyui custom nodes folder and reinstall it again. The owner of the ComfyUI-ppm project just did a fix for an ImportError issue at around 8:20PM US Eastern time.

If the issues with ComfyUI-ppm remain after (and you still want to use this workflow), you can remove the following nodes from the workflow, uninstall PPM, and then restart your ComfyUI:

Above the green ImpactWildcardEncoder node:

  • ClipTokenCounter

  • Token Count

To the left of the dark blue SamplerCustomAdvance:

  • Use CFG++SamplerSelect? Boolean switch

  • CFG++SamplerSelect

Below the cyan Sampler Selector (Image Saver) node:

  • Widget to String node directly below the Sampler Selector (Image Saver) node

  • Switch Sampler

  • Switch Sampler name

I will also include an extra JSON in the zip file in case you are not comfortable deleting nodes from the workflow. However, you are on your own to uninstall ComfyUI-ppm from your custom nodes folder.

v7b changes:

Workflow assembled from scratch.

  • No copy/paste or holding alt and clicking to copy nodes.

Perturbed Attention Guidance is bypassed by default. You can enable it by clicking on it and either pressing Ctrl+B on your keyboard or clicking the bypass icon.

  • Bypassing it saves generation time.

  • Turning it on adds about 10 to 15 seconds per 832 x 1216 image on a single KSampler on an RTX 3060 12GB.

  • In terms of the entire workflow, you would be adding an estimated 5 to 8 seconds each for the 2nd KSampler, every tile in USDU (tiles are set to 10 steps by default), and every detected face/eyes/etc. on the Detailer nodes.

  • PAG can help make the image look better, but is the extra time worth it to you for a possibility?

v7a bandaid changes:

Added a modified version of 7a without the Image Saver node for those who upgraded their ComfyUI to v0.3.29:

  • v7a_bandaid is a placeholder until there is a working solution from the custom node creators. The comfyui folks have more or less stated this is an intentional change.

  • Metadata sources/info will have to be added manually (if you care).

v7a changes:

  • re-added ComfyLiterals

  • I have been running into some issues with some values being changed. This is happening on most of the number fields that have arrows to adjust the values up and down.

  • Example of where this caused problems: Setting "upscale_by" to 2 on the USDU node would change it to 2.0000000001. This would cause the node to round up and require additional tiles to be used in the upscale process.

  • Another example: setting the detection threshold for watermark detection to 1 would end up being set to 1.0000000001. This would lead to an error in the workflow since the maximum value is 1.

  • ComfyLiterals provides a means of entering number values without the issues mentioned above.
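To illustrate why that trailing epsilon matters, here is a rough, hypothetical tile-count estimate. The real USDU tiling logic also accounts for padding and seam-fix passes, so treat this as a sketch of the rounding problem only:

```python
import math

def usdu_tile_count(width, height, upscale_by, tile_size=512):
    """Hypothetical helper: rough tile-count estimate for a tiled upscale.

    Output dimensions get rounded UP, and so does the tile grid, so a
    tiny epsilon on the upscale factor can add a whole row and column
    of tiles.
    """
    out_w = math.ceil(width * upscale_by)
    out_h = math.ceil(height * upscale_by)
    tiles_x = math.ceil(out_w / tile_size)
    tiles_y = math.ceil(out_h / tile_size)
    return tiles_x * tiles_y

# A clean 2x upscale of 1024x1024 needs a 4x4 grid of 512px tiles:
print(usdu_tile_count(1024, 1024, 2.0))           # 16
# The widget's epsilon pushes the output to 2049x2049, adding a row + column:
print(usdu_tile_count(1024, 1024, 2.0000000001))  # 25
```

Nine extra tiles means nine extra sampling passes for no visible gain, which is why pinning the value with ComfyLiterals pays off.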

v7 changes:

  • Recreated the workflow from scratch and made layout changes.

  • Image Comparer nodes are placed throughout the workflow instead of at the end.

  • Dropped the Perturbed Attention Guidance node used in the previous workflow and have switched it to the simple version that is included in Comfy Core.

  • There should no longer be any hidden nodes other than the bookmark nodes.

  • Dropped ComfyLiterals. These nodes seemed to have caused issues for at least one person.

  • Dropped Dynamic Thresholding.

  • Sampler, Scheduler, and CFG settings are all connected to the initial image settings now.

  • Removed the upscale setting from the 2nd KSampler.

  • Removed the tile size switch from USDU. Half-tile is enabled by default. Set this to None if you want to speed up the upscale process in USDU.

  • Changed KSamplers from SamplerCustom to SamplerCustomAdvanced. This allows ALL of the samplers on CFG++SamplerSelect to be used now (at least for me).

Workflow default settings use Euler A sampler settings with everything enabled.

If any groups are marked DNB on the workflow, they cannot be bypassed without you making adjustments to the workflow yourself.

Other Info:

Thanks to @killedmyself for introducing me to the Color Correct node from comfyui-art-venture. This has really been useful in countering the color fade from Ultimate SD Upscale.

I only use the Contrast option for that node, but feel free to adjust to your liking.

Disclaimer: Please be aware that sometimes things break when updates are made by comfy or by the custom node creators.

"Load Lora" node is not needed. To use a lora, please use the "Select to add Lora" option on the "Positive Prompt" node. You can specify the weights just like in A1111 or similar interfaces.

Note: The fix for the apply_gaussian_blur error (courtesy of @Catastrophy ): "The problem currently lies within the GitHub project "TTPlanetPig / Comfyui_TTP_Toolset". In one commit the function called "apply_gaussian_blur" was removed, although it is still used in the project. The workaround is described in Issue #15. It mentions restoring a function. To do this you have to manually edit one file in the ComfyUI folder, save it, and restart ComfyUI."

Note: if your prompts seem like they are being completely ignored, please check whether the "Mode" on the prompt nodes is set to Populate and not Fixed or Reproduce.

If you are running into an issue where your number values are being changed from something like "0.25" to "0.25000000001", try toggling on "Disable default float widget rounding" in the ComfyUI settings under Settings > Lite Graph > Node Widget. Thanks to @DraconicDragon for the info!

v5d changes:

  • USDU is not connected to Detail Daemon

  • Nodes that were hidden behind other nodes are no longer hidden (probably).

  • Sample images were done using the new (to me) sampler: er_sde

v5c changes:

Dropped the Color Match node before the USDU nodes.

  • Nice feature, but not being able to bypass it was pretty annoying for me.

  • Using the Color Correct node at the end of the workflow works well enough, and it can be bypassed.

Dropped ControlNet and IMG2IMG.

  • I do not use these enough to justify making everyone deal with the hassle of putting an image in the Load Image nodes. If you liked those functions, you can easily add those into this workflow or continue using v5b and older versions.

Added Dynamic Thresholding back in.

  • If you are not familiar with how to use this node, you can just leave it disabled or read up on it here.

  • TL;DR this allows you to use higher CFG values while it mimics whatever value you put in on the node. (e.g. CFG 10 on the normal CFG setting with CFG 6 on the Dynamic Thresholding node.)

v6 changes:

Stripped down the workflow a bit and changed the upscaling process. I wanted to remove the functions that I hardly (or never) use. I do not plan on adding any extra functions to this version.

Dropped IMG2IMG, ControlNet, and Ultimate SD Upscale from the workflow.

  • If you like those functions, please continue to use the previous workflows or modify this one to include them.

  • This version requires fewer custom nodes than before.

Replaced USDU with Iterative Upscaling (from Impact Pack).

The benefits: upscaling is more stable.

The drawbacks:

  • Not faster and can be slower depending on settings.

  • Less details (in my opinion).

  • Does not work with CFG++SamplerSelect or Detail Daemon. Those two nodes will only affect the initial KSampler.

Added Dynamic Thresholding back in.

  • If you are not familiar with how to use this node, you can just leave it disabled or read up on it here.

On my 3060:

  • Using Euler A the full workflow takes 170 seconds from start to finish.

  • Using Euler A CFG++ takes 162 seconds from start to finish (due to fewer initial steps being needed).

This will be a trial run of this workflow. Not 100% committed to this one yet.

v5b changes:

Edit: Updated the demo_settings version with the correct upscale settings on the 2nd KSampler. (0439 US Eastern Time 24 Feb 2025). It was set to 2 instead of 1 for the Upscale Factor.

Dropped ComfyUI-Adaptive-Guidance

  • Did not seem beneficial enough to keep in the workflow

  • To make full use of it, I would have to create a toggle for the normal node and the negative node version at a minimum.

  • I got better results when just using a standard guider node in many cases.

Added a switch from ComfyUI_Comfyroll_CustomNodes that allows the IMG2IMG group to be bypassed.

  • This node just changes the latent source going into the first KSampler.

  • You will still have to have an image placed in the Load Image node AFAIK, but you can try not having one there and see if it works.

Added a switch to allow for either latent upscaling or upscaling image with model below the 1st KSampler.

  • This affects what latent source feeds into the 2nd KSampler.

  • The 2nd KSampler by default is set to 1x Upscale, but you can adjust it to a higher number. I use it as a 2nd pass KSampler.

v5a changes:

Added ComfyUI-Adaptive-Guidance

  • This will only affect the 2 KSamplers at the start of the workflow.

  • Cannot be bypassed. You can remove this from the workflow if it is not for you. Just make sure to add a Guider node of some sort and connect it to the KSamplers or the workflow will be broken.

  • I discovered this node while looking up settings for "specialized" samplers.

Added ControlNet

  • This is connected to the 1st KSampler and can be bypassed.

  • You might have to put an image in the Load Image node even if you are not using the ControlNet Group.

  • Utilizes an All-in-One processor node from comfyui_controlnet_aux

  • The AIO processor node will download any missing processor files based on what you select on the node to use (at least for me it did).

  • You need to download a ControlNet model to use in the LoadControlNet Model node. I am using ControlNet-Union (promax version) which can be downloaded from here.

Re-added MaHiRo (ComfyUI v0.3.8+)

  • I used it in the demo images for v5a

  • If you are on an older version of comfy, you can just delete it from the workflow

  • It can be bypassed

v5 changes:

Dropped ComfyUI-APG_ImYourCFGNow from the workflow.

  • Ran into an issue when using FaceDetailer with certain settings that would break the workflow. I performed testing using only non-custom nodes with FaceDetailer and added custom nodes back one by one; the APG node was found to be the cause.

Dropped sd-dynamic-thresholding.

  • While testing the v5 workflow with/without this, I found the results to be better without it.

Dropped MaHiRo

  • It seemed to do the opposite of its function for me.

Added sd-perturbed-attention

  • If you want to know what it does, the paper on it can be found here.

  • My experience so far with PAG is that your CFG + PAG scale should add up to what you would normally use for your CFG setting by itself.

    Example: if you are using a sampler from CFG++SamplerSelect, then your total should equal 2. CFG 0.5 + scale 1.5 = 2.

    So if you are using a normal sampler that would be something like CFG 6, then you could do something like CFG 3 + scale 3.

    This is just from a non-technical standpoint and personal testing, so I could still be wrong.
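The rule of thumb above can be expressed as a tiny helper. To be clear, this is just my personal heuristic (and the function name is mine), not an official PAG formula:

```python
def split_cfg_for_pag(total_cfg, pag_fraction=0.5):
    """Split a 'normal' CFG value into (CFG, PAG scale).

    Heuristic only: CFG + PAG scale should roughly equal the CFG
    you would use without PAG. pag_fraction controls how much of
    the total goes to the PAG scale.
    """
    pag_scale = total_cfg * pag_fraction
    cfg = total_cfg - pag_scale
    return cfg, pag_scale

# Normal sampler at CFG 6 -> e.g. CFG 3 + PAG scale 3:
print(split_cfg_for_pag(6))        # (3.0, 3.0)
# CFG++ sampler with a total of 2 -> e.g. CFG 0.5 + scale 1.5:
print(split_cfg_for_pag(2, 0.75))  # (0.5, 1.5)
```

Start around a 50/50 split and nudge the fraction to taste; your model and sampler will change what looks best.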

Most of the groups in the workflow can be bypassed again since this workflow is not using the SET/GET nodes.

  • The tradeoff is oodles of noodles.

Less Noodles (a) version changes:

Re-added ComfyUI-ppm back in.

  • This adds in adjusted CFG++ samplers and some additional schedulers as well.

  • Thanks to @Catastrophy, the samplers for this will now save automatically to the metadata in Image Saver.

  • A toggle has been added for this in case people want to use the normal samplers.

Added APG I'm Your CFG Now

  • I wanted to give this a whirl since it was mentioned in Lobotomized Mix's description.

Added MaHiRo to the workflow. This is a test/beta node pre-installed on ComfyUI as of ComfyUI v0.3.8

I actually broke my comfy install while working on this version thanks to updating comfy. As a result, I had to do a clean install.

ComfyUI Manager did not install all of the Missing Custom Nodes in one session.

  • I had to run the Install Missing Custom Nodes function in two separate sessions of ComfyUI. (as in I ran the Install Missing Custom Nodes function, restarted comfy, ran it again, and restarted comfy again.)

My experience with the reinstall was that USDU did not want to import properly.

  • I had to clone it into a folder outside of my comfy install and then take that new USDU folder and paste it into my comfyui/custom nodes folder.

v4g Less Noodles Test changes:

Trying out the Set/Get Nodes from ComfyUI-KJNodes at the suggestion of @Catastrophy

  • As the version name suggests, these help clear up the workflow (visually).

  • Disclaimer: this is a test version and should work just like v4g, except that bypassing Dynamic Thresholding and Detail Daemon breaks the flow.

  • If you don't like using Dynamic Thresholding and/or Detail Daemon, I would suggest sticking to v4g or you can adjust the workflow to your taste.

  • Removed the Alternative Watermark Removal portion from the workflow.

v4g changes:

Generation time from start to finish on a 3060 RTX:

  • With the default settings and most of the extra stuff turned off: 165 seconds (including checkpoint loading time).

  • With everything enabled: 200 seconds (including checkpoint loading time).

Added Image Saver nodes back in and dropped ComfyUI-ImageMetadataExtension

  • This was primarily due to compatibility issues with the SamplerCustom node.

Added ImpactWildcardEncode nodes back in and dropped the split positive prompt nodes and Efficiency Nodes.

  • This is for compatibility with the Image Saver nodes and to keep the lora/embedding info present in the resources shown when posting to civitai.

The first KSampler has been swapped to SamplerCustom

  • This was a choice based on preference and wanting to be able to use ComfyUI-Detail-Daemon from the beginning of the process.

Added a 2nd USDU (ComfyUI_UltimateSDUpscale) node for a 2nd pass.

  • IMO, USDU seems to strip out some details on the 1st pass.

  • The 2nd pass seems to help add details back in.

v4f changes:

Generation time from start to finish on a 3060 RTX:

  • Everything but wildcards enabled took 133 seconds.

  • Bypassing all the extras and doing 2x upscale in the 2nd KSampler took 73 seconds.

  • Testing used euler_ancestral_cfg_pp (sampler) + karras (scheduler) on lobotomizedMix_v10 (v-pred model).

Adjusted the prompting order. It seems to give better results. YMMV.

v4e changes:

Removed the 1st USDU node and replaced it with a KSampler (Sampler Custom)

  • This node upscales via the same upscaler model as USDU

  • Added a node to pick the upscale factor without you having to do the math. (Example: if your initial image is 1024x1024 and you set the Scale Factor to "2" this KSampler will upscale it to 2048x2048).

  • Node for selecting denoise for this node has been added as well to keep it separate from the USDU settings for denoise.
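The Scale Factor math from the example above is just a multiply, but spelling it out makes it easy to sanity-check target resolutions (the function name here is mine, not a node in the workflow):

```python
def scale_to_resolution(width, height, scale_factor):
    """Compute the output resolution for a given Scale Factor,
    applied to the initial image dimensions."""
    return round(width * scale_factor), round(height * scale_factor)

# The example from the text: 1024x1024 at Scale Factor 2:
print(scale_to_resolution(1024, 1024, 2))   # (2048, 2048)
# A typical Illustrious portrait resolution at 1.5x:
print(scale_to_resolution(832, 1216, 1.5))  # (1248, 1824)
```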

Watermark Removal

  • Split up the nodes to be just like how they are grouped in the workflow it originally came from.

  • Added the alternative version back in for those who want to use it. (Personally, I will stick to what I know works.)

Added Fast Groups Bypasser from rgthree

  • This allows you to toggle groups on/off in one place and also provides a way to go to any group by clicking on the arrow button.

Added a Detailer Group after the 1st Upscale.

  • This can be bypassed if you don't want to use it.

Dynamic Thresholding and Detail Daemon are set to bypass by default.

  • If you like using these (I do), then just re-enable them and adjust your parameters accordingly.

v4d changes:

Return of the old watermark flow.

  • was-node-suite-comfyui is required.

  • I use this watermark detector model, which can be found here.

  • Another detection model that is more aggressive can be found here.

Added a Seed Generator node to use the same seed across the workflow.

  • The only exception is the wildcard node. If you want to fix the seed on that node, you will have to do it manually. Having it connected to the seed generator node caused the same image to be recreated even when not set to "fixed". YMMV, but that was my experience with it.

The ModelSamplingDiscrete node has been added back in for folks using v-pred models.

  • You may or may not need it. It will be set to bypass by default.

Bookmarks have been reduced to 6.

  • They are set in a way that fits a 2560x1440 monitor, so if this does not work for you, you can delete them or ignore them.

v4c changes:

Added notes to pretty much everything on the workflow.

Trimmed down the Watermark Removal portion of the workflow thanks to a random person on the civitai discord providing a better one. No need for a detection model anymore. Yay!

  • This didn't work out. It would work sometimes, but other times it would destroy the picture. Re-added the old watermark removal in v4d.

Changed upscaling to use two USDU nodes. First to 1.5x, the second to roughly 2x.

  • Allegedly, this results in more detail (and I love details).

  • You can use a 2nd KSampler instead of the first USDU node, but that's up to you.

  • More re-arranging.

  • If you don't like spaghetti, install ComfyUI-Custom-Scripts. Go into your comfy settings, find "pysssss" in the menu, and click it. Find LinkRenderMode, click the dropdown in that section, and pick "Straight". OR you can find a solution to hide the noodles entirely; I know that setting exists somewhere.

  • Added more bookmarks: up to 7 now.

The default settings in v4c upscaling is set to 1.5 and 2x (of the original image) in USDU. This has given me better results as far as quality goes, but can easily be toggled off if it's not for you.
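Since both targets (1.5x and 2x) are expressed relative to the original image, the second USDU node's own upscale factor is just the ratio of the two, not 2. A quick sketch, assuming factors relative to the node's input (the function name is mine):

```python
def second_pass_factor(first_pass, final_target):
    """Given two upscale targets expressed relative to the ORIGINAL
    image, return the factor the second pass itself must apply to
    the already-upscaled input to hit the final target."""
    return final_target / first_pass

# 1.5x then ~2x of the original: the second node only applies ~1.33x
print(round(second_pass_factor(1.5, 2.0), 3))  # 1.333
```

Setting 2 on the second node instead would give 3x of the original, which is probably not what you want.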

v4b changes:

Added some QoL nodes.

Bookmarks added numbered 1 through 4 in places I thought were useful.

  • Just press 1, 2, 3, or 4 while not in a place where you input text/numbers to try them out.

Added a new (to me) Save Image node that does show the models/loras used when uploading to civitai.

v4a changes:

Added option to use Wildcards.

If you don't want to use wildcards, just click on the ImpactWildcardProcessor node and press Ctrl+B to bypass it, OR make sure the upper text box of the node is empty. The better option is Ctrl+B (or deleting the node).

Other than that, some QoL changes and rearranging of the nodes.

v4 changes:

I am no longer using the Image Saver nodes as of v4. I tried to streamline the workflow and keep the features that I found the most useful. This workflow took inspiration from v3 and from the workflow that the author of NTR Mix had on some of their example images.

The current settings are to my preferences. You will need to adjust if you plan on using different samplers, etc.

The upscaling is set to 2x with half tile enabled in USDU. This has given me better results as far as quality goes, but can easily be toggled off if it's not for you.

Dropped ControlNet completely, it's not for me. (v3 and earlier has it)

With the current settings, I generate an image from start to finish in about 90 seconds on a 3060 RTX.

v4 uses Efficient Loader for the checkpoint/model and VAE. For loras it uses Lora Stacker. Both of those come from efficiency-nodes-comfyui.

Actually Simple:

Added a no-frills workflow for those who really just want to keep it simple but want a little (very little) more than the default workflow. Check the "about" section off to the right for links to the two custom node packages required.

Older versions are no longer available since they were purged when I removed a lot of my old NSFW images.