I recommend the "Actually Simple" version if you don't want to play with extra settings and want to "keep it simple". Sometimes simple is best!
This is the best option if you don't feel comfortable installing a ton of extra stuff and/or have not updated your comfyui and any associated python dependencies in a long time.
v7b and v7b2 workflows were built and tested on:
ComfyUI 0.3.29
ComfyUI frontend v1.16.8
Python version: 3.12.8 (tags/v3.12.8:2dc476b, Dec 3 2024, 19:30:04) [MSC v.1942 64 bit (AMD64)]
Pytorch version: 2.6.0+cu126
Disclaimer: If you are using a different version of any of those listed above, then this workflow may not work for you. I can't account for every difference, since we are all potentially using different versions of something.
v7b2 changes:
Removed the Scheduler Selector (Comfy) (Image Saver) node
At least one person has been having repeated issues with this node.
Removing this means you have to pick the scheduler individually throughout the workflow, but otherwise there is not much of a user-side change.
ComfyUI-ppm
If you are having issues with ComfyUI-ppm, try deleting it from your ComfyUI custom nodes folder and reinstalling it. The owner of the ComfyUI-ppm project pushed a fix for an ImportError issue at around 8:20 PM US Eastern time.
If the issues with ComfyUI-ppm remain afterward (and you still want to use this workflow), you can remove the following nodes from the workflow, uninstall PPM, and then restart ComfyUI:
Above the green ImpactWildcardEncoder node:
ClipTokenCounter
Token Count
To the left of the dark blue SamplerCustomAdvanced:
Use CFG++SamplerSelect? Boolean switch
CFG++SamplerSelect
Below the cyan Sampler Selector (Image Saver) node:
Widget to String node directly below the Sampler Selector (Image Saver) node
Switch Sampler
Switch Sampler name
I will also include an extra JSON in the zip file in case you are not comfortable deleting nodes from the workflow. However, you are on your own to uninstall ComfyUI-ppm from your custom nodes folder.
v7b changes:
Workflow assembled from scratch.
No copy/paste or holding alt and clicking to copy nodes.
Perturbed Attention Guidance is bypassed by default. You can enable it by clicking on it and either pressing Ctrl-B on your keyboard or clicking the bypass icon.
Bypassing it does speed up generation.
Turning it on adds about 10 to 15 seconds to an 832 x 1216 image on a single KSampler on an RTX 3060 12GB.
In terms of the entire workflow, you would be adding an estimated 5 to 8 seconds for the 2nd KSampler, for each tile in USDU (they are set to 10 steps by default), and for each detected face/eyes/etc. on the Detailer nodes.
PAG can help make the image look better, but is the extra time worth it to you for a possible improvement?
CFG++Sampler Select is toggled on by default. You can toggle it off by using the Boolean Switch directly above the node. Toggling it off will switch over to using the Sampler Selector (Image Saver) node.
Please ensure you adjust any other parameters as needed, such as CFG, Steps, etc.
There are a couple of nodes above the Positive Prompt for token count. The node that shows the count is somewhat buggy: it shows two lines of token counts when running more than one generation. If you don't want to deal with this, you can just delete the two nodes; it will not affect the workflow.
I dropped all the little notes from the workflow. After rebuilding it from scratch, the notes are the only thing from the old workflow that I did not carry over.
If you want more information on the various settings/parameters for each of the custom nodes, you can visit their project pages.
If you disconnect any of the noodles or bypass something you should not have, then something will probably break and result in errors.
Feel free to remove and add to the workflow to fit what you want. If my workflow helped you in any way, then great.
Settings will need to be adjusted to fit your preferences unless you are trying to generate images like mine. The default settings are not meant for speed.
Custom Nodes used in this workflow as of v7b2:
ComfyUI-Impact-Subpack (the Ultralytics Provider node was moved here)
If I missed anything or listed something not on the workflow any longer, then my bad.
You should get a warning of what is missing when you load up the workflow. Impact-Pack seems to have to be manually git pull-ed for updates. (At least for me).
The workflow should work with Pony and SDXL models as well, but I have not tested on those personally.
This has been tested on the models mentioned in my "Suggested Resources" below. YMMV. Try playing with the settings/prompts to find your happy place. Current settings are to my tastes. Adjust to your tastes/preferences accordingly!
Asking for help:
I'm just someone who uses ComfyUI and I am not a developer. If you have technical questions, I probably can't help you. I make heavy use of Google when I don't know why something breaks or want to know what something does. However, I will try to help you within the best of my abilities for any non-technical questions.
In the cases where you need/want help: please do not be vague
Provide links to screenshots (if possible).
Don't be a jerk. (I am not obligated to help you. This workflow is intended for my personal use, and I am sharing it freely).
V7a bandaid changes:
Added a modified version of 7a without the Image Saver node for those who upgraded their ComfyUI to v0.3.29:
v7a_bandaid is a placeholder until there is a working solution from the custom node creators. The comfyui folks have more or less stated this is an intentional change.
Metadata sources/info will have to be added manually (if you care).
v7a changes:
re-added ComfyLiterals
I have been running into issues with some values being changed. This happens on most of the number fields that have arrows to adjust the values up and down.
Example of where this caused problems: Setting "upscale_by" to 2 on the USDU node would change it to 2.0000000001. This would cause the node to round up and require additional tiles to be used in the upscale process.
Another example: setting the detection threshold for watermark detection to 1 would end up being set to 1.0000000001. This would lead to an error in the workflow since the maximum value is 1.
ComfyLiterals provides a way to enter number values without the issues mentioned above.
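The drift described above is ordinary IEEE-754 floating-point behavior rather than anything specific to ComfyUI. A minimal Python sketch (none of this is ComfyUI code) shows why repeated widget-style increments accumulate error, and why rounding to a fixed precision restores the intended value:

```python
# Most decimal fractions (0.1, 0.25, ...) have no exact binary float
# representation, so each increment carries a tiny representation error
# that accumulates over repeated adjustments.
total = 0.0
for _ in range(10):
    total += 0.1  # ten steps of 0.1 does NOT land exactly on 1.0

print(total)         # slightly off, e.g. 0.9999999999999999
print(total == 1.0)  # False

# Rounding to a fixed number of decimals recovers the intended value,
# which is the same idea as rounding a widget's displayed value.
print(round(total, 4) == 1.0)  # True
```

This is also why a field can show 2.0000000001 after you typed 2: the stored binary value is inexact, and what you see depends on how much rounding the UI applies on top.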
v7 changes:
Recreated the workflow from scratch and made layout changes.
Image Comparer nodes are placed throughout the workflow instead of at the end.
Dropped the Perturbed Attention Guidance node used in the previous workflow and have switched it to the simple version that is included in Comfy Core.
There should no longer be any hidden nodes other than the bookmark nodes.
Dropped ComfyLiterals. These nodes seemed to have caused issues for at least one person.
Dropped Dynamic Thresholding.
Sampler, Scheduler, and CFG settings are all connected to the initial image settings now.
Removed the upscale setting from the 2nd KSampler.
Removed the tile size switch from USDU. Half-tile is enabled by default. Set this to None if you want to speed up the upscale process in USDU.
Changed KSamplers from SamplerCustom to SamplerCustomAdvanced. This allows ALL of the samplers on CFG++SamplerSelect to be used now. (at least for me).
Workflow default settings use Euler A sampler settings with everything enabled.
If any groups are marked DNB on the workflow, they cannot be bypassed without you making adjustments to the workflow yourself.
Why do I use the Color Correct?
Upscaling with KSampler/Ultimate SD Upscale strips/alters the color from the original image (at least for me).
Watermark Removal
Why do I have this in the workflow?
While rare, they do still happen and I don't like having to give up on a good image because of a watermark ruining it for me.
Altering any of the settings in the Watermark portion of the workflow will probably break the watermark removal. The only settings that should be changed there are:
Detection Threshold (higher = less detection, lower = more aggressive detection)
Watermark Detection Model (use whichever one you prefer)
Steps, scheduler, denoise on the Watermark Remover node can be adjusted.
I do not recommend messing with anything else in the watermark portion of the workflow. I didn't come up with it and cannot advise you on what all the buttons, numbers, and settings do.
Upscale Model:
You should be able to use whatever upscale model you like best. I primarily use NMKD YandereNeo (4x) now.
FaceDetailer Models:
If I recall, Impact Pack includes the needed models to get you started, but if you want something else, you can find more by using ComfyUI Manager's "Model Manager" option. The two types of models needed will be "Ultralytics" and "sam".
The model I use is no longer on civitai. Looks like the person got banned (last I checked).
VAE Model:
I usually use the normal SDXL VAE or whatever is baked into the checkpoint models.
Other Info:
Thanks to @killedmyself for introducing me to the Color Correct node from comfyui-art-venture. This has really been useful in countering the color fade from Ultimate SD Upscale.
I only use the Brightness, Contrast, and Saturation options for that node, but feel free to adjust to your liking.
Disclaimer: Please be aware that sometimes things break when updates are made by comfy or by the custom node creators.
"Load Lora" node is not needed. To use a lora, please use the "Select to add Lora" option on the "Positive Prompt" node. You can specify the weights just like in A1111 or similar interfaces.
Note: The fix for the apply_gaussian_blur error (courtesy of @Catastrophy ): "The problem lies currently within the github project "TTPlanetPig / Comfyui_TTP_Toolset". In one commit the function called "apply_gaussian_blur" was removed, although it is still used in the project. The workaround is described in Issue #15. It mentions restoring a function. To do this you have to manually edit one file in the ComfyUI folder, save it, and restart ComfyUI."
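For context, a Gaussian-blur helper of that kind can be sketched in pure Python. This is a hypothetical reconstruction for illustration only: the function name comes from the Comfyui_TTP_Toolset project, but the body and signature here are my guess, so use the code from Issue #15 for the actual fix.

```python
import math

# Hypothetical sketch -- the real apply_gaussian_blur in
# Comfyui_TTP_Toolset may differ; see Issue #15 for the restored code.
def apply_gaussian_blur(image, radius=1.0):
    """Blur a 2D grid of floats with a separable Gaussian kernel."""
    size = max(1, int(3 * radius))
    kernel = [math.exp(-(x * x) / (2 * radius * radius))
              for x in range(-size, size + 1)]
    norm = sum(kernel)
    kernel = [k / norm for k in kernel]  # normalize so weights sum to 1

    def blur_rows(grid):
        out = []
        for row in grid:
            blurred = []
            for i in range(len(row)):
                acc = 0.0
                for j, k in enumerate(kernel):
                    idx = min(max(i + j - size, 0), len(row) - 1)  # clamp edges
                    acc += row[idx] * k
                blurred.append(acc)
            out.append(blurred)
        return out

    rows = blur_rows(image)
    cols = blur_rows(list(map(list, zip(*rows))))  # transpose, blur columns,
    return list(map(list, zip(*cols)))             # then transpose back

# Sanity check: a uniform image stays (numerically) uniform after blurring.
flat = [[0.5] * 4 for _ in range(4)]
out = apply_gaussian_blur(flat, radius=1.0)
print(abs(out[0][0] - 0.5) < 1e-9)  # True
```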
Note: if your prompts seem like they are being completely ignored, please make sure to check that the "Mode" on the prompt nodes is set to Populate and not Fixed or Reproduce.
If you are running into an issue where your number values are being changed from something like "0.25" to "0.25000000001", try toggling on "Disable default float widget rounding" in the ComfyUI settings under Settings > Lite Graph > Node Widget. Thanks to @DraconicDragon for the info!
v5d changes:
USDU is not connected to Detail Daemon
Nodes that were hidden behind other nodes are no longer hidden (probably).
Sample images were done using the new (to me) sampler: er_sde
v5c changes:
Dropped the Color Match node before the USDU nodes.
Nice feature, but not being able to bypass it was pretty annoying for me.
Using the Color Correct node at the end of the workflow works well enough and it can be bypassed.
Dropped ControlNet and IMG2IMG.
I do not use these enough to justify making everyone deal with the hassle of putting an image in the Load Image nodes. If you liked those functions, you can easily add those into this workflow or continue using v5b and older versions.
Added Dynamic Thresholding back in.
If you are not familiar with how to use this node, you can just leave it disabled or read up on it here.
TL;DR this allows you to use higher CFG values while it mimics whatever value you put in on the node. (e.g. CFG 10 on the normal CFG setting with CFG 6 on the Dynamic Thresholding node.)
v6 changes:
Stripped down the workflow a bit and changed the upscaling process. I wanted to remove the functions that I hardly (or never) use. I do not plan on adding any extra functions to this version.
Dropped IMG2IMG, ControlNet, and Ultimate SD Upscale from the workflow.
If you like those functions, please continue to use the previous workflows or modify this one to include them.
This version requires fewer custom nodes than before.
Replaced USDU with Iterative Upscaling (from Impact Pack).
The benefits: upscaling is more stable.
The drawbacks:
Not faster and can be slower depending on settings.
Less details (in my opinion).
Does not work with CFG++SamplerSelect or Detail Daemon. Those two nodes will only affect the initial KSampler.
Added Dynamic Thresholding back in.
If you are not familiar with how to use this node, you can just leave it disabled or read up on it here.
On my 3060:
Using Euler A the full workflow takes 170 seconds from start to finish.
Using Euler A CFG++ takes 162 seconds from start to finish. (due to less initial steps needed).
This will be a trial run of this workflow. Not 100% committed to this one yet.
v5b changes:
Edit: Updated the demo_settings version with the correct upscale settings on the 2nd KSampler. (0439 US Eastern Time 24 Feb 2025). It was set to 2 instead of 1 for the Upscale Factor.
Dropped ComfyUI-Adaptive-Guidance
Did not seem beneficial enough to keep in the workflow
To make full use of it, I would have to create a toggle for the normal node and the negative node version at a minimum.
I got better results when just using a standard guider node in many cases.
Added a switch from ComfyUI_Comfyroll_CustomNodes that allows the IMG2IMG group to be bypassed.
This node just changes the latent source going into the first KSampler.
You will still have to have an image placed in the Load Image node AFAIK, but you can try not having one there and see if it works.
Added a switch to allow for either latent upscaling or upscaling image with model below the 1st KSampler.
This affects what latent source feeds into the 2nd KSampler.
The 2nd KSampler by default is set to 1x Upscale, but you can adjust it to a higher number. I use it as a 2nd pass KSampler.
v5a changes:
Added ComfyUI-Adaptive-Guidance
This will only affect the 2 KSamplers at the start of the workflow.
Cannot be bypassed. You can remove this from the workflow if it is not for you. Just make sure to add a Guider node of some sort and connect it to the KSamplers or the workflow will be broken.
I discovered this node while looking up settings for "specialized" samplers.
Added ControlNet
This is connected to the 1st KSampler and can be bypassed.
You might have to put an image in the Load Image node even if you are not using the ControlNet Group.
Utilizes an All-in-One processor node from comfyui_controlnet_aux
The AIO processor node will download any missing processor files based on what you select on the node to use (at least for me it did).
You need to download a ControlNet model to use in the LoadControlNet Model node. I am using ControlNet-Union (promax version) which can be downloaded from here.
Re-added MaHiRo (ComfyUI v0.3.8+)
I used it in the demo images for v5a
If you are on an older version of comfy, you can just delete it from the workflow
It can be bypassed
v5 changes:
Dropped ComfyUI-APG_ImYourCFGNow from the workflow.
Ran into an issue when using FaceDetailer on certain settings that would break the workflow. I tested using non-custom nodes with FaceDetailer and added the custom nodes back one by one. The APG node was found to be the cause.
Dropped sd-dynamic-thresholding.
While testing the v5 workflow with/without this, I found the results to be better without it.
Dropped MaHiRo
It seemed to do the opposite of its function for me.
Added sd-perturbed-attention
If you want to know what it does, the paper on it can be found here.
My experience so far with PAG is that your CFG + PAG scale should total what you would normally use for your CFG setting by itself.
Example: if you are using a sampler from CFG++SamplerSelect, then your total should equal 2. CFG 0.5 + scale 1.5 = 2.
So if you are using a normal sampler that would be something like CFG 6, then you could do something like CFG 3 + scale 3.
This is just from a non-technical standpoint and personal testing, so I could still be wrong.
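The rule of thumb above can be written as a tiny helper. This is just an illustration of the arithmetic; split_guidance is a made-up name, not a node or setting in the workflow:

```python
# Pick a total guidance budget and split it between CFG and PAG scale.
# The fraction is up to taste; the examples above use 50/50 for normal
# samplers (3 + 3 = 6) and 0.5 + 1.5 = 2 for CFG++ samplers.
def split_guidance(total, pag_fraction=0.5):
    """Return (cfg, pag_scale) that together equal the usual standalone CFG."""
    pag_scale = total * pag_fraction
    cfg = total - pag_scale
    return cfg, pag_scale

print(split_guidance(6.0))        # (3.0, 3.0) -- the normal-sampler example
print(split_guidance(2.0, 0.75))  # (0.5, 1.5) -- the CFG++ example
```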
Most of the groups in the workflow can be bypassed again since this workflow is not using the SET/GET nodes.
The tradeoff is oodles of noodles.
Less Noodles (a) version changes:
Re-added ComfyUI-ppm back in.
This adds in adjusted CFG++ samplers and some additional schedulers as well.
Thanks to @Catastrophy, the samplers for this will now save automatically to the metadata in Image Saver.
A toggle has been added for this in case people want to use the normal samplers.
Added APG I'm Your CFG Now
I wanted to give this a whirl since it was mentioned in Lobotomized Mix's description.
Added MaHiRo to the workflow. This is a test/beta node pre-installed on ComfyUI as of ComfyUI v0.3.8
I actually broke my comfy install while working on this version thanks to updating comfy. As a result, I had to do a clean install.
ComfyUI Manager did not install all of the Missing Custom Nodes in one session.
I had to run the Install Missing Custom Nodes function in two separate sessions of ComfyUI. (as in I ran the Install Missing Custom Nodes function, restarted comfy, ran it again, and restarted comfy again.)
My experience with the reinstall was that USDU did not want to import properly.
I had to clone it into a folder outside of my comfy install and then take that new USDU folder and paste it into my comfyui/custom nodes folder.
v4g Less Noodles Test changes:
Trying out the Set/Get Nodes from ComfyUI-KJNodes at the suggestion of @Catastrophy
As the version name suggests, these help clear up the workflow (visually).
Disclaimer: this is a test version and should work just like v4g except that bypassing Dynamic Thresholding and Detail Daemon break the flow.
If you don't like using Dynamic Thresholding and/or Detail Daemon, I would suggest sticking to v4g or you can adjust the workflow to your taste.
Removed the Alternative Watermark Removal portion from the workflow.
v4g changes:
Generation time from start to finish on a 3060 RTX:
With the default settings and most of the extra stuff turned off: 165 seconds (including checkpoint loading time).
With everything enabled: 200 seconds (including checkpoint loading time).
Added Image Saver nodes back in and dropped ComfyUI-ImageMetadataExtension
This was primarily due to compatibility issues with the SamplerCustom node.
Added ImpactWildcardEncode nodes back in and dropped the split positive prompt nodes and Efficiency Nodes.
This is for compatibility with the Image Saver nodes and to keep the lora/embedding info present in the resources shown when posting to civitai.
The first KSampler has been swapped to SamplerCustom
This was a choice based on preference and wanting to be able to use ComfyUI-Detail-Daemon from the beginning of the process.
Added a 2nd USDU (ComfyUI_UltimateSDUpscale) node for a 2nd pass.
IMO, USDU seems to strip out some details on the 1st pass.
The 2nd pass seems to help add details back in.
v4f changes:
Generation time from start to finish on a 3060 RTX:
Everything but wildcards enabled took 133 seconds.
Bypassing all the extras and doing 2x upscale in the 2nd KSampler took 73 seconds.
Testing used euler_ancestral_cfg_pp (sampler) + karras (scheduler) on lobotomizedMix_v10 (v-pred model).
Adjusted the prompting order. It seems to give better results. YMMV.
Thanks to @TsunSayu for the suggestion
v4e changes:
Removed the 1st USDU node and replaced it with a KSampler (Sampler Custom)
This node upscales via the same upscaler model as USDU
Added a node to pick the upscale factor without you having to do the math. (Example: if your initial image is 1024x1024 and you set the Scale Factor to "2" this KSampler will upscale it to 2048x2048).
Node for selecting denoise for this node has been added as well to keep it separate from the USDU settings for denoise.
Watermark Removal
Split up the nodes to be just like how they are grouped in the workflow it originally came from.
Added the alternative version back in for those who want to use it. (Personally, I will stick to what I know works.)
Added Fast Groups Bypasser from rgthree
This allows you to toggle groups on/off in one place and also provides a way to go to any group by clicking on the arrow button.
Added a Detailer Group after the 1st Upscale.
This can be bypassed if you don't want to use it.
Dynamic Thresholding and Detail Daemon are set to bypass by default.
If you like using these (I do), then just re-enable them and adjust your parameters accordingly.
v4d changes:
Return of the old watermark flow.
was-node-suite-comfyui is required.
I use this watermark detector model, which can be found here.
Another detection model that is more aggressive can be found here.
Added a Seed Generator node to use the same seed across the workflow.
The only exception is the wildcard node. If you want to fix the seed on that node, you will have to do it manually. Having it connected to the seed generator node caused the same image to be recreated even when not set to "fixed". YMMV, but that was my experience with it.
The ModelSamplingDiscrete node has been added back in for folks using v-pred models.
You may or may not need it. It will be set to bypass by default.
Bookmarks have been reduced to 6.
They are set in a way that fits a 2560x1440 monitor, so if this does not work for you, you can delete them or ignore them.
v4c changes:
Added notes to pretty much everything on the workflow.
Trimmed down the Watermark Removal portion of the workflow thanks to a random person on the civitai discord providing a better one. No need for a detection model anymore. Yay!
This didn't work out. It would work sometimes, but other times it would destroy the picture. Re-added the old watermark removal in v4d.
Changed upscaling to use two USDU nodes. First to 1.5x, the second to roughly 2x.
Allegedly, this results in more detail (and I love details).
You can use a 2nd KSampler instead of the first USDU node, but that's up to you.
More re-arranging.
If you don't like spaghetti, install ComfyUI-Custom-Scripts, go into your Comfy settings, find "pysssss" in the menu, click it, find LinkRenderMode, and pick "Straight" from the dropdown in that section. OR you can find a setting to hide the links entirely; I know it exists somewhere.
Added more bookmarks: up to 7 now.
The default settings in v4c upscaling is set to 1.5 and 2x (of the original image) in USDU. This has given me better results as far as quality goes, but can easily be toggled off if it's not for you.
v4b changes:
Added some QoL nodes.
Bookmarks added numbered 1 through 4 in places I thought were useful.
Just press 1, 2, 3, or 4 while not in a place where you input text/numbers to try them out.
Added a new (to me) Save Image node that does show the models/loras used when uploading to civitai.
v4a changes:
Added option to use Wildcards.
If you don't want to use wildcards, just click on the ImpactWildcardProcessor node and press Ctrl+B to bypass it, OR make sure the upper text box of the node is empty. The better option is Ctrl+B (or deleting the node).
Other than that, some QoL changes and rearranging of the nodes.
v4 changes:
I am no longer using the Image Saver nodes as of v4. I tried to streamline the workflow and keep the features that I found the most useful. This workflow took inspiration from v3 and from the workflow that the author of NTR Mix had on some of their example images.
The current settings are to my preferences. You will need to adjust if you plan on using different samplers, etc.
The upscaling is set to 2x with half tile enabled in USDU. This has given me better results as far as quality goes, but can easily be toggled off if it's not for you.
Dropped ControlNet completely, it's not for me. (v3 and earlier has it)
With the current settings, I generate an image from start to finish in about 90 seconds on a 3060 RTX.
v4 uses Efficient Loader for the checkpoint/model and VAE. For loras it uses Lora Stacker. Both of those come from efficiency-nodes-comfyui.
Actually Simple:
Added a no-frills workflow for those who really just want to keep it simple, but want a little (very little) more than the default workflow. Check the "about" section off to the right for links to the two custom node packages required.
Older versions are no longer available since they were purged when I removed a lot of my old NSFW images.