Type: Workflows
Stats: 19,809
Reviews: 1,028
Published: Aug 3, 2024
Hash: AutoV2 5F6E74B31E
Basics:
1) You MUST update ComfyUI from the Update folder (using both .bat files).
2) You will need ComfyUI Manager; use it to install the required custom nodes.
https://github.com/ltdrdata/ComfyUI-Manager
3) Choose the workflow you need from the top of the Civitai page, download the .zip file, and unzip it here:
ComfyUI\user\default\workflows
4) In ComfyUI, Load (or drag) the .json file to open the workflow.
NOTE: Dragging a picture onto ComfyUI might load an older version of the workflow embedded in it. Use the .json files instead.
NOTE: The prompt node has two text boxes. Do NOT prompt into the clip_l box; it follows prompts poorly and gives weird results.
NODE COLORING:
GREEN Nodes: In these nodes you can freely change numbers to get what you want.
RED Nodes: These are my recommended settings. Feel free to experiment, though.
BLUE Nodes: These are loader nodes. Make sure you load your files here (just click a filename and pick one from the options). If no options appear, either you didn't place any file in the correct path or, if you did, you might need to restart ComfyUI.
=============================================
FEATURES:
LORAs
\ComfyUI\models\loras
Make sure you either re-launch or refresh ComfyUI after adding any LORAs while it's running.
Wildcards
\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\wildcards
You can place them in subfolders, too.
Make sure you either re-launch or refresh ComfyUI after adding any Wildcard while it's running.
WILDCARD NODE:
Populate mode allows you to prompt into the upper box with Wildcards.
Fixed mode allows you to prompt in the lower box without Wildcards.
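To make the Populate/Fixed distinction concrete, here is a minimal, purely illustrative sketch of what wildcard expansion does conceptually: each __name__ token in the prompt is replaced with a random line from the matching wildcard file. The function name and regex are my own for illustration, not the Impact Pack's actual code.

```python
import random
import re

def expand_wildcards(prompt: str, wildcards: dict, seed=None) -> str:
    """Replace each __name__ token with a random entry from wildcards[name].

    `wildcards` maps a wildcard name to the lines of its .txt file.
    Unknown names are left untouched.
    """
    rng = random.Random(seed)

    def pick(match):
        options = wildcards.get(match.group(1))
        return rng.choice(options) if options else match.group(0)

    return re.sub(r"__([\w/-]+)__", pick, prompt)

hair_options = {"hair": ["red hair", "black hair", "silver hair"]}
print(expand_wildcards("1girl, __hair__, smiling", hair_options, seed=0))
```

Fixed mode corresponds to skipping this substitution entirely: the lower box is sent as-is, so the same prompt always produces the same conditioning text.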
LLM
\ComfyUI\models\lm_gguf\
As an alternative to the Wildcards node, you can use an LLM AI model to generate a descriptive prompt from a shorter one that you type.
Link to the models I use:
https://huggingface.co/MaziyarPanahi/Mistral-7B-Instruct-v0.3-GGUF/tree/main
NOTE: The lighter the model is, the less system RAM it requires to be loaded.
Using the heaviest model (Mistral-7B-Instruct-v0.3.fp16) and FLUX-Dev fp16 can require up to 50 GB of system RAM.
On my RTX 3090, I was able to generate an image at 4MP (3296 x 2560) in about 700 secs with the heaviest options on.
A middle-of-the-road alternative is Mistral-7B-Instruct-v0.3.Q4_K_M.
ControlNet (CN)
\ComfyUI\models\controlnet
Make sure you either re-launch or refresh ComfyUI after adding any CN model while it's running.
FLUX: Since v2.2, this should work with any model.
https://huggingface.co/Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro
Best Depth pre-processor: DepthAnythingv2
Best OpenPose pre-processor: DWPreprocessor
NOTE: You need to play with settings if you are getting weird results.
end_percent: Setting this between 0.2 and 0.5 usually does the trick.
strength: You might also want to lower this if you are still getting weird results.
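To see why end_percent between 0.2 and 0.5 helps, note that it controls for how much of the denoising schedule the ControlNet stays active. A rough sketch (assuming, as ComfyUI's advanced ControlNet application does, that the percentages map linearly onto the step index):

```python
def controlnet_active_steps(total_steps: int, start_percent: float, end_percent: float) -> range:
    """Return the sampler steps during which the ControlNet is applied.

    0.0 = very first step, 1.0 = very last step. An end_percent below 1.0
    lets the ControlNet fix the composition early, then releases control
    so the model can refine details freely.
    """
    first = int(round(start_percent * total_steps))
    last = int(round(end_percent * total_steps))
    return range(first, last)

# With 30 steps and end_percent = 0.4, guidance applies on steps 0-11 only.
print(list(controlnet_active_steps(30, 0.0, 0.4)))
```

So lowering end_percent shortens the window in which the ControlNet constrains the image, which is often all it takes to stop the weird results.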
SD (XL or 1.5): For SD-to-FLUX workflows, everything works.
ADetailer
\ComfyUI\models\ultralytics\bbox
\ComfyUI\models\ultralytics\segm
If you get a "no Dill" warning, either use a bbox model or one of the "no Dill" segmentation models you can find here on Civitai.
Ultimate SD Upscaler
\ComfyUI\models\upscale_models
Make sure you either re-launch or refresh ComfyUI after adding any model while it's running.
Comes in separate workflows.
Ultimate SD Upscaler takes a lot of system resources! It will generate 4+ tiles that will eventually be merged to create the final image.
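The "4+ tiles" claim above follows directly from the tile math. A quick sketch (the 1024 px tile size is my assumption for illustration; the actual default depends on your node settings):

```python
import math

def tile_count(width: int, height: int, tile: int = 1024) -> int:
    """Number of tiles Ultimate SD Upscaler must regenerate for an image.

    Each tile is denoised separately and then merged, which is why large
    upscales take so long and so much VRAM.
    """
    return math.ceil(width / tile) * math.ceil(height / tile)

print(tile_count(2048, 2048))  # 2 x 2 = 4 tiles
print(tile_count(3296, 2560))  # 4 x 3 = 12 tiles
```

Doubling the output resolution roughly quadruples the number of tiles, so budget your time accordingly.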
Flux Redux (IP Adapter)
Requires two files:
flux1-redux-dev:
https://huggingface.co/black-forest-labs/FLUX.1-Redux-dev/tree/main
ComfyUI\models\style_models
sigclip_vision_patch14_384:
https://huggingface.co/Comfy-Org/sigclip_vision_384/blob/main/sigclip_vision_patch14_384.safetensors
ComfyUI\models\clip_vision
Make sure you either re-launch or refresh ComfyUI after adding any model.
Comes in separate workflows.
=============================================
Workflows:
F.1 img2img
Version 1.1: Added LORA support, as well as the ability to set image resolution.
NOTE: Using a person's LORA with img2img basically works as a face changer, attaching the LORA's face to the body being img2img-ed.
=============================================
F.1 Style Changer (RF Inversion)
This workflow allows you to input an image and change its style with FLUX as well as with LORAs.
Then, it takes the output and passes it through Ultimate SD Upscaler and finally to Adetailer to improve hands, etc.
NOTE: Keep the prompts empty and use LORAs for styling instead. This works best.
NOTE: You can use a Character LORA + face Adetailer to switch faces at this stage.
=============================================
F.1 text2img 3.0
F.1 text2img 3.0 LLM
This one does:
text2img. It can use wildcards (or LLM), LORAs, and 2x ControlNet.
High Res Fix using Flux.
Ultimate SD Upscaler
Adetailer (up to 3) with LORA support.
Look here on Civitai for extra detection models, such as nails, glasses, eyes, etc.
For models, see the Suggested Resources section. Those are models I am currently using.
NOTE: If you don't want to use some node or feature, just click on a node (or box-select multiple while holding Ctrl) and press Ctrl+B to bypass it (it's a toggle).
NOTE on LLM: For now I have implemented blue coloring for Loader nodes, and Black coloring for Random nodes.
Also, apologies for the spaghetti, Adetailer really needs a lot of links. 😅
PRO TIP!
This workflow generates an image at each stage.
If you get a bad result at any step:
1) CANCEL the process from the queue.
2) Load in ComfyUI the last good image (drag and drop in the interface).
3) Change the options that caused the bad result (in Adetailer, for example, you might need to either increase or decrease denoise).
4) Generate the image. The process resumes from the image you loaded, NOT FROM THE BEGINNING, so you don't waste time! (This typically works for Adetailer, as long as the earlier stages are still in the cache.)
In this example, I got an insufficient hand fix. So I stopped the process, re-loaded the last good image, increased denoise in the hand detailer node, and the process resumed from the last step (the hand fix, in this case ADetailer #2) without having to re-do everything.
=============================================
SD TO FLUX Ultimate 2.x
2.3 Update: Fixed the controlnet auto-size image.
NOTE: This workflow requires SD ControlNets (not flux)!
This one does:
STEP 1: SD txt2img (SD1.5 or SDXL/PonyXL),
ControlNet is at this stage, so you need to use the correct model (either SD1.5 or SDXL).
It has Wildcards, and SD LORAs support.
STEP 2: Flux High Res Fix
Has FLUX LORAs support
STEP 3-4: Adetailer (1-2 steps, I typically use Face and Hands)
Important: the denoise value needed depends on the image. If you get a bad result at any stage, use the PRO TIP above.
Has FLUX LORA support
STEP 5: Ultimate SD Upscaler
=============================================
F.1 Inpainting (with Sampling from another image)
1. Press "choose file to upload" and choose the image you want to inpaint.
2. Right-Click on the image and select "Open in Mask Editor". There, you'll be able to paint the mask.
3. When you are done with the inpainting, press "Save to Node".
4. Sampling: Now you can use elements from either the same or a different image to inpaint. The second image can be any image at all (but it must be the same size, or resized to match). Just load it in the second Load Image node, and mask the part you want to use as "source material" to inpaint your first image.
If you don't want to use this option, just disable the 2nd image node (ctrl+B)
=============================================
SDXL inpainting (with Sampling from another image)
Why this workflow here?
As of right now, I am not getting certain "image details" from LORAs, so this is a workaround.
Just generate your images with FLUX, and then inpaint nipples and other stuff using SDXL or Pony or SD1.5 models to get your desired results.
Why not Adetailer? Simple: it's faster to regenerate only the details you want, many times if needed, than to regenerate the whole image each time and hope the details come out right. Especially in ComfyUI.
Sampling: Now you can use elements from either the same or a different image to inpaint. The second image can be any image at all (but it must be the same size, or resized to match). Just load it in the second Load Image node, and mask the part you want to use as "source material" to inpaint your first image.
If you don't want to use this option, just disable the 2nd image node (ctrl+B)
=============================================
Outpainting:
Not 100% super duper, but you can get some decent results by extending by up to 256 pixels per side. You might need a bit of RNG though.
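The 256-pixels-per-side limit translates into a simple size budget. A tiny sketch (function name is mine, for illustration only):

```python
def outpaint_size(width: int, height: int, pad: int = 256) -> tuple:
    """Canvas size after padding all four sides by `pad` pixels.

    The note above suggests keeping pad <= 256; each dimension grows
    by twice the padding because both opposite sides are extended.
    """
    return width + 2 * pad, height + 2 * pad

print(outpaint_size(1024, 1024))  # (1536, 1536)
```

If you only extend one side (e.g. just the top), the corresponding dimension grows by `pad` once instead of twice.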
=============================================
=============================================
DEPRECATED: aka no longer supported
High-Res fix 1.3 LITE (deprecated)
Basically, this workflow works in 2 stages:
text2img: Here I added a node that allows you to select Flux safe resolutions by clicking the dimensions button in the Green Node.
img2img: This regenerates the image at a higher resolution, the Green Node is where you select the upscaling factor, similar to A1111.
Version 1.1: Added a preview for each stage of the process.
Version 1.2: Added the dedicated Flux node for prompting. It includes the Guidance scale, but only use the T5XXL box (the lower one).
Version 1.3: Removed the secondary upscaling. It was added as a separate workflow.
=============================================
High-Res fix CN (Wildcards, Loras, ControlNet) (deprecated)
NOTE: Please use version 1.6+. Previous versions did not work properly with LORAs.
2-Pass workflow:
Flux txt2img
Flux High Res Fix
This has everything High-Res fix 1.3 LITE has, plus Wildcards and LORAs support.
High-Res fix CN + Upscale (ControlNet, Wildcards, Loras, Ultimate SD Upscaler) (deprecated)
3-Pass workflow:
Flux txt2img
Flux img2img
Ultimate SD Upscale
This workflow offers everything that High-Res fix does, but also has the Ultimate SD Upscaler (upscales by creating one tile at a time of the final image).
=============================================
text2img CN (ControlNet, Wildcards and Loras) (deprecated)
1-Pass workflow:
Flux txt2img
text2img with Wildcards and LORA support. This one has no High-Res fix.
Now includes Resolution Chooser (see the high res version above for explanations).
text2img CN + Upscale (ControlNet, Wildcards, Loras, Ultimate SD Upscaler) (deprecated)
2-Pass workflow:
Flux txt2img
Ultimate SD Upscale
Just like text2img but also with Ultimate SD Upscaler.
text2img Adetailer (Wildcards, Loras, Adetailer) (deprecated)
Up to 3-pass workflow:
Flux txt2img
Adetailer #1
Adetailer #2 (disable nodes with CTRL+B if not needed).
Each Adetailer pass supports its own independent prompting and LORA.
=============================================
SDXL to FLUX CN (ControlNet, Wildcards and Loras)
Works with SDXL / PonyXL / SD1.5
2-Pass workflow:
SD txt2img
Flux High Res Fix
This allows you to generate images in any of your favorite styles and automatically send them to img2img with FLUX. You might need to play with the denoise value to get the best results.
SDXL to FLUX CN + Upscaler (ControlNet, Wildcards, Loras, Ultimate SD Upscaler)
Works with SDXL / PonyXL / SD1.5
3-Pass workflow:
SD txt2img
Flux High Res Fix
Ultimate SD Upscaler
=============================================
Upscaling: Just like the "Extras" tab in A1111 / Forge
\ComfyUI\models\upscale_models
Not such a great way to upscale images IMO, but I included it here if you want it.
Upscaling:
The math is a bit unintuitive. If you are using a x4 upscaler, that x4 is applied automatically, so you need to multiply it by a factor to get the final scaling you want.
Example: 4 x 0.25 = 1 (no upscaling)
Just use any upscaler you want.
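The factor math above can be written out explicitly. A minimal sketch (function name is my own; the point is just that the model's native factor multiplies whatever you set):

```python
def final_scale(model_factor: float, multiplier: float) -> float:
    """Net scaling applied to the image.

    The upscaler model's native factor (e.g. 4 for a x4 model) is applied
    automatically; your multiplier then rescales that result.
    """
    return model_factor * multiplier

print(final_scale(4, 0.25))  # 1.0 -> no net upscaling
print(final_scale(4, 0.5))   # 2.0 -> final image is 2x the original
```

So to get a 2x result from a x4 model, set the multiplier to 0.5, not 2.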