
FLUX DEV Hi-Res Fix, Img2img, In- & Out- paint, Wildcards, LORAs, Ultimate SD Upscaler, ControlNet, Adetailer

Published: Aug 3, 2024
Base Model: Flux.1 D
Hash (AutoV2): C922546D4C
The FLUX.1 [dev] Model is licensed by Black Forest Labs Inc. under the FLUX.1 [dev] Non-Commercial License. Copyright Black Forest Labs Inc.
IN NO EVENT SHALL BLACK FOREST LABS, INC. BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH USE OF THIS MODEL.

Basics:

1) You MUST update ComfyUI from the Update folder (run both .bat files).

2) You will need ComfyUI Manager, and you must use it to install some custom nodes:

https://github.com/ltdrdata/ComfyUI-Manager

3) Download the .zip file from here and unzip it.

4) In ComfyUI, Load (or drag) the .json file to open the workflow.

NOTE: Dragging a picture onto your ComfyUI might load an older version of the workflow.

NOTE: The Prompt node contains 2 text boxes. Do NOT prompt into the clip_l box; it follows prompts poorly and gives weird results.

GREEN Nodes: In these nodes you can freely change numbers to get what you want.

RED Nodes: These are my recommended settings. Feel free to experiment, though.

=============================================

FEATURES:

LORAs

\ComfyUI\models\loras
  • Make sure you either re-launch or refresh ComfyUI after adding any LORAs.

Wildcards

\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\wildcards
  • You can place them in subfolders, too.

  • Make sure you either re-launch or refresh ComfyUI after adding any Wildcard.

WILDCARD NODE:

  • Populate mode allows you to prompt into the upper box with Wildcards.

  • Fixed mode allows you to prompt in the lower box without Wildcards.
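A wildcard file is just a plain-text list of options, one per line; in Populate mode, each __name__ token in the prompt is replaced with a random line from the matching file. A minimal sketch of that expansion idea (a hypothetical helper, not the Impact Pack's actual code):

```python
import random
import re

def expand_wildcards(prompt, wildcards, seed=None):
    """Replace each __name__ token with a random entry from that wildcard list.

    `wildcards` stands in for the .txt files in the wildcards folder.
    Unknown tokens are left untouched, mirroring how a missing file behaves.
    """
    rng = random.Random(seed)

    def pick(match):
        options = wildcards.get(match.group(1))
        return rng.choice(options) if options else match.group(0)

    # Slashes are allowed in the token name to illustrate subfolder-style names.
    return re.sub(r"__([\w/]+)__", pick, prompt)

cards = {"hair": ["red hair", "black hair", "silver hair"]}
print(expand_wildcards("portrait of a woman, __hair__", cards, seed=1))
```

The fixed seed makes a run repeatable, which is the same reason the node offers a Fixed mode: re-rolling the wildcard changes the prompt, not the rest of the workflow.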

LLM

\ComfyUI\models\lm_gguf\
  • As an alternative to Wildcards, you can use an LLM AI model to generate a descriptive prompt from a shorter one that you type.

  • Link to the models I use:

https://huggingface.co/MaziyarPanahi/Mistral-7B-Instruct-v0.3-GGUF/tree/main

NOTE: The lighter the model is, the less system RAM it requires to be loaded.

  • Using the heaviest model (Mistral-7B-Instruct-v0.3.fp16) and FLUX-Dev fp16 can require up to 50 GB of system RAM.

  • On my RTX 3090, I was able to generate an image at 4MP (3296 x 2560) in about 700 secs with the heaviest options on.

  • A middle-of-the-road alternative is Mistral-7B-Instruct-v0.3.Q4_K_M.
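The RAM numbers above follow from simple arithmetic: weight memory is roughly parameter count times bits per weight. A back-of-envelope sketch (the ~4.85 bits/weight figure for Q4_K_M and the 20% overhead are rough assumptions, not measured values):

```python
def model_ram_gb(params_billion, bits_per_weight, overhead=1.2):
    """Rough load-size estimate: params x bits/8 bytes, plus ~20% for buffers."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return round(weight_bytes * overhead / 1e9, 1)

print(model_ram_gb(7, 16))    # fp16 Mistral-7B: roughly 16.8 GB just for the LLM
print(model_ram_gb(7, 4.85))  # Q4_K_M (~4.85 bits/weight): roughly 5.1 GB
```

FLUX-Dev has roughly 12B parameters, so its fp16 weights add roughly another 24 GB on top, which is how the combined total approaches the 50 GB mentioned above.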

ControlNet (CN)

\ComfyUI\models\controlnet
  • Make sure you either re-launch or refresh ComfyUI after adding any CN model

  • FLUX: Since v2.2, the workflow should work with any FLUX ControlNet model.

    https://huggingface.co/Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro

  • Best Depth pre-processor: DepthAnythingv2

  • Best OpenPose pre-processor: DWPreprocessor

  • NOTE: You need to play with settings if you are getting weird results.

    • end_percent : Setting this between 0.2 and 0.5 usually does the trick.

    • strength : Sometimes, you might want to lower this if you are still getting weird results.

  • SD (XL or 1.5): For SD-to-FLUX workflows, everything works.
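The end_percent setting above is simply a fraction of the total sampling steps during which ControlNet stays active. A toy sketch of that mapping (a hypothetical helper, not ComfyUI's internals):

```python
def cn_active_steps(total_steps, start_percent=0.0, end_percent=0.35):
    """Sampler steps during which ControlNet influences the image."""
    first = round(total_steps * start_percent)
    last = round(total_steps * end_percent)
    return range(first, last)

# With 20 steps and end_percent=0.35, CN guides only the first 7 steps,
# locking in composition early while FLUX finishes the details freely.
print(list(cn_active_steps(20, end_percent=0.35)))  # steps 0 through 6
```

This is why raising end_percent makes the result hug the control image harder, and why lowering strength is the other lever when results still look weird.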

ADetailer

\ComfyUI\models\ultralytics\bbox
\ComfyUI\models\ultralytics\segm

It adds a loooot of spaghetti nodes, but it definitely makes things better.

Also, if you get a "no Dill" warning, either use bbox models, or "no Dill" segmentation models.

Ultimate SD Upscaler

\ComfyUI\models\upscale_models
  • Make sure you either re-launch or refresh ComfyUI after adding any upscaler model.

Comes in separate workflows.

Resolution Chooser:

You can choose how to input the resolution:

Just connect the box you want (only one at a time!) to latent_image
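Whichever resolution box you connect, FLUX behaves best when both dimensions snap to a safe multiple (64 is a common choice). A sketch of how a megapixel-plus-aspect-ratio input can be turned into safe dimensions (a hypothetical helper, not the actual node):

```python
def safe_resolution(megapixels, aspect_w, aspect_h, multiple=64):
    """Width/height near the target megapixel count, snapped to a safe multiple."""
    target_px = megapixels * 1_000_000
    ratio = aspect_w / aspect_h
    height = (target_px / ratio) ** 0.5
    width = height * ratio

    def snap(v):
        return max(multiple, round(v / multiple) * multiple)

    return snap(width), snap(height)

print(safe_resolution(1.0, 16, 9))  # → (1344, 768)
print(safe_resolution(1.0, 1, 1))   # → (1024, 1024)
```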

=============================================

Workflows:

img2img

Version 1.1: Added LORA support, as well as the ability to set image resolution.

NOTE: Using a person's LORA while using img2img will basically work as a face changer by attaching the LORA's face to the body that is being img2img-ed.

=============================================

High-Res Ultimate 2.x &

High-Res Ultimate 2.x LLM

2.3 Update: Fixed the controlnet auto-size image.

2.2 Update: Added compatibility with all FLUX ControlNet models.

This one does:

  1. 1st Pass: text2img. It can use Wildcards / LLM, LORAs, and 2x ControlNet.

  2. 2nd Pass: HighRes Fix using Flux.

  3. 3rd & 4th Pass: Adetailer.

    You should look here on CivitAI for extra models for detection, such as nails, glasses, eyes, etc. It can use LORAs.

  4. 5th Pass: Ultimate SD Upscaler using a model of your choice.

For models, see the Suggested Resources section. Those are models I am currently using.

NOTE: From now on, these are the only text2img and high-res fix workflows I will update. If you don't want to use a feature, just click its node and press Ctrl+B to Bypass it (it's a toggle).

NOTE on LLM: For now, I have implemented blue coloring for Loader nodes and black coloring for Random nodes.

Also, apologies for the spaghetti, Adetailer really needs a lot of links. 😅

PRO TIP!

This workflow generates an image at each stage.

If you get a bad result at any step:

1) CANCEL the process from the queue.

2) Load in ComfyUI the last good image (drag and drop in the interface).

3) Change the options that caused the bad result (in ADetailer, for example, you might need to either increase or decrease denoise).

4) Generate the image. The process resumes from the image you loaded, NOT FROM THE BEGINNING, so you don't waste time! (This typically works for ADetailer, as long as the earlier passes are still in the cache.)

In this example, I got a poor hand fix. So I stopped the process, re-loaded the last good image, increased denoise in the hand detailer node, and the process resumed from the last step (the hand fix, in this case ADetailer #2) without having to re-do everything.

SD TO FLUX Ultimate 2.x

2.3 Update: Fixed the controlnet auto-size image.

NOTE: This workflow requires SD ControlNets!

This one does:

  1. STEP 1: SD txt2img (SD1.5 or SDXL/PonyXL),

    ControlNet is at this stage, so you need to use the correct model (either SD1.5 or SDXL).

    It has Wildcards, and SD LORAs support.

  2. STEP 2: Flux High Res Fix

    Has FLUX LORAs support

  3. STEP 3-4: Adetailer (1-2 steps, I typically use Face and Hands)

    Important, the denoise value needed depends on the image. If you get a bad result at any stage, use the PRO TIP above.

    Has FLUX LORA support

  4. STEP 4: Ultimate SD Upscaler

=============================================

Inpainting (with Sampling from another image)

1. Press "choose file to upload" and choose the image you want to inpaint.

2. Right-Click on the image and select "Open in Mask Editor". There, you'll be able to paint the mask.

3. When you are done with the inpainting, press "Save to Node".

4. Sampling: Now you can use elements from either the same or a different image to inpaint. The second image can be any image at all (but it must be the same size, or be resized to it). Just load it in the second Load Image node, and mask the part you want to be used as "source material" to inpaint your first image.

  • If you don't want to use this option, just disable the 2nd image node (ctrl+B)
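Conceptually, what the sampler receives is a straight mask blend: wherever the mask is painted, pixels come from the source image; everywhere else, the original is kept. A toy per-pixel illustration of that idea only (not the workflow's actual node logic):

```python
def mask_blend(dst, src, mask):
    """Per-pixel blend: where mask=1 take src, where mask=0 keep dst."""
    return [m * s + (1 - m) * d for d, s, m in zip(dst, src, mask)]

# Toy 4-pixel row: the mask pulls the middle two pixels from the source image.
print(mask_blend([0.0, 0.0, 0.0, 0.0], [1.0, 1.0, 1.0, 1.0], [0.0, 1.0, 1.0, 0.0]))
# → [0.0, 1.0, 1.0, 0.0]
```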

=============================================

SDXL inpainting (with Sampling from another image)

Why this workflow here?

As of right now, I am not getting certain "image details" from LORAs, so this is a workaround.

  1. Just generate your images with FLUX, and then inpaint nipples and other stuff using SDXL or Pony or SD1.5 models to get your desired results.

  2. Why not ADetailer? Simple: it's faster to regenerate only the details you want many times than to regenerate the whole image each time and hope the details come out right. Especially in ComfyUI.

Sampling: Now you can use elements from either the same or a different image to inpaint. The second image can be any image at all (but it must be the same size, or be resized to it). Just load it in the second Load Image node, and mask the part you want to be used as "source material" to inpaint your first image.

  • If you don't want to use this option, just disable the 2nd image node (ctrl+B)

=============================================

Outpainting:

Not 100% super duper, but you can get decent results by extending by up to 256 pixels per side. You might need a few re-rolls, though.
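The 256-pixel-per-side limit just bounds how big the padded canvas can get. A quick sketch of the canvas arithmetic (a hypothetical helper):

```python
def outpaint_canvas(w, h, left=0, right=0, top=0, bottom=0, limit=256):
    """New canvas size after padding each side, clamped to the suggested limit."""
    left, right, top, bottom = (min(p, limit) for p in (left, right, top, bottom))
    return w + left + right, h + top + bottom

print(outpaint_canvas(1024, 1024, left=256, right=256))  # → (1536, 1024)
print(outpaint_canvas(1024, 1024, top=999))              # clamped → (1024, 1280)
```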

=============================================

DEPRECATED: aka no longer supported (but still good)

High-Res fix 1.3 LITE (deprecated)

Basically, this workflow works in 2 stages:

  • text2img: Here I added a node that allows you to select Flux safe resolutions by clicking the dimensions button in the Green Node.

  • img2img: This regenerates the image at a higher resolution, the Green Node is where you select the upscaling factor, similar to A1111.

Version 1.1: Added a preview for each stage of the process.

Version 1.2: Added the dedicated Flux node for prompting. It includes the Guidance scale, but only use the T5XXL box (the lower one).

Version 1.3: Removed the secondary upscaling. It was added as a separate workflow.

=============================================

High-Res fix CN (Wildcards, Loras, ControlNet) (deprecated)

NOTE: Please use version 1.6+. Previous versions did not work with LORAs properly.

2-Pass workflow:

  1. Flux txt2img

  2. Flux High Res Fix

This has everything High-Res fix 1.3 LITE has, plus Wildcards and LORAs support.

High-Res fix CN + Upscale (ControlNet, Wildcards, Loras, Ultimate SD Upscaler) (deprecated)

3-Pass workflow:

  1. Flux txt2img

  2. Flux img2img

  3. Ultimate SD Upscale

This workflow offers everything that High-Res fix does, but also has the Ultimate SD Upscaler (it upscales the final image one tile at a time).

=============================================

text2img CN (ControlNet, Wildcards and Loras) (deprecated)

1-Pass workflow:

  1. Flux txt2img

text2img with Wildcards and LORA support. This one has no High-Res fix.

Now includes Resolution Chooser (see the high res version above for explanations).

text2img CN + Upscale (ControlNet, Wildcards, Loras, Ultimate SD Upscaler) (deprecated)

2-Pass workflow:

  1. Flux txt2img

  2. Ultimate SD Upscale

Just like text2img but also with Ultimate SD Upscaler.

text2img Adetailer (Wildcards, Loras, Adetailer) (deprecated)

Up to 3-pass workflow:

  1. Flux txt2img

  2. Adetailer #1

  3. Adetailer #2 (disable nodes with CTRL+B if not needed).

Each Adetailer pass supports its independent prompting and LORA.

=============================================

SDXL to FLUX CN (ControlNet, Wildcards and Loras)

Works with SDXL / PonyXL / SD1.5

2-Pass workflow:

  1. SD txt2img

  2. Flux High Res Fix

This allows you to generate images in any of your favorite styles and automatically send them to img2img with FLUX. You might need to play with the denoise value to get the best results.

SDXL to FLUX CN + Upscaler (ControlNet, Wildcards, Loras, Ultimate SD Upscaler)

Works with SDXL / PonyXL / SD1.5

3-Pass workflow:

  1. SD txt2img

  2. Flux High Res Fix

  3. Ultimate SD Upscaler

=============================================

Upscaling: Just like the "EXTRA" Tab in A1111 / Forge

\ComfyUI\models\upscale_models

Not such a great way to upscale images IMO, but I included it here in case you want it.

  • Upscaling:

    The math is a bit weird: if you are using a x4 upscaler, that x4 is applied automatically, so you need to multiply the model's factor by the resize factor you set to get the final scaling factor.

    Example: 4 x 0.25 = 1 (no upscaling)

  • Just use any upscaler you want.
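The upscaling math above boils down to one multiplication: the model's native factor times the resize factor you set. A one-liner to sanity-check your settings:

```python
def final_scale(model_factor, resize_factor):
    """Effective upscale: the model's native factor times your resize factor."""
    return model_factor * resize_factor

print(final_scale(4, 0.25))  # → 1.0 (no upscaling, as in the example above)
print(final_scale(4, 0.5))   # → 2.0 (a 2x final image from a x4 model)
```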