Tenofas FLUX Modular Workflow - User guide

FLUX – A guide to My Workflow (v.4.0)

Last update: September 13th, 2024 (updated to v.4.0)

FLUX came out on August 1st, and it was unexpected.
It was incredible. Everyone was on it, testing it, writing about it, trying to understand how it works and how to make it as usable as SD 1.5 or SDXL.

At the beginning it seemed impossible to have LoRAs and ControlNets, or to make FLUX light enough to run on older (or smaller) computers.
Then someone decided to try…

“Everyone knew it was impossible, until a fool who didn’t know came along and did it.” — Albert Einstein.

So LoRAs arrived, ControlNets too, and also small, light checkpoints in place of the 24 GB UNet files, which could run on PCs that did not have an RTX 4090 with 64 GB of RAM. FLUX was available to everyone, and you could do anything with it!

I had never used ComfyUI before, but on August 1st it was the only way to try FLUX. I had to install Comfy and start learning how to use it. Now I would never go back!

I started by creating a nice, easy workflow for FLUX, and then I added little things like a LoRA manager, an img2img prompt generator, an upscaler, a FaceDetailer… in a few days it had become a huge workflow. This post is meant to be a short guide to the latest version (v.4.0) of my workflow. At the end of this post you will find the files you need to run the workflow and the links for downloading them.

Please, before using the workflow, make sure you have updated ComfyUI and all the custom nodes used in the workflow. This will avoid many possible errors.

Tenofas FLUX workflow v.4.0

My complete ComfyUI workflow looks like this:

The workflow is made of several groups of nodes, which I call Modules; their colors indicate the different activities in the workflow. I will go into detail later on; for now, here is a quick overview:

- The yellow nodes at the bottom left are just instructions and links to the files you need to run the workflow.
- The blue group is the core of the workflow, where FLUX is loaded and you can set up its parameters.
- The orange nodes are for the various prompt methods.
- The red group holds the switches and selectors: here you choose which kind of prompt to use (more on this later) and whether to activate the various tools: the Latent Noise Injector, the ADetailer (if you are not generating portraits of human beings, just turn this off, as it won't work and will waste time and memory), the FaceSwap, the LUT applier and the Upscaler (same thing: if you are not planning to upscale your images, keep it turned off).
- Next to it is a bright green group with a strange name, "Latent space magic dimension": this is where all the magic of AI image generation takes place.
- The dark green nodes are the output nodes, where the generated images are previewed or compared.

At the top of the workflow there are three orange groups. These are the prompt options for when you don't want to use the txt2img prompt ("Input 1") in the core section of the workflow: "Input 2" is an img2img prompt generator that uses the Florence-2 model to convert an uploaded image into a text prompt (Input 2 on the prompt selector); "Input 3" is the LLM prompt generator: just write a short instruction or a few keywords, and the LLM model will generate a colloquial-English prompt (Input 3 on the prompt selector). Chained to this group there is a Portrait Master module that helps you generate the keywords for the LLM prompt generation when you want to create a portrait image. "Input 4" allows you to queue many different prompts and generate them in batch with one click.
Right below the core modules of the workflow there is a Latent Noise Injection module, followed on the right by the new ADetailer module (completely reworked from the previous version: it now enhances the eyes as well, not just the face skin). Right above the ADetailer you will find a FaceSwap module based on ReActor nodes.

On the right, at the end of the workflow, you will find the Ultimate SD Upscaler module and the Apply LUT module.
Then, at the bottom, there are a few nodes that allow you to compare (with a slider) the original FLUX-generated image (even with LoRAs) with the one modified by the other modules in the workflow. It's very useful if you are looking for the most realistic portrait output.

Let’s see each part of the workflow in more detail.

The Core groups

My workflow uses the original model files released on August 1st by Black Forest Labs (the developers of FLUX). I use the Dev version, but you could also run the Schnell one. This group is the core of the workflow for generating FLUX images. With the new version (4.0) you can also use the GGUF FLUX models, which are more lightweight and come in different sizes (my suggestion: Q8 for top quality, Q4 for low-VRAM GPUs). Warning: if you are not going to download and use the FLUX GGUF model files, you have to remove the "Load FLUX GGUF Model" and "Model Switch" nodes, and connect the "Load FLUX original Model" node directly to the "FLUX LoRA's Loader" node.

You have the basic txt2img prompt node (the orange one), then the blue group that loads the FLUX model and lets you choose the settings.

You can set the image size in the "Basic Image size" node; the sizes are preset to the SDXL standards for better results. You can set the Flux Guidance (usually 3.5), the Sampler/Scheduler (there are a few suggestions you can try), the Steps and the Seed (it's set to a random seed, but you can select Fixed if you want). In the middle, at the top of the group, you can select the LoRAs you want to use, set each LoRA's strength, and turn each one on or off.

The FLUX LoRA's Loader also lets you retrieve all the information you need from CIVITAI: just right-click on the LoRA you want to know more about and select "Show Info", and a lot of information will be displayed.

Warning: the first time you start a generation, the workflow needs to load the UNet and CLIP files, so it will sit "stuck" at the first nodes for a few minutes. The speed depends on which UNet weights and CLIP you are using, and on the VRAM/RAM of your computer.

Right after the blue group you will find the new red group called "Workflow Control Center". Here you can switch each module of the workflow on or off. If you just want to generate FLUX images quickly and easily, turn off all the modules here.

Then we have the bright green group, the latent space where the image is generated. Right below it there is a new "Image Overlay Text" node, very useful for testing your image generation: write a description of the image you are generating in the text box and the text will be stamped on the image; leave the text box empty if you don't need this.

Last, there is the Save Image node, now with metadata! This means the image will contain all the generation metadata (prompt, scheduler/sampler, seed, steps…) and you can also add your own metadata (use the "Create Extra MetaData" node right below the Prompt node; I use it to add my data as the creator of the image and the copyright).
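For the curious, here is a minimal sketch of how this kind of metadata ends up inside a PNG file, using Pillow. The keys and values are illustrative assumptions, not the exact fields the node writes.

```python
# Hedged sketch: embedding generation metadata in a PNG with Pillow.
# Keys/values are illustrative; the actual node may use different fields.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.open("flux_output.png")  # hypothetical output file

meta = PngInfo()
meta.add_text("prompt", "portrait of an old sailor, golden hour light")
meta.add_text("sampler", "euler/simple")
meta.add_text("seed", "123456789")
meta.add_text("steps", "20")
# The "Create Extra MetaData" idea: user-defined extra fields.
meta.add_text("creator", "Tenofas")
meta.add_text("copyright", "all rights reserved")

img.save("flux_output_meta.png", pnginfo=meta)
```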

Available Prompts

This workflow allows you to use different prompting methods. The first is the classic txt2img prompt: you just write a description of what your image should look like and generate the image. Remember to use descriptive text, as FLUX's text encoders understand natural human language very well and will give you better results. You can also try writing prompts in other languages; Italian, for example, works fine. To select this method, set all the other prompt modules to "off" in the red prompt selector.

The second method is an img2img prompt generator using the Florence-2 model. Just upload an image and the model will generate a text describing it in detail. There is a "Text Find and Replace" node that allows you to modify the output text, as the Florence-2 model always starts its description with "The image is a…"; you can change the "find" and "replace" strings as you like. Leave "more_detailed_caption" selected for a verbose prompt output. There are many Florence-2 models to choose from, but my advice is to use "MiaoshouAI/Florence-2-large-PromptGen-v1.5", as it gives the best results.
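Outside ComfyUI, the same idea looks roughly like this: caption the image with Florence-2, then clean up the caption with a simple find-and-replace. This is a sketch based on the standard Florence-2 usage pattern, not the nodes' actual code, and the file names are hypothetical.

```python
# Hedged sketch: img2img prompt generation with Florence-2 + find/replace.
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "MiaoshouAI/Florence-2-large-PromptGen-v1.5"
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True).to("cuda")
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

image = Image.open("reference.jpg").convert("RGB")  # hypothetical input
task = "<MORE_DETAILED_CAPTION>"  # the verbose captioning option

inputs = processor(text=task, images=image, return_tensors="pt").to("cuda")
ids = model.generate(input_ids=inputs["input_ids"],
                     pixel_values=inputs["pixel_values"],
                     max_new_tokens=512)
caption = processor.batch_decode(ids, skip_special_tokens=True)[0]

# The "Text Find and Replace" step: drop Florence-2's stock opening.
prompt = caption.replace("The image is a", "A")
print(prompt)
```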

I also tried the JoyCaption img2img nodes, but unfortunately JoyCaption uses an extremely large model that always goes out of memory on 16 GB of VRAM, so I decided not to add JoyCaption to my workflow, at least for the moment.

The third method (for those of you who can't write good English, or are just lazy like me) uses an LLM (Groq is suggested, as it's free at the moment, but you could also use OpenAI). You will need a (free) API key, and it has to be saved in Comfy with a specific node (TaraApiKeySaver, the bright red node in the image) that you can remove once the key is saved: just open the node, insert the key, launch a generation, and you are done.

Write your instructions in the bottom-left node: it can be a brief description of the image you want or just a few keywords. You can also try changing the LLM settings (the temperature can be set between 0.0 and 2.0; never go above 2.0 or you will get errors). More about these LLM nodes on GitHub: Tara – ComfyUI Node for LLM Integration.

My suggestion for the LLM settings: use the mixtral-8x7b model on Groq, temperature 1.0, max tokens 512-1024. In my tests these gave the best results.
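If you want to see what this module does conceptually, here is a minimal sketch using the Groq Python client with the settings above. The system prompt is my own illustrative wording, not the one the Tara nodes actually use.

```python
# Hedged sketch: expanding keywords into a full prompt via Groq.
# pip install groq; the system prompt below is an assumption.
from groq import Groq

client = Groq(api_key="YOUR_GROQ_API_KEY")  # the free key mentioned above

keywords = "old sailor, pipe, stormy harbor, oil painting"

response = client.chat.completions.create(
    model="mixtral-8x7b-32768",  # the mixtral-8x7b model on Groq
    temperature=1.0,             # keep between 0.0 and 2.0
    max_tokens=512,
    messages=[
        {"role": "system",
         "content": ("Expand the user's keywords into one detailed, "
                     "colloquial-English image generation prompt.")},
        {"role": "user", "content": keywords},
    ],
)
print(response.choices[0].message.content)
```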

Above this group you will find the Portrait Master module.

This module allows you to generate a token-style txt2img prompt (like the ones used with SD 1.5 and SDXL) and feed it to the LLM prompt generator. You can set many details of the image here, such as the description of your base character, his/her skin details, the makeup, and the style and pose of the image.

A fourth method is the batch prompt from a txt file. It's a simple txt2img batch system: write as many prompts as you want in a .txt file, upload it to the workflow, and with a single click on Queue you will generate all the prompts in the file as a batch.
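For illustration, assuming the module expects one prompt per line, a prompts.txt file could look like this:

```
a red fox sleeping in fresh snow, soft morning light
a cyberpunk street market at night, neon signs reflecting on wet asphalt
a watercolor portrait of an old fisherman mending his net
```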

Workflow Control Center

In the bright red group you have the Workflow Control Center, where the On/Off switches control which modules are used. If you turn all the switches off, you get a plain and simple FLUX txt2img generator with LoRAs. As for the prompt, this version uses the normal txt2img prompt (Input 1) if all the other prompt methods are turned off; otherwise it uses the lowest input number among the methods turned on (so, for example, if you turn on Input 2 and Input 4, it will use only Input 2, the img2img prompt).
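In code terms, the selection rule described above boils down to something like this (a sketch of the logic, not the actual selector nodes):

```python
# Hedged sketch of the prompt-selector rule: the lowest-numbered prompt
# module that is switched on wins; if none is on, Input 1 (txt2img) is used.
def select_prompt(switches: dict) -> int:
    """switches maps input number (2..4) to its On/Off state."""
    active = [n for n, on in sorted(switches.items()) if on]
    return active[0] if active else 1

print(select_prompt({2: True, 3: False, 4: True}))    # -> 2 (img2img wins)
print(select_prompt({2: False, 3: False, 4: False}))  # -> 1 (plain txt2img)
```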

Latent Noise Injection module

This module is new, and it is a very powerful way to upscale the image (but only up to 2x; more than that would ruin the image) and to add a lot of detail to it. It can also alter the image somewhat, so use it carefully. It takes the image generated in the core module, sends it back to latent space, upscales it and adds noise to it in two passes, and then generates an enhanced image.

There are many settings; you will find some suggestions in the yellow notes node. Use them as a starting point, then play with them. No single set of values works for all images; it depends on the subject: a portrait needs completely different settings from a landscape or an anime image. The module can even apply a light sharpening to the image.
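Conceptually, the module does something like the following sketch. The scale and strength values are illustrative, and resample() is just a placeholder for a KSampler pass; none of these names come from the actual nodes.

```python
# Hedged sketch of latent noise injection: upscale the latent, mix in
# fresh noise over two passes, and re-sample after each injection.
import torch
import torch.nn.functional as F

def inject_noise(latent: torch.Tensor, strength: float) -> torch.Tensor:
    return latent + strength * torch.randn_like(latent)

def enhance(latent: torch.Tensor, scale: float = 1.5) -> torch.Tensor:
    # Stay at or below ~2x, as noted above.
    latent = F.interpolate(latent, scale_factor=scale, mode="bicubic")
    for strength in (0.35, 0.15):  # two passes, illustrative values
        latent = inject_noise(latent, strength)
        # latent = resample(latent, steps=10)  # placeholder KSampler pass
    return latent
```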

ADetailer module

This is the new version of the FaceDetailer from my v.3.3. It checks the image FLUX generated, recognizes the face and the eyes in it, and adds detail to both. It is no longer based on an SD 1.5 checkpoint and LoRA; it now works entirely on the FLUX model. Obviously this module only works on portrait images, where it can recognize a human face; otherwise you can turn it off.

Compare the various outputs

In these nodes you can compare (with a slider) the various images the workflow generates with one another. It's a very useful tool to see the differences each module applies to its output.

The Upscaler

The Ultimate SD Upscaler is completely redesigned. It automatically upscales only the output of the last module you turned on: if everything is off and you enable only the Upscaler, it upscales the core output; if you also turn on the ADetailer, the Upscaler works on the ADetailer output only. So choose carefully which modules you want in your upscaling workflow.

Here you can set the parameters for the upscale. You can play with the settings, but beware: they can make the upscale process extremely long. "Upscale by 2" means the image doubles its width and height, so a 1024×1024 image is upscaled to 2048×2048. You can change the Steps, the tile width/height (it's better to leave them close to the original image size) and the other settings if you want. There are many upscaler models to choose from: https://openmodeldb.info.
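A quick sanity check of the arithmetic (values are illustrative):

```python
# A 2x upscale doubles both sides; the tile size determines how many
# tiles the upscaler has to process.
import math

w, h, factor, tile = 1024, 1024, 2, 1024
W, H = w * factor, h * factor
tiles = math.ceil(W / tile) * math.ceil(H / tile)
print(f"{w}x{h} -> {W}x{H}, processed in {tiles} tiles of {tile}px")
# 1024x1024 -> 2048x2048, processed in 4 tiles of 1024px
```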

The FaceSwap module

This module uses ReActor nodes to perform a simple face swap. Just upload an image (a high-resolution portrait) with the source face you want to swap into the FLUX-generated image, and start the generation. You can then compare the original image with the result using a slider.

The Apply LUT module

You can use LUTs (Look-Up Tables) to give your image an analog-film look. Just download the LUTs you need (here are some good free ones). The LUT files must be saved in the following folder:

../ComfyUI/custom_nodes/ComfyUI_essentials/luts/
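If you are curious what "applying a LUT" actually does, here is a sketch outside ComfyUI using the colour-science package; the file names are hypothetical and this is not the code of the Apply LUT node.

```python
# Hedged sketch: applying a .cube LUT to an image with colour-science.
# pip install colour-science pillow numpy
import numpy as np
import colour
from PIL import Image

img = Image.open("flux_output.png").convert("RGB")  # hypothetical file
rgb = np.asarray(img).astype(np.float32) / 255.0    # to [0, 1] floats

lut = colour.read_LUT("my_film_look.cube")          # hypothetical LUT file
graded = np.clip(lut.apply(rgb), 0.0, 1.0)

Image.fromarray((graded * 255).astype(np.uint8)).save("flux_output_lut.png")
```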

Where to download the workflow from

I have my workflow available for download from two websites:

1. CIVITAI (you need to be logged in to Civitai to download it)

2. OpenArt.ai

Model, Lora and other files you will need

This workflow was designed around the original FLUX model released by the Black Forest Labs team. You will need the following files to use the workflow (a scripted download sketch follows the list):

1) UNet – Dev or Schnell version (each one is around 24 GB; Dev gives better results, Schnell is the "turbo" version). These files must be saved in the /models/unet/ folder.
Dev download Link: https://huggingface.co/black-forest-labs/FLUX.1-dev/resolve/main/flux1-dev.safetensors?download=true
Schnell download Link: https://huggingface.co/black-forest-labs/FLUX.1-schnell/resolve/main/flux1-schnell.safetensors?download=true

2) GGUF – if you want to use a FLUX GGUF model, you just have to choose its "weight"; Q8 and Q4 are the two I tested, and they give good results. These files must be saved in the /models/unet/ folder. You can find them here: https://huggingface.co/city96/FLUX.1-dev-gguf/tree/main

3) CLIP – you need these CLIP files (use fp16 for better results, fp8 if you have low VRAM/RAM). These files must be saved in the /models/clip/ folder.

t5xxl_fp16:
https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/t5xxl_fp16.safetensors?download=true

t5xxl_fp8:
https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/t5xxl_fp8_e4m3fn.safetensors?download=true

clip_l:
https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/clip_l.safetensors?download=true

4) VAE – and finally the VAE file (it must be saved in the /models/vae/ folder):
https://huggingface.co/black-forest-labs/FLUX.1-dev/resolve/main/ae.safetensors?download=true
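If you prefer scripting the downloads instead of clicking each link, here is a sketch using the huggingface_hub package. Adjust local_dir to your ComfyUI install; note that the FLUX.1-dev repository is gated, so you need a Hugging Face account with access granted and a saved login token (huggingface-cli login).

```python
# Hedged sketch: fetching the core files with huggingface_hub.
# pip install huggingface_hub
from huggingface_hub import hf_hub_download

hf_hub_download(repo_id="black-forest-labs/FLUX.1-dev",  # gated repo
                filename="flux1-dev.safetensors",
                local_dir="ComfyUI/models/unet")
hf_hub_download(repo_id="comfyanonymous/flux_text_encoders",
                filename="t5xxl_fp16.safetensors",
                local_dir="ComfyUI/models/clip")
hf_hub_download(repo_id="comfyanonymous/flux_text_encoders",
                filename="clip_l.safetensors",
                local_dir="ComfyUI/models/clip")
hf_hub_download(repo_id="black-forest-labs/FLUX.1-dev",
                filename="ae.safetensors",
                local_dir="ComfyUI/models/vae")
```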

The ADetailer uses some specific files to recognize the face and eyes in the image.
These are the files you need for the mask detection:

1) sam_vit_b_01ec64.pth (goes in folder /models/ultralytics/bbox/ )
https://huggingface.co/datasets/Gourieff/ReActor/blob/main/models/sams/sam_vit_b_01ec64.pth

2) face_yolov8n_v2.pt (goes in folder /models/ultralytics/bbox/ )
https://huggingface.co/Bingsu/adetailer/blob/main/face_yolov8n_v2.pt

3) eyeful_v2-paired.pt (goes in folder /models/ultralytics/bbox/ )
https://civitai.com/models/178518/eyeful-or-robust-eye-detection-for-adetailer-comfyui

Last but not least, for the Upscaler I use the 4x_NMKD-Siax_200k.pth model (it goes in the /models/upscale_models/ folder), which you can download here: https://huggingface.co/gemasai/4x_NMKD-Siax_200k/tree/main or https://civitai.com/models/147641/nmkd-siax-cx

But there are many other upscale models you can use.

I hope you will enjoy this workflow. Leave a message if you have any questions, requests or hints. Thanks!

Tenofas
