The FLUX.1 [dev] Model is licensed by Black Forest Labs, Inc. under the FLUX.1 [dev] Non-Commercial License. Copyright Black Forest Labs, Inc.
IN NO EVENT SHALL BLACK FOREST LABS, INC. BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH USE OF THIS MODEL.
Introduction
💪 This is my attempt at a flexible and extensible workflow framework, in variants for Flux and SDXL. Additional models also work after minor revisions*, with instructions provided in the workflow:
[Flux] Flux schnell and dev; Chroma [Schnell dedistilled]; OpenFlux [Schnell dedistilled]; Other dedistilled; and SVDQuants —Nunchaku
[SDXL] Illustrious, NoobAI, and Pony [SDXL finetunes]; SD 1.5; SD 3.5; HiDream; Hunyuan-DiT (with embedded clip); Kolors; OmniGen2 (native); and PixArt-Σ (sigma)
[* = models confirmed to run at least a default text-to-image or image-to-image workflow; not all functions, such as ControlNets, work for every model]
Many customizable pathways can be assembled into particular recipes 🥣 from the available components, without unnecessary obfuscation (e.g., noodle convolution, nodes stacked over others), and the results are arguably of similar quality to those of more complicated specialized workflows.
The workflow was developed and tested on the following system:
Operating system: Linux Mint 21.3 Cinnamon with 62 GB RAM
Processor: 11th Gen Intel® Core™ i9-11900 @ 2.50GHz × 8
Graphics card: NVIDIA GeForce RTX 3060 with 12 GB VRAM
Browser: Google Chrome
The following are sample local render times for the default settings, after model loading and with sage attention enabled (--use-sage-attention):
SDXL —light ⁘ text-to-image ⁘ 896x1152 = 19–28 sec.
Flux Dev —light ⁘ text-to-image ⁘ 896x1152 = 160–195 sec. (2½–3½ min.)
HiDream Fast (modded SDXL —mini workflow) ⁘ text-to-image ⁘ 896x1152 = 275–330 sec. (4½–5½ min.)
Wan video (incl. in Flux and SDXL workflows) ⁘ acceleration settings enabled (e.g., FusionX and self-forcing LoRAs) ⁘ I2V ⁘ 49 frames (3 sec.) ⁘ 480x624 ⁘ 4+2 steps = 300–450 sec. (5–7½ min.)
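For reference, the sage-attention timings above assume the flag is passed at launch. This sketch only prints an example launch line rather than running it, since the path to main.py depends on your install:

```shell
# Print (rather than run) an example ComfyUI launch line with sage attention.
# "python main.py" is an assumption about your install location and invocation.
COMFY_ARGS="--use-sage-attention"
echo "python main.py $COMFY_ARGS"
```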
Please report bugs 🪲 or errors 🚫, as well as successes 🤞 and requests/suggestions 📝. Post and share your SFW creations!!! I spent a lot of time working on this project (((for no 💰))), so I hope others make good use of it and find it helpful.
🌌 Flux v7
The 💪 Flexi-Workflow for Flux is highlighted by...
Flux Tools 🛠️:
IPAdapter
Regional control
Face swapping tools:
LanPaint for alternative inpainting
Captioners:
Wildcards (Easy-Use)
Background removers:
XY plots (Easy-Use)
Upscalers:
Basic upscaling model(s)
Tiled diffusion*
Media generators:
3D viewing (Anaglyph Tool)
Talking avatar: Float Animator or SONIC
[Kontext, OmniGen2, and similar context-aware models are likely the new go-to editing tools, but the full workflow still includes an array of tools that some may now consider obsolete; these allow fuller exploration of even older models, often with faster rendering times. Just delete any accessory groups you don't need and/or run the Lite 🪶 edition.] [* = tiled diffusion requires minor revisions to the workflow to connect the necessary included nodes]
Flux ⁘ Compatibility
[* = requires minor revisions to the workflow using custom nodes]
Flux ⁘ Scaled Down Editions
Scaled down editions retain the consistent feel of the full version, without all of the fluff (e.g., extra face swappers, upscalers, etc.).
Core 🦴: Provides core functionality. Works great as a template base to build onto in your own way or convert to .
Lite 🪶: Provides pared down functionality to reduce package requirements, but still includes Flux Tools (with Kontext!!!) and a couple of simpler upscaler options. Works great as a lighter standalone workflow for exploring ideas.
Mini 🦐: Includes bare minimum functionality with greatly reduced package requirements and no "dangling" (i.e., unconnected) nodes*. Works great to understand the basics of the workflow, especially if you are new to ComfyUI, or as a simpler template base to revise and build onto in your own way.
[* = ...except in the Rendering Engine group until a bug with the DualCFGGuider gets fixed]
Flux ⁘ What's New?
[v7.0] Removed requirements for the following custom nodes packages, after full removal or equivalent replacement of nodes: DepthFlow, essentials (no longer maintained), Fill-Nodes, Fluxtapoz, GGUF, Kokoro, and WanVideoWrapper.
[v7.0] Removed DepthFlow group due to it taking up a lot of space and installing a bunch of older Python libraries for basically outdated novelty use.
[v7.0] Removed RF Inversion group due to being somewhat complicated and superseded by newer tools (e.g., Flux Kontext).
[v7.0] Replaced GGUF nodes with equivalents from an alternative gguf package in the Loaders group.
[v7.0] Replaced Kokoro text-to-speech audio nodes with equivalents from OpenAI FM in the Talking Avatar group.
[v7.0] Added model compatibility instructions for native implementation of OmniGen2.
[v7.0] Added Flux Tool group for Kontext (local).
[v7.0] Added Normalized Attention Guidance (NAG) node, which allows for use of negative prompts, even for distilled models.
[v7.0] Added LanPaint group.
[v7.0] Reworked the video generation group (and renamed to "Media Generation"), including removal of DepthFlow and making the 3D viewing group a subgroup.
[v7.0] Reworked and simplified latent (switch) group and Flux tools Fill group.
[v7.0] Reworked and simplified the Wan video group and loaders (again), which now uses FusionX and self-forcing LoRAs by default. Rendering times are much faster!
[v7.0] Reworked the basic upscaler group so that the add noise option is applied after upscaling.
[v7.0] Reworked the background removal group, including replacing one of the existing tools (from the no longer maintained essentials package) with a bilateral reference framework (BiRefNet) option offering a choice of variants.
[v6.0] Removed requirements for the following custom nodes packages, after full removal or equivalent replacement of nodes: Bleh, Custom-Scripts, EasyControl, Florence2, InstantCharacter, memory_cleanup, multigpu, patches II, SideBySide_StereoScope, utils, Various nodes, and WAS Node Suite.
[v6.0] Removed stand-alone torch compile nodes in Loaders group.
[v6.0] Removed EasyControl and InstantCharacter groups; similar abilities exist through Gemini and the (upcoming?) Flux.1 Kontext [dev] model.
[v6.0] Replaced the save image node with the equivalent from comfyui_image_metadata_extension.
[v6.0] Replaced SideBySide_StereoScope nodes with equivalents from anaglyph.
[v6.0] Replaced multigpu loader nodes with their equivalents from GGUF (city96) — the multigpu gguf loader nodes kept throwing OOM errors when attempting to generate longer Wan videos, so I just replaced all of them.
[v6.0] Replaced memory cleanup nodes with their equivalents from easy-use.
[v6.0] Added model compatibility instructions: only Flux and SDXL workflows will be maintained going forward, since they can run other models after only minor revisions.
[v6.0] Added Float Animator as an additional talking avatar processor.
[v6.0] Added TaylorSeer node to the Loaders group (01a).
[v6.0] Added IPAdapter and Wildcards groups.
[v6.0] Reworked inpainting with crop-and-stitch, including improved helper instructions.
[v6.0] Reworked the Wan video group and loaders.
[v6.0] Cleaned up workflow significantly, particularly the Captioner and BG Removal groups.
[v5.4] Improved handling of masks in Redux.
[v5.4] Added additional resize node to Load image input (02a) to better facilitate tiled diffusion (upscaling) of larger images.
[v5.3] Added tiled diffusion node for an additional upscaling option.
[v5.3] Added SD3 model sampling node, which may improve rendering even in Flux.
[v5.3] Removed requirement for Crystools nodes.
[v5.2] Added latent combo option and cleaned up latent group.
[v5.1] Removed Golden Noise node.
[v5.0] Greatly simplified the Wan video group, now limited to image-to-video and first-last-frame-to-video options; a more robust separate Wan workflow is in development.
[v5.0] Added InstantCharacter 🚧, but the current implementation is experimental and exceeds my available VRAM to even run.
[v5.0] Reconfigured second sampling, which should offer more flexibility.
[v5.0] Replaced image output comparison node, which works better and offers more functions.
[v5.0] Redux now uses ReduxFineTune node, for more intuitive controls.
[v5.0] Stereoscope now offers rendering of videos.
[v4] Added Gemini AI, facial expression editor, and Thera upscaler.
[v4] Replaced Hunyuan video with Wan 2.1 video: employs (mostly) native nodes · text-to-video, image-to-video (default), and video-to-video (using ControlNet) options · ControlNet Fun and LoRA models implemented; VACE not yet available · simple upscaling and interpolation.
[v4] Replaced OmniGen with EasyControl 🚧, but the current implementation is experimental and exceeds my available VRAM to even run.
[v4] MultiGPU loaders now default, except for Wan, where they seemed to be a source of instabilities.
[v4] Overhauled ControlNets + groups: simplified Redux · restructured basic ControlNets to allow three different models concurrently · regional control that respects different LoRAs.
[v4] Cleaned up and improved workflow: more color-coding of nodes · better organization and sorting of bookmarks · added global seed node · added simple latent operations (between samplers) · fixed default masking bug · upgraded inpainting crop-and-stitch · added model switch, for easier implementation of specialized recipes.
Flux ⁘ Known Issues
[v7] If you run into errors in your boot log, restart ComfyUI with --disable-all-custom-nodes to first diagnose any problems with your core installation and system setup. Then restart ComfyUI normally and try to fix custom node packages through the Manager, if possible. Running pip check may help diagnose any remaining Python dependency conflicts; a small number may persist (e.g., mediapipe, numpy) but should not affect the workflow. Just work these down to a reasonably small number. Example from my setup, with only two unresolved:
pip check
aiortc 1.9.0 has requirement av<13.0.0,>=9.0.0, but you have av 14.4.0.
thinc 8.3.6 has requirement numpy<3.0.0,>=2.0.0, but you have numpy 1.26.4.
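The triage above can be condensed into a small script. This is just a sketch that counts conflict lines from pip check, assuming pip is on your PATH:

```shell
# Count unresolved dependency conflicts after a custom-node update.
# `pip check` prints one "X has requirement Y, but you have Z" line per
# conflict and exits non-zero when any exist; `|| true` keeps going anyway.
conflicts=$(pip check 2>/dev/null | grep -c 'has requirement' || true)
echo "Unresolved dependency conflicts: $conflicts"
```

A count of zero (or the one or two known stragglers) means your Python environment is in reasonable shape for the workflow.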
[v7] There is currently a bug affecting the DualCFGGuider and PerpNegGuider nodes, which may triple rendering times! The workflow was built with the DualCFGGuider node as a key component in the Rendering Engine group, but it was replaced just prior to release with the nearly equivalent CFGGuider node. This should have no tangible effect in the vast majority of use cases, except that setting a separate CFG for negative conditioning is currently disabled; but regular Flux doesn't use CFG anyway!
[v7] The text-to-speech audio nodes (PC-ding-dong) don't seem to work most of the time.
Flux ⁘ Quick Start
Install or update ComfyUI to the very latest version. Follow your favorite YouTube installation video, if needed.
Install ComfyUI Manager.
Download the following models or equivalents. Follow the Quickstart Guide to Flux.1, if needed.
FLUX.1-Turbo-Alpha LoRA —optional, but highly recommended
Open the Flexi-Workflow in ComfyUI. You may want to start with one of the reduced editions (e.g., mini 🦐), especially if you are new to ComfyUI.
Use the Manager to Install Missing Custom Nodes:
Fresh installation: It is recommended to install just a few custom node packages at a time until you get through all of them. You may need to set security_level = normal- (notice the dash/minus!) in the config.ini file to download some custom nodes.
Updating from a previous workflow version: It is good practice to first backup your Python virtual environment configuration, such as "conda env export > environment.yml". Custom node requirements are likely to have changed significantly, so disable all custom node packages, except for Manager itself. Then, re-enable or install missing custom nodes as required.
Tip to avoid downloading unneeded packages: Delete any unconnected nodes and/or accessory groups (e.g., face swappers, etc.) showing missing nodes if you know you won't need their functions.
Restart ComfyUI.
Load models (01a) and LoRAs (03c) according to your folder structure.
Run the default text-to-image recipe 🥣.
Enjoy your generated image creations! 😎
BONUS TIP: Drag-and-drop your rendered image back onto the ComfyUI canvas to make additional revisions. This ensures you always have a good default workflow as fallback. 🏅
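The environment backup advised above (before disabling or updating custom node packages) can be scripted. This sketch assumes either an active conda environment or a plain venv with pip on PATH:

```shell
# Back up the current Python environment before a big custom-node update.
if command -v conda >/dev/null 2>&1 && [ -n "$CONDA_DEFAULT_ENV" ]; then
    conda env export > environment.yml    # conda setups
else
    pip freeze > requirements.txt         # plain venv setups
fi
```

Either file lets you rebuild the environment if a custom-node update breaks dependencies.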
Flux ⁘ Additional Recommended Installations
For the intended component functionalities, install Flux Tools: Fill, Canny & Depth, Redux, and/or KONTEXT. The Redux model also requires sigclip_vision_384.
Other ControlNets, such as X-Lab's Canny and Depth, Shakker-Labs's Union Pro, TheMistoAI's Lineart/Sketch, and/or jasperai's Upscaler and other models, are also (theoretically) supported, although results may vary. (The bdsqlsz and kohya models appear to require image dimensions be divisible by 32, which is not guaranteed in the workflow.) Feel free to browse for others.
While you should be prompted to install the necessary custom nodes (~45) via the ComfyUI Manager, I'm listing them here for your reference: A8R8 ComfyUI Nodes (ramyma), AdvancedLivePortrait (PowerHouseMan), anaglyphTool (Cryptyox), comfy-plasma (Jordach), ComfyMath (evanspearman)*, Comfyroll Studio (Suzie1)*, ControlAltAI_Nodes (ControlAltAI)*, controlnet_aux (Fannovel16)*, Detail-Daemon (Jonseed)*, Easy-Use (yolain)*, Float_Animator (KERRY-YUAN), Node (Light_x02)*, gguf (gguf - calcuis)*, GIMM-VFI (kijai), IF_Gemini (impactframes)*, Image-Filters (spacepxl), comfyui_image_metadata_extension (edelvarden)*, Impact Pack (Dr.Lt.Data)*, Inpaint-CropAndStitch (lquesada)*, InvSR (yuvraj108c), IPAdapter_plus (Matteo), iTools (Makadi)*, KJNodes (Kijai)*, LanPaint (scraed), LayerStyle (chflame163)*, LayerStyle_Advance (chflame163), LG_Relight (laogou666), Manager (Dr.Lt.Data)*, OpenAI-FM (fairy-root), PC-ding-dong (lgldl), PuLID_Flux_ll (lldacing), ReduxFineTune (AILab)*, rgthree-comfy (rgthree)*, RMBG (AILab), sd-perturbed-attention (Pamparamm), Sonic (smthemex), SUPIR (Kijai), TaylorSeer (philipy1219), Thera (yuvraj108c)*, TiledDiffusion (shiimizu)*, UltimateSDUpscale (ssitu), VideoHelperSuite (Kosinkadink), VideoUpscale_WithModel (ShmuelRonen), wanBlockswap (orssorbit), WD14-Tagger (pythongosssss), and ZenID (Vuong Minh). Additional recommended add-ons include: Crystools (Crystian), Custom-Scripts (pythongosssss), KikoStats (ComfyAssets - kiko9), LoRA manager (willmiao), N-Sidebar (Nuked88), PNG Info Sidebar (KLL535), and Scheduled Task (dseditor). [* = lite edition] [🚧 = experimental]
Recommended upscalers/refiners include 1xSkinContrast-SuperUltraCompact, 4xPurePhoto-RealPLSKR, Swin2SR (x2 and x4), and UltraSharpV2, or browse the OpenModelDB.
Accessory models (e.g., Florence 2) should download automatically when first run, so just be aware of possible delays and check the terminal window to monitor progress.
Flux ⁘ Navigation
The workflow is structured for flexibility. With just a few adjustments, it can flip from text-to-image to image-to-image to inpainting or application of Flux Tools 🛠️. Additional unconnected nodes have been included to provide options and ideas for even more adjustments, such as linking in nodes for increasing details. (The workflow does not employ Anything Everywhere, so if a node connection looks empty, it really is empty.)
In the Switchboard, flip the yes|no 🔵 toggles to activate or deactivate groups, and use the jump arrows ➡️ to quickly move to particular groups for checking and adjusting their settings/switches.
🛑 DO NOT RUN THE WORKFLOW WITH ALL SWITCHES FLIPPED TO "YES"! 🛑
There are also bookmarks 🔖 to help you navigate quickly.
In the rgthree settings, it is also recommended to show fast toggles in group headers for muting.
In the Lite Graph section of the settings, enable the fast-zoom shortcut and set the zoom speed to around 1.5–1.75. The workflow was built with a snap to grid size of 20.
Most of the workflow is unpinned 📌, so grab any empty space with your mouse (while pressing the control key) to navigate around. You are welcome to pin 📌 anything to prevent accidentally moving groups or nodes.
Flux ⁘ Recipes
This is the default text-to-image recipe 🥣 and should be run first to make sure you have the basics configured correctly.
💪 ⁘ Toggle to "yes" 01a; 02b; 03 all; 04; and 05
03a ⁘ Latent switch = 1 (empty)
03b ⁘ Conditioning switch = 1 (no ControlNets +)
03e ⁘ Denoise = 1; Guidance = 1.2–5; Steps 20–30, or 8–12 w/ Turbo LoRA
05 ⁘ Image switch = 2 (save generated image)
Once you have the workflow running, it is recommended to drag-and-drop rendered images back onto the ComfyUI canvas to make additional workflow adjustments. This ensures you always have a good default workflow as fallback.
Reference the Start Here group to find additional workflow recipes 🥣.
🌄 SDXL v7
The 💪 Flexi-Workflow for SDXL is based on the Flux variant and includes almost all of the same components and functionality. Scaled down core 🦴, lite 🪶, and mini 🦐 editions are also available in the package.
SDXL ⁘ Compatibility
SDXL variants, such as CyberRealistic XL
The following also work after minor revisions*, with instructions provided in the workflow:
SD 1.5
Hunyuan-DiT (with embedded clip)
Kolors **
OmniGen2 (native)
[* = models are confirmed to run at least a default text-to-image or image-to-image workflow; not all functions, such as ControlNets, work for every model] [** = Kolors takes a few more revisions to run than the others]
SDXL ⁘ Quick Start
Install or update ComfyUI to the very latest version. Follow your favorite YouTube installation video, if needed.
Install ComfyUI Manager.
Download the following model(s) or equivalent(s). Follow the SDXL 1.0 Overview (possibly slightly outdated), if needed.
SDXL VAE —optional
Open the Flexi-Workflow in ComfyUI. You may want to start with one of the reduced editions (e.g., mini 🦐), especially if you are new to ComfyUI.
Use the Manager to Install Missing Custom Nodes. It is recommended to install just a few custom node packages at a time until you get through all of them. You may need to set security_level = normal- (notice the dash/minus!) in the config.ini file to download some custom nodes.
Restart ComfyUI.
Load models (01a) and LoRAs (03c) according to your folder structure.
Run the default text-to-image recipe 🥣.
Enjoy your generated image creations! 😎
BONUS TIP: Drag-and-drop your rendered image back onto the ComfyUI canvas to make additional revisions. This ensures you always have a good default workflow as fallback. 🏅
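The security_level tweak mentioned in the quick start looks like this inside the Manager's config.ini (a sketch; the section name follows current ComfyUI-Manager defaults, so verify against your own file):

```ini
[default]
security_level = normal-
```

Remember to restore a stricter level once the custom node installations are done.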
🛼 Wan v1
A 💪 FlexiVid-Workflow for Wan is currently in development.