
💪 Flexi-Workflow ⁘ Flux · SDXL [ Illustrious · Pony ] · et al.

Type: Workflows
Published: May 12, 2025
Base Model: Flux.1 D
The FLUX.1 [dev] Model is licensed by Black Forest Labs, Inc. under the FLUX.1 [dev] Non-Commercial License. Copyright Black Forest Labs, Inc.
IN NO EVENT SHALL BLACK FOREST LABS, INC. BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH USE OF THIS MODEL.

Introduction

💪 This is my attempt at a flexible and extensible workflow framework, in variants for Flux and SDXL. Additional models also work after minor revisions, with instructions provided in the workflow.

Many customizable pathways can be assembled into particular recipes 🥣 from the available components, without unnecessary obfuscation (e.g., noodle convolution or nodes stacked on top of one another), and the results are arguably of similar quality to those from more complicated, specialized workflows.

The workflow was developed and tested on the following system:

  • Operating system: Linux Mint 21.3 Cinnamon with 62 GB RAM

  • Processor: 11th Gen Intel® Core™ i9-11900 @ 2.50GHz × 8

  • Graphics card: NVIDIA GeForce RTX 3060 with 12 GB VRAM

  • Browser: Google Chrome

The following are some sample local render times at the default settings, after model loading and with sage attention enabled (--use-sage-attention; see the launch example after this list):

  • SDXL —light ⁘ text-to-image ⁘ 896x1152 = 19–23 sec.

  • Flux Dev —light ⁘ text-to-image ⁘ 896x1152 = 56–64 sec.

  • HiDream Fast (modded SDXL —mini workflow) ⁘ text-to-image ⁘ 896x1152 = 275–330 sec. (4½–5½ min.)

  • Wan video (incl. in Flux and SDXL workflows) ⁘ acceleration settings enabled (e.g., CausVid LoRA) ⁘ I2V ⁘ 49 frames (3 sec.) ⁘ 480x624 = 690–700 sec. (~12 min.)
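
For reference, sage attention is a ComfyUI launch option rather than a workflow setting. A minimal sketch of the launch command, assuming a standard ComfyUI checkout with the sageattention package installed in the active environment:

    python main.py --use-sage-attention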

Please report bugs 🪲 or errors 🚫, as well as successes 🤞 and requests/suggestions 📝. Post and share your SFW creations!!! I spent a lot of time working on this project (((for no 💰))), so I hope others make good use of it and find it helpful.

🌌 Flux [v6]

The 💪 Flexi-Workflow for Flux is highlighted by the Flux Tools 🛠️: Fill (optionally with ACE++), Canny & Depth, and Redux. Basic ControlNets are also available, along with IPAdapter, regional control, two face-swap tools (PuLID and ZenID), detailing, relighting, a facial expression editor, and RF inversion. Additional accessories include four captioners (Florence2, JoyCaption2, WD14 Tagger, and SuperPrompt), wildcards, three background removers (RemBG, RMBG, and Florence2 + SAM2), XY plots, five upscalers (basic, Thera, InvSR, Ultimate SD, and SUPIR, with tiled diffusion nodes also available), three video generators (DepthFlow, a talking avatar using Float Animator or SONIC, and Wan), and 3D viewing (Anaglyph Tool). A simple interface to Gemini AI is included. Core 🦴, lite 🪶, and mini 🦐 editions are also available in the package.

Flux ⁘ Compatibility

[* = requires minor revisions to the workflow using custom nodes]

Flux ⁘ What's New?

  • [v6.0] Removed requirements for the following custom node packages, after fully removing their nodes or replacing them with equivalents: Bleh, Custom-Scripts, EasyControl, Florence2, InstantCharacter, memory_cleanup, multigpu, patches II, SideBySide_StereoScope, utils, Various nodes, and WAS Node Suite.

  • [v6.0] Removed stand-alone torch compile nodes in Loaders group.

  • [v6.0] Removed EasyControl and InstantCharacter groups; similar abilities exist through Gemini and the (upcoming?) Flux.1 Kontext [dev] model.

  • [v6.0] Replaced the save image node with the equivalent from comfyui_image_metadata_extension.

  • [v6.0] Replaced SideBySide_StereoScope nodes with equivalents from anaglyph.

  • [v6.0] Replaced the MultiGPU loader nodes with their equivalents from GGUF (city96); the MultiGPU GGUF loaders kept throwing OOM errors when attempting to generate longer Wan videos, so I replaced all of them.

  • [v6.0] Replaced memory cleanup nodes with their equivalents from easy-use.

  • [v6.0] Added model compatibility instructions: only the Flux and SDXL workflows will be maintained going forward, since they can run other models after only minor revisions.

  • [v6.0] Added Float Animator as an additional talking avatar processor.

  • [v6.0] Added TaylorSeer node to the Loaders group (01a).

  • [v6.0] Added IPAdapter and Wildcards groups.

  • [v6.0] Reworked inpainting with crop-and-stitch, including improved helper instructions.

  • [v6.0] Reworked the Wan video group and loaders.

  • [v6.0] Cleaned up the workflow significantly, particularly the Captioner and BG Removal groups.

  • [v5.4] Improved handling of masks in Redux.

  • [v5.4] Added an additional resize node to the Load image input (02a) to better facilitate tiled diffusion (upscaling) of larger images.

  • [v5.3] Added a tiled diffusion node for an additional upscaling option.

  • [v5.3] Added an SD3 model sampling node, which may improve rendering even in Flux.

  • [v5.3] Removed the requirement for Crystools nodes.

  • [v5.2] Added a latent combo option and cleaned up the latent group.

  • [v5.1] Removed the Golden Noise node.

  • [v5] Greatly simplified the Wan video group, now limited to image-to-video and first-last-frame-to-video options; a more robust separate Wan workflow is in development.

  • [v5] Added InstantCharacter 🚧, but the current implementation is experimental and exceeds my available VRAM to even run.

  • [v5] Reconfigured second sampling, which should offer more flexibility.

  • [v5] Replaced the image output comparison node with one that works better and offers more functions.

  • [v5] Redux now uses the ReduxFineTune node, for more intuitive controls.

  • [v5] Stereoscope now offers rendering of videos.

  • [v4] Added Gemini AI, a facial expression editor, and the Thera upscaler.

  • [v4] Replaced Hunyuan video with Wan 2.1 video: employs (mostly) native nodes · text-to-video, image-to-video (default), and video-to-video (using ControlNet) options · ControlNet Fun and LoRA models implemented; VACE not yet available · simple upscaling and interpolation.

  • [v4] Replaced OmniGen with EasyControl 🚧, but the current implementation is experimental and exceeds my available VRAM to even run.

  • [v4] MultiGPU loaders are now the default, except for Wan, where they seemed to be a source of instabilities.

  • [v4] Overhauled the ControlNets + groups: simplified Redux · restructured basic ControlNets to allow three different models concurrently · regional control that respects different LoRAs.

  • [v4] Cleaned up and improved the workflow: more color-coding of nodes · better organization and sorting of bookmarks · added a global seed node · added simple latent operations (between samplers) · fixed a default masking bug · upgraded inpainting crop-and-stitch · added a model switch, for easier implementation of specialized recipes.

Flux ⁘ Known Issues

  • [v6] The text-to-speech audio nodes (PC-ding-dong) don't seem to work most of the time.

Flux ⁘ Quick Start

  1. Install or update ComfyUI to the very latest version. Follow your favorite YouTube installation video, if needed.

  2. Install ComfyUI Manager.

  3. Download the required models (or equivalents). Follow the Quickstart Guide to Flux.1, if needed.

  4. Open the Flexi-Workflow in ComfyUI. You may want to start with one of the reduced editions (e.g., mini 🦐), especially if you are new to ComfyUI.

  5. Use the Manager to Install Missing Custom Nodes. It is recommended to install just a few custom node packages at a time until you get through all of them. You may need to set security_level = normal- (note the trailing dash/minus!) in the config.ini file to download some custom nodes; see the example after this list.

  6. Restart ComfyUI.

  7. Load models (01a) and LoRAs (03c) according to your folder structure.

  8. Run the default text-to-image recipe 🥣.

  9. Enjoy your generated image creations! 😎
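
Regarding step 5, the security setting lives in ComfyUI Manager's config.ini (typically inside the ComfyUI-Manager folder; the exact path and section header may vary by Manager version, so treat this as a sketch):

    [default]
    security_level = normal-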

Flux ⁘ Navigation

The workflow is structured for flexibility. With just a few adjustments, it can flip from text-to-image to image-to-image to inpainting or application of Flux Tools 🛠️. Additional unlinked nodes have been included to provide options and ideas for even more adjustments, such as linking in nodes for increasing details. (The workflow does not employ Anything Everywhere, so if a node connection looks empty, it really is empty.)

In the Switchboard, flip the yes|no 🔵 toggles to activate or deactivate groups, and use the jump arrows ➡️ to move quickly to particular groups to check and adjust their settings/switches.

🛑 DO NOT RUN THE WORKFLOW WITH ALL SWITCHES FLIPPED TO "YES"! 🛑

There are also bookmarks 🔖 to help you navigate quickly.

In the rgthree settings, it is also recommended to show fast toggles in group headers for muting.

In the Lite Graph section of the settings, enable the fast-zoom shortcut and set the zoom speed to around 1.5–1.75. The workflow was built with a snap to grid size of 20.

Most of the workflow is unpinned 📌, so grab any empty space with your mouse (while pressing the control key) to navigate around. You are welcome to pin 📌 anything to prevent accidentally moving groups or nodes.

Flux ⁘ Recipes

This is the default text-to-image recipe 🥣 and should be run first to make sure you have the basics configured correctly.

💪 ⁘ Toggle to "yes": 01a; 02b; 03 (all); 04; and 05
03a ⁘ Latent switch = 1 (empty)
03b ⁘ Conditioning switch = 1 (no ControlNets +)
03e ⁘ Denoise = 1; Guidance = 1.2–5; Steps 20–30, or 8–12 w/ Turbo LoRA
05 ⁘ Image switch = 2 (save generated image)

Once you have the workflow running, it is recommended to drag-and-drop rendered images back into ComfyUI to make additional workflow adjustments. This ensures you always have a good default workflow as fallback.
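
The drag-and-drop trick works because ComfyUI's save image nodes embed the full graph as JSON in the PNG metadata. A minimal sketch for inspecting that metadata with Python, assuming Pillow is installed (the filename is only an example):

    from PIL import Image

    img = Image.open("ComfyUI_00001_.png")   # any image saved by ComfyUI
    workflow = img.info.get("workflow")      # JSON graph embedded at save time
    print(workflow[:200] if workflow else "no embedded workflow found")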

Reference the Start Here group to find additional workflow recipes 🥣.

🌄 SDXL [v6]

The 💪 Flexi-Workflow for SDXL is based on the Flux variant and includes almost all of the same components and functionality. Core 🦴, lite 🪶, and mini 🦐 editions are also available in the package.

SDXL ⁘ Compatibility

The following also work* after minor revisions**, with instructions provided in the workflow:

  • SD 1.5

  • Kolors

[* = confirmed to run at least the default workflow of the mini 🦐 edition]
[** = getting Kolors to run takes a few more revisions than the others]

SDXL ⁘ Quick Start

  1. Install or update ComfyUI to the very latest version. Follow your favorite YouTube installation video, if needed.

  2. Install ComfyUI Manager.

  3. Download the required model(s) or equivalent(s). Follow the SDXL 1.0 Overview (possibly slightly outdated), if needed.

  4. Open the Flexi-Workflow in ComfyUI. You may want to start with one of the reduced editions (e.g., mini 🦐), especially if you are new to ComfyUI.

  5. Use the Manager to Install Missing Custom Nodes. It is recommended to install just a few custom node packages at a time until you get through all of them. You may need to set security_level = normal- (note the trailing dash/minus!) in the config.ini file to download some custom nodes; see the config.ini example in the Flux Quick Start.

  6. Restart ComfyUI.

  7. Load models (01a) and LoRAs (03c) according to your folder structure.

  8. Run the default text-to-image recipe 🥣.

  9. Enjoy your generated image creations! 😎


🛼 Wan [v5]

A 💪 FlexiVid-Workflow for Wan is currently in development.