⚡ Flux LoRA Model Cascade Package
See my GitHub for an alternative description. The package is also on Hugging Face; feel free to visit me there.
I developed this node package out of the desire for a logical intermediate step between model and LoRA. The usual and very useful LoRA loaders (stackers etc.) in ComfyUI are well known, but the idea here was to design a kind of “mini-model”.
This is how it works:
We put together a group of LoRAs that we want to use in a generation, but not individually and not as a merge. How then? Cascade is the magic word! The node selects the base LoRA from a directory; all associated LoRAs are located in that same directory.
Example:
Path: models/loras/flux/cascadeLoRAs/
cas.safetensors
cas2.safetensors
cas3.safetensors
...more possible...
As you can see in the image, up to three cascades are selectable.
We therefore only ever load the base LoRA, in the example: cas.safetensors.
The rest is done automatically by the node.
Now we can set the strength of our selected cascade, depending on how strong the LoRAs already are (this varies with the training).
It makes sense to use LoRAs that have been trained with trigger words; this can make prompt design easier in some applications.
📁 LoRA Folder Structure Example
/loras/
├── comic_filter.safetensors
├── comic_filter2.safetensors
├── comic_filter3.safetensors
└── glassy_touch.safetensors
If comic_filter is selected, all its numbered parts will be applied in order.
It is important to select your chosen LoRAs in your folder and rename them as in the structure example above. For LoRA selects 2 and 3, do the same in the same root folder, or organize a new root folder, whichever you prefer.
The node also accepts single LoRAs in a cascade. You see: all in one.
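The naming-based cascade lookup can be imagined roughly like this. This is a minimal sketch under my own assumptions, not the package's actual code; the function name, the three-part limit, and the stop-at-first-gap rule are illustrative:

```python
import os

def resolve_cascade(lora_dir: str, base_name: str, max_parts: int = 3):
    """Collect a base LoRA and its numbered siblings, e.g.
    comic_filter.safetensors -> comic_filter2.safetensors -> comic_filter3.safetensors.
    Returns the file names in loading order."""
    stem, ext = os.path.splitext(base_name)
    chain = [base_name]  # the base LoRA is always loaded first
    for i in range(2, max_parts + 1):
        part = f"{stem}{i}{ext}"
        if os.path.isfile(os.path.join(lora_dir, part)):
            chain.append(part)
        else:
            break  # stop at the first gap so the application order stays consistent
    return chain
```

With the folder from the structure example, selecting comic_filter.safetensors would yield all three comic_filter files, while glassy_touch.safetensors (no numbered parts) would be applied on its own.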
✅ Perfect for:
Dynamic workflows without model merging
Testing new LoRA fragments quickly
Creating custom model themes using small LoRA modules
Keeping your generation stack clean and modular
There are other nodes in the package which I have developed. These can be used to monitor the guidance and the prompt. They also create log files, which can be very important for later checking and evaluation of professional workflows, and for archiving, so that the context of past generations can be traced.
📁 Logging Format
All logs are saved line by line in JSON format (one JSON object per line) under:
/Flux_logs/flux_log.json
Each line contains:
{
  "timestamp": "2025-05-01 14:45:00",
  "uuid": "14ae3ef6-2bd2-4c5d-b61c-0a2bd7ecfefd",
  "model": "<class 'ModelType'>",
  "clip": "<class 'ClipType'>",
  "guidance": 7.5,
  "prompts": [
    {
      "text": "a cyberpunk city at night",
      "weight": 1.2
    }
  ]
}
⚠️ If writing fails, an error will be printed to the terminal:
❌ Error writing to /Flux_logs/flux_log.json: [Errno 13] Permission denied
✅ On success:
✅ Logfile written to: /Flux_logs/flux_log.json
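A line-by-line JSON log of this kind can be written and read back roughly as follows. This is a minimal sketch, not the package's implementation; the helper names are my own, the field names follow the example entry above, and the success/error messages mirror the terminal output shown:

```python
import json
import os

def write_log_entry(entry: dict, path: str):
    """Append one JSON object per line (JSON Lines style)."""
    try:
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")
        print(f"✅ Logfile written to: {path}")
    except OSError as e:
        print(f"❌ Error writing to {path}: {e}")

def read_log(path: str):
    """Parse the log back into dicts, e.g. to trace and evaluate past generations."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]
```

Because each line is a complete JSON object, the file can be appended to run after run and still be parsed entry by entry for later evaluation.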
For this purpose, the reactread node is included in the package to view the LoRAs and status data live, per run.
The load data, the log file confirmation and the integrity check are also output live in the terminal window.
DataSource
The attachments contain the node package, the sample workflow and the readme files. The installation is simple and standard.
The rest should be self-explanatory for experienced Comfy users. If not, I will be happy to answer any questions.
Note: It is not permitted to use this node package for commercial purposes, to sell it, or to derive monetary benefit from it in any other way. These nodes are MIT-licensed and already registered on GitHub. They are for the benefit and success of the ComfyUI open-source community.
Happy generating!