Type | Workflows |
Stats | 735 |
Reviews | (18) |
Published | Sep 8, 2024 |
Base Model | |
Hash | AutoV2 4CF1318341 |
If your VRAM is insufficient for Flux, you need to run a quantized version. This is a really simple workflow with LoRA loading and upscaling. Keep in mind that the quantized versions need slightly higher LoRA strength values than the full-precision ones.
This workflow is based on the GGUF model loader in ComfyUI:
https://github.com/city96/ComfyUI-GGUF.
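If you prefer a manual install over ComfyUI-Manager, a rough sketch of the terminal steps (paths assume a default ComfyUI layout; the rgthree repo URL is included as an assumption since the page does not name it, and the Civitai-friendly saver pack is not named here, so it is omitted):

```shell
# Manual install sketch: clone the custom node packs into ComfyUI's
# custom_nodes folder (paths assume a default ComfyUI install).
cd ComfyUI/custom_nodes

# GGUF model loader (required for the quantized Flux checkpoints)
git clone https://github.com/city96/ComfyUI-GGUF

# rgthree's nodes (assumed repo for the LoRA stacker used in the
# upgraded "Simple" version)
git clone https://github.com/rgthree/rgthree-comfy

# The GGUF loader needs the gguf Python package in ComfyUI's environment
pip install --upgrade gguf
```

Restart ComfyUI after installing so the new nodes are picked up.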
Update:
Added an upgraded "Simple" version. It requires 2 custom nodes to be installed. What is different in it:
Added multi-LoRA support with the rgthree LoRA stacker. This is the best option for low-end video cards I've been able to find.
Added a Civitai-friendly file saver with the required supporting nodes.
Organised everything into groups a little.
It is still really easy to use, and it is now a good starting point for more complex workflows, as the generation info will be saved for Civitai even if you do more complex operations.