Super Simple GGUF (Quantized) Flux LoRA Workflow

Updated: Sep 8, 2024
Type: Workflows
Published: Sep 8, 2024
Base Model: Flux.1 D
Hash: AutoV2 4CF1318341
The FLUX.1 [dev] Model is licensed by Black Forest Labs Inc. under the FLUX.1 [dev] Non-Commercial License. Copyright Black Forest Labs Inc.
IN NO EVENT SHALL BLACK FOREST LABS, INC. BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH USE OF THIS MODEL.

If your VRAM is insufficient to run Flux, you need a quantized version. This is a really simple workflow with LoRA loading and upscaling. Keep in mind that quantized versions need slightly higher LoRA strength values than the full-precision ones.

This workflow is based on the GGUF model loader in ComfyUI:
https://github.com/city96/ComfyUI-GGUF.
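To get the workflow running, the GGUF loader has to be installed as a ComfyUI custom node. A minimal sketch of the usual procedure, assuming a default ComfyUI directory layout (paths may differ on your install, e.g. with a portable or Manager-based setup):

```shell
# Clone the GGUF loader into ComfyUI's custom_nodes folder
# (standard custom-node install; adjust the path to your ComfyUI checkout).
cd ComfyUI/custom_nodes
git clone https://github.com/city96/ComfyUI-GGUF

# Install the node's Python dependencies into the same environment ComfyUI uses.
pip install -r ComfyUI-GGUF/requirements.txt

# Place the quantized .gguf model file in ComfyUI/models/unet,
# then restart ComfyUI so the new loader nodes appear.
```

If you use ComfyUI Manager, searching for "ComfyUI-GGUF" and installing it from there accomplishes the same thing.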

Update:

Added an upgraded "Simple" version. It requires 2 custom nodes to be installed. What is different in it:

  1. Added multi-LoRA support with the rgthree LoRA stacker. This is the best pick for low-end video cards I've been able to find.

  2. Added a Civitai-friendly file saver with the required supporting nodes.

  3. Organized everything into groups a little bit.

It is still really easy to use, and it is now a good starting point for more complex workflows, as the generation info will be saved for Civitai even if you do more complex operations.