
ComfyUI beginner friendly Flux.2 Klein 9B GGUF Text-to-Image Workflow with Easy Prompt Saver by Sarcastic TOFU

Type

Workflows

Stats

89

0

Reviews

Published

Mar 3, 2026

Base Model

Flux.2 Klein 9B

Hash

AutoV2
FCD4C6CD30

The FLUX.1 [dev] Model is licensed by Black Forest Labs, Inc. under the FLUX.1 [dev] Non-Commercial License. Copyright Black Forest Labs, Inc.

IN NO EVENT SHALL BLACK FOREST LABS, INC. BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH USE OF THIS MODEL.

This is a very simple workflow that saves your Flux.2 Klein 9B GGUF text-to-image generation data into a human-readable .txt file. It automatically collects your metadata and writes it to the .txt file. This workflow also uses the Flux.2 Klein Enhancer for quality outputs. You will find all of the saved prompt files it generated, along with the images, inside the archive (.zip) that contains the workflow. Also, with the Image Saver Simple node used here, you can either embed the workflow itself in each saved image, or save the image and workflow separately.
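The "save metadata to a readable .txt" idea is simple to picture. The sketch below is not the actual node's code, and every field name in it is hypothetical; it just illustrates writing one "Key: value" line per metadata field:

```python
from pathlib import Path

def save_prompt_txt(metadata: dict, out_path: str) -> None:
    """Write generation metadata as a human-readable .txt file,
    one 'Key: value' line per field (illustrative sketch only)."""
    lines = [f"{key}: {value}" for key, value in metadata.items()]
    Path(out_path).write_text("\n".join(lines), encoding="utf-8")

# Hypothetical metadata for one generation (not taken from the real node):
meta = {
    "Prompt": "a lighthouse at dawn, volumetric fog",
    "Model": "flux-2-klein-9b-Q4_K_M.gguf",
    "Steps": 4,
    "CFG": 1.0,
    "Seed": 123456789,
    "Size": "1024x1024",
}
save_prompt_txt(meta, "generation_0001.txt")
```

The real workflow does this automatically per image; the point is only that the output stays plain text, so you can open the saved prompts in any editor.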

The Flux.2 Klein 4B and 9B models are a new family of high-speed AI image generators that use a "rectified flow" architecture to unify image generation and professional-grade editing in a single, compact package. These models are significantly faster than older versions because they use "step distillation," which lets them create high-quality images in just 4 steps (achieving sub-second speeds on modern hardware) rather than the dozens of steps required by previous models. The 4B variant is released under the permissive Apache 2.0 License for both personal and commercial use, while the more powerful 9B variant uses a Non-Commercial License intended for research and personal projects. Both models support 11 native aspect ratios, ranging from 1:1 square to 21:9 ultrawide and 9:21 vertical, and can produce sharp images up to 4 megapixels (such as 2048x2048).

To make them even more accessible, there are quantized models like the FP8 (8-bit) and NVFP4 (4-bit) versions, which shrink the model's weights to save memory: the FP8 version is about 1.6x faster and uses 40% less VRAM, while the NVFP4 version is up to 2.7x faster and uses 55% less VRAM.

While you can run the Flux.2 Klein 4B GGUF model on systems with as little as 2GB VRAM, to run the Flux.2 Klein 9B GGUF model smoothly you need at least 6GB VRAM (i.e. Q4 GGUF; the very low-end Q2 and Q3 GGUF model files can't produce quality results, so if you have a very low-end system with 2GB or 4GB VRAM, you should use the Klein 4B GGUF models). In this workflow I used the Q4_K_M file, which works well with 6 or 8GB VRAM and provides acceptable-quality results.
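To put the quoted VRAM-savings percentages in perspective, here is some back-of-the-envelope arithmetic. The 18 GB baseline is only an estimate (9B parameters at 2 bytes each for 16-bit weights, ignoring the text encoder, VAE, and activations), not an official figure:

```python
# Rough VRAM estimate for the 9B model's weights alone.
# Assumption: 2 bytes per parameter at 16-bit precision.
params_billion = 9
base_gb = params_billion * 2              # ~18 GB at 16-bit

fp8_gb = base_gb * (1 - 0.40)             # "40% less VRAM" quoted above
nvfp4_gb = base_gb * (1 - 0.55)           # "55% less VRAM" quoted above

print(f"16-bit: ~{base_gb:.1f} GB")
print(f"FP8   : ~{fp8_gb:.1f} GB")
print(f"NVFP4 : ~{nvfp4_gb:.1f} GB")
```

Even the 4-bit estimate sits well above a 6GB card for the full 9B weights, which is why the GGUF quants (which compress further and can offload layers) are the practical route on low-VRAM systems.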

You can download all of the model files used in this workflow from HuggingFace (details are below). Make sure you have a recent enough ComfyUI installation, install any nodes this workflow needs using ComfyUI Manager, and place the files in the correct folders. Also check out my other workflows for SD 1.5 + SDXL 1.0, Pony, WAN 2.1, WAN 2.2, MagicWAN Image v2, QWEN, HunyuanImage-2.1, HiDream, KREA, Chroma, AuraFlow, NoobAI, Illustrious, Lumina2, Z-Image Turbo, Flux.2 Klein 4B, Flux.1 Dev and Kandinsky Image 5 Lite (T2I & I2I) models. Feel free to toss some yellow Buzz on the stuff you like.

How to use this -

#1. Select your Flux.2 Klein 9B GGUF model files first,

#2. set your desired image dimensions,

#3. then enter your image prompt.

#4. Select how many images you want (change the number beside the "Run" button),

#5. select the sampling method, CFG, steps and other settings,

#6. and finally press the Run button to generate. That's it.
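For anyone who wants to script the steps above instead of clicking, ComfyUI exposes an HTTP API: a workflow exported via "Save (API Format)" can be POSTed to the `/prompt` endpoint. A minimal sketch, assuming a local server on the default port 8188 (the tiny workflow dict here is a placeholder, not this workflow's actual graph):

```python
import json
import urllib.request

def queue_prompt(workflow: dict, server: str = "http://127.0.0.1:8188") -> bytes:
    """Queue one generation by POSTing an API-format workflow
    to ComfyUI's /prompt endpoint."""
    data = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(
        f"{server}/prompt", data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# The payload wraps the exported graph under a "prompt" key.
# Placeholder node graph shown here; use your own exported JSON.
payload = json.dumps({"prompt": {"3": {"class_type": "KSampler", "inputs": {}}}})
print(payload)  # built but not sent -- sending requires a running ComfyUI server
```

This is only a sketch of the queuing call; batch count, seeds, and sampler settings all live inside the exported workflow JSON itself.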

** LORA usage in this workflow is optional: you can run it without any LORAs, or with 1, 2, or any other number of LORAs. To add new LORAs, press the L button at the top to launch LORA Manager in a new tab, find your LORA, and if you want to use it, click the upward kite button.

Required Files

===============

### Download Link for Flux.2 Klein 9B GGUF Model -

+++++++++++++++++++++++++++++++++++++++++++++++++++

https://huggingface.co/unsloth/FLUX.2-klein-9B-GGUF/resolve/main/flux-2-klein-9b-Q4_K_M.gguf

### Download Link for Flux.2 Klein 9B Text Encoder -

+++++++++++++++++++++++++++++++++++++++++++++++++++++

https://huggingface.co/Comfy-Org/flux2-klein-9B/resolve/main/split_files/text_encoders/qwen_3_8b_fp8mixed.safetensors

### Download Link for Flux.2 Klein 9B VAE -

+++++++++++++++++++++++++++++++++++++++++++++

https://huggingface.co/Comfy-Org/flux2-dev/resolve/main/split_files/vae/flux2-vae.safetensors

LORAs used -

+++++++++++++

The Detail Slider - Klein Edition LORA -

-----------------------------------------

https://civitai.com/models/2326084?modelVersionId=2622287

On the very last generated image I did not use the Detail Slider LORA; instead, I used these three LORAs:

NSFW - FLux Klein LORA (Nsfw solo girl-v2) -

---------------------------------------------

https://civitai.com/models/2319552?modelVersionId=2677698

NippleDiffusion - Flux2.Klein (General [9B] v2) -

--------------------------------------------------

https://civitai.com/models/2331032?modelVersionId=2646182

PussyDiffusion - Flux2.Klein (General [9B] v2) -

-------------------------------------------------

https://civitai.com/models/2337198?modelVersionId=2645238