ComfyUI beginner friendly Flux.2 Klein 9B GGUF In & Outpaint Workflows with Easy Prompt Saver by Sarcastic TOFU
The FLUX.1 [dev] Model is licensed by Black Forest Labs, Inc. under the FLUX.1 [dev] Non-Commercial License. Copyright Black Forest Labs, Inc.
IN NO EVENT SHALL BLACK FOREST LABS, INC. BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH USE OF THIS MODEL.
This is a set of very simple In & Outpaint workflows that saves your Flux.2 Klein 9B GGUF inpainting or outpainting generation data, based on a single reference image, into a human-readable .txt file. The workflow automatically collects your metadata and writes it to the .txt file. This workflow set contains two files: FLUX2K_9B_Painting_v1.0 (Default Dimensions).json is your inpainting workflow, and FLUX2K_9B_Painting_v1.0 (Custom Dimensions).json is your outpainting workflow. You will find all the saved prompt files it generated, along with the images, inside the archive (.zip) that ships with this workflow set. Also, with the Image Saver Simple node used here, you can either embed the workflow itself into each saved image, or save the image and workflow separately.
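The "human readable .txt" idea above can be sketched in a few lines of plain Python. The field names, layout, and file name below are illustrative assumptions, not the exact output of the workflow's prompt-saver node:

```python
# Minimal sketch of saving generation metadata to a human-readable .txt file.
# Field names and formatting are illustrative assumptions, not the exact
# output of the workflow's prompt-saver node.
from pathlib import Path

def save_generation_data(path, prompt, seed, steps, cfg, sampler, model):
    """Write one generation's settings as readable 'key: value' lines."""
    lines = [
        f"Model:   {model}",
        f"Prompt:  {prompt}",
        f"Seed:    {seed}",
        f"Steps:   {steps}",
        f"CFG:     {cfg}",
        f"Sampler: {sampler}",
    ]
    Path(path).write_text("\n".join(lines) + "\n", encoding="utf-8")

save_generation_data(
    "gen_0001.txt",
    prompt="replace the sky with a stormy sunset",
    seed=123456, steps=4, cfg=1.0,
    sampler="euler", model="flux-2-klein-9b-Q4_K_M.gguf",
)
print(Path("gen_0001.txt").read_text(encoding="utf-8"))
```

Because the output is plain text rather than embedded PNG metadata, you can grep, diff, or archive these files alongside the images.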
The Flux.2 Klein 4B and 9B models are a new family of high-speed AI image generators that use a "rectified flow" architecture to unify image generation and professional-grade editing in a single, compact package. These models are significantly faster than older versions because they use "step-distillation," which lets them create high-quality images in just 4 steps, reaching sub-second speeds on modern hardware, rather than the dozens of steps required by previous models.

The 4B variant is released under the permissive Apache 2.0 License for both personal and commercial use, while the more powerful 9B variant uses a Non-Commercial License intended for research and personal projects. Both models support 11 native aspect ratios ranging from 1:1 square to 21:9 ultrawide and 9:21 vertical, and they can produce sharp images up to 4 megapixels (such as 2048x2048). To make them even more accessible, there are Q (quantized) builds such as the FP8 (8-bit) and NVFP4 (4-bit) versions, which reduce the "brain size" of the model to save memory: the FP8 version is about 1.6x faster and uses 40% less VRAM, while the NVFP4 version is up to 2.7x faster and uses 55% less VRAM.

While you can run the Flux.2 Klein 4B GGUF model on systems with as little as 2GB VRAM, to run the Flux.2 Klein 9B GGUF model smoothly you need at least 6GB VRAM (i.e. a Q4 GGUF; the very low-end Q2 and Q3 GGUF model files can't produce quality results, so if you have a very low-end system with 2GB or 4GB VRAM you should use the Klein 4B GGUF models instead). I used the Q4_K_M file in this workflow set; it works well with 6 or 8GB VRAM and provides acceptable-quality results.
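The VRAM figures above come down to simple arithmetic: model footprint ≈ parameter count × bits per weight. The bits-per-weight values below are rough averages I am assuming for each format (Q4_K_M mixes 4-bit and 6-bit blocks, so it averages a bit under 5 bits per weight); treat the results as ballpark estimates for the weights alone, not total VRAM use:

```python
# Back-of-envelope estimate of weight storage for a 9B-parameter model at
# different quantization levels. Bits-per-weight values are assumed averages,
# not official numbers, and activations/VAE/text encoder add extra on top.
PARAMS = 9e9  # ~9 billion weights

BITS_PER_WEIGHT = {
    "FP16 (full)":   16.0,
    "FP8":            8.0,
    "Q4_K_M (GGUF)":  4.85,  # mixed 4/6-bit blocks average just under 5 bpw
    "NVFP4":          4.0,
}

def approx_size_gb(params, bits):
    """Size in gigabytes: params * bits / 8 bits-per-byte / 1e9 bytes-per-GB."""
    return params * bits / 8 / 1e9

for name, bits in BITS_PER_WEIGHT.items():
    print(f"{name:>15}: ~{approx_size_gb(PARAMS, bits):.1f} GB")
```

Under these assumptions the Q4_K_M weights land around 5.5 GB, which is consistent with the "works well with 6 or 8GB VRAM" guidance above.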
You can download all the necessary model files used in this workflow set from HuggingFace (details are mentioned below). Make sure your ComfyUI installation is recent enough, install any nodes this workflow needs using ComfyUI Manager, and place the correct files in the correct folders. Also check out my other workflows for SD 1.5 + SDXL 1.0, Pony, WAN 2.1, WAN 2.2, MagicWAN Image v2, QWEN, HunyuanImage-2.1, HiDream, KREA, Chroma, AuraFlow, NoobAI, Illustrious, Lumina2, Z-Image Turbo, Flux.2 Klein 4B, Flux.1 Dev and Kandinsky Image 5 Lite (T2I & I2I) models. Feel free to toss some yellow Buzz on stuff you like.
How to use this -
#1. Just select your Flux.2 Klein 9B GGUF model files first, and
#2. then select your image for editing.
#3. Next, enter your image-editing instructions (be very precise and targeted, like the examples given).
#4. Then select how many output images you want (change the number beside the "Run" button).
#5. After this, select the image sampling method, CFG, steps, and other settings (you may wanna stay with the defaults).
#6. Finally, press the Run button to generate. That's it.
Required Files
===============
### Download Link for Flux.2 Klein 9B GGUF Model -
+++++++++++++++++++++++++++++++++++++++++++++++++++
https://huggingface.co/unsloth/FLUX.2-klein-9B-GGUF/resolve/main/flux-2-klein-9b-Q4_K_M.gguf
### Download Link for Flux.2 Klein 9B Text Encoder -
+++++++++++++++++++++++++++++++++++++++++++++++++++++
### Download Link for Flux.2 Klein 9B VAE -
+++++++++++++++++++++++++++++++++++++++++++++
https://huggingface.co/Comfy-Org/flux2-dev/resolve/main/split_files/vae/flux2-vae.safetensors
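A quick way to confirm the downloads ended up in the right place is a small check script. The folder layout below follows the common ComfyUI / ComfyUI-GGUF convention (GGUF diffusion models under `models/unet`, the VAE under `models/vae`); that layout and the `COMFY_ROOT` path are assumptions, so adjust them to match your install:

```python
# Sketch: verify the required files sit in the usual ComfyUI model folders.
# Folder names follow common ComfyUI / ComfyUI-GGUF conventions and are
# assumptions -- adjust COMFY_ROOT and the subfolders to your install.
from pathlib import Path

COMFY_ROOT = Path("ComfyUI/models")  # assumed install location

REQUIRED = {
    "unet": ["flux-2-klein-9b-Q4_K_M.gguf"],  # GGUF diffusion model
    "vae":  ["flux2-vae.safetensors"],        # Flux.2 VAE
}

def missing_files(root, required):
    """Return (subfolder, filename) pairs not present on disk."""
    return [
        (sub, name)
        for sub, names in required.items()
        for name in names
        if not (root / sub / name).is_file()
    ]

for sub, name in missing_files(COMFY_ROOT, REQUIRED):
    print(f"MISSING: {COMFY_ROOT / sub / name}")
```

If the script prints nothing, ComfyUI should be able to see both files in its model pickers.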
LORA used -
+++++++++++++
UNCROP/INPAINT/OUTPAINT WITH CONTEXT IMAGE for F.2 Klein 9B LORA -
-------------------------------------------------------------------
Unlike my previously released inpainting/outpainting workflow for Flux.2 Klein 4B, which did not use any dedicated LORA for inpaint/outpaint tasks, this Flux.2 Klein 9B workflow set uses this LORA to greatly improve output quality. But I built this workflow so that you don't need to worry about white masks for inpainting or white padding for outpainting; with a proper prompt build-up and a few settings tweaks you can get very acceptable results. This workflow set works well on low-VRAM (8GB / 12GB) systems (it may even work on 6GB VRAM if you can find the right combination of GGUF model and GGUF CLIP files to replace the ones I used).
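For readers curious what the workflow hides from you: classic outpainting pads the image with a blank border and marks that border in a mask, so the model only paints the new area. A minimal numpy sketch of that canvas-and-mask construction (the shapes, the white fill value, and the 0/1 mask convention are illustrative assumptions, not this workflow's internals):

```python
# Sketch of the canvas + mask an outpainting pipeline typically builds:
# the original image sits in the centre, the padded border is filled white,
# and the mask is 1 wherever the model should generate new content.
# Shapes, fill value, and mask convention are illustrative assumptions.
import numpy as np

def make_outpaint_canvas(image, pad, fill=255):
    """Pad an HxWxC uint8 image on all sides; return (canvas, mask)."""
    h, w, c = image.shape
    canvas = np.full((h + 2 * pad, w + 2 * pad, c), fill, dtype=image.dtype)
    canvas[pad:pad + h, pad:pad + w] = image       # original in the centre
    mask = np.ones(canvas.shape[:2], dtype=np.uint8)
    mask[pad:pad + h, pad:pad + w] = 0             # 0 = keep, 1 = generate
    return canvas, mask

img = np.zeros((64, 64, 3), dtype=np.uint8)        # stand-in 64x64 image
canvas, mask = make_outpaint_canvas(img, pad=32)
print(canvas.shape, int(mask.sum()))               # padded size, border pixels
```

The LORA-assisted workflow above spares you from assembling this by hand, but the same keep/generate split is what happens under the hood.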

