Z-Image Turbo normally needs 20GB+ of VRAM, but GGUF quantization gets it running on 6GB cards.
Here's everything you need:
---
## ⚙️ Critical Settings
| Parameter | Value | Note |
|-----------|-------|------|
| CFG Scale | 1.0 | Higher values burn the image |
| Steps | 8-10 | The Turbo model needs no more |
| Sampler | euler | |
| Scheduler | simple | |
| Resolution | 1024x1024 | Max for 6GB |
| Batch Size | 1 | |
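If you ever edit the workflow JSON by hand (ComfyUI's API format), the settings from the table map onto the `KSampler` node roughly like this. This is a sketch, not a full workflow: the node ids (`"3"`, `"4"`, …) are placeholders for the rest of the graph.

```python
# Sketch of a KSampler node in ComfyUI API-format workflow JSON,
# filled in with the recommended Z-Image Turbo settings.
ksampler = {
    "class_type": "KSampler",
    "inputs": {
        "seed": 0,
        "steps": 8,                 # 8-10 for the Turbo model
        "cfg": 1.0,                 # higher values burn the image
        "sampler_name": "euler",
        "scheduler": "simple",
        "denoise": 1.0,
        "model": ["3", 0],          # placeholder: GGUF UNet loader output
        "positive": ["4", 0],       # placeholder: positive conditioning
        "negative": ["5", 0],       # placeholder: negative conditioning
        "latent_image": ["6", 0],   # placeholder: 1024x1024 empty latent, batch 1
    },
}
```

The key point: with a Turbo (few-step) model, CFG stays at 1.0 and the step count stays in single digits; everything else is standard.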
---
## 📦 Models for 6GB VRAM
| Component | File | Link |
|-----------|------|------|
| UNET | z_image_turbo-q3_k_s.gguf | [HuggingFace](https://huggingface.co/city96/Z-Image-Turbo-gguf) |
| Text Encoder | qwen3-4b-q4_k_m.gguf | [HuggingFace](https://huggingface.co/city96/Qwen3-4B-gguf) |
| VAE | ae.safetensors | [HuggingFace](https://huggingface.co/Comfy-Org/z_image_turbo) |
⚠️ Required node: [ComfyUI-GGUF](https://github.com/city96/ComfyUI-GGUF)
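If you're unsure where the three files go, the usual ComfyUI layout looks like this. This is a sketch based on standard ComfyUI model paths; check the ComfyUI-GGUF README for your version.

```
ComfyUI/
└── models/
    ├── unet/
    │   └── z_image_turbo-q3_k_s.gguf   # loaded via "Unet Loader (GGUF)"
    ├── text_encoders/                  # "clip/" on older ComfyUI versions
    │   └── qwen3-4b-q4_k_m.gguf        # loaded via "CLIPLoader (GGUF)"
    └── vae/
        └── ae.safetensors              # loaded via the regular "Load VAE" node
```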
Even with these low-VRAM quants you get:
- ✅ Legible text on signs
- ✅ Bilingual prompts (EN + CN)
- ✅ ~15 sec per image on 6GB
---
## 📥 Workflows + Full Guide
I've prepared 4 ready-to-use workflows for different setups:
| Workflow | Best For |
|----------|----------|
| 6GB Low VRAM | RTX 3060 |
| Basic BF16/FP8 | 12GB+ cards |
| GGUF Variant | 8GB cards |
| All-in-One | Single checkpoint users |
Plus the full guide includes:
- Many test prompts with results
- Step-by-step screenshots
- FAQ
- Troubleshooting section
I can't fit the entire guide into one Civitai post due to size limits. The article is over 3000 words long, includes multiple JSON files, and has a lot of photos. 😅
Full article available with all downloads: https://smartart.live/articles/260-how-to-run-z-image-gguf-on-6gb-gpu-free-beginner-guide.html
---
Questions? Ask below 👇

