Note: This is not the base Z-Image model, but the native full FP32 weights of Z-Image Turbo.
Using the excellent repo by PixWizardry (credit: https://github.com/PixWizardry/ComfyUI_Z-Image_FP32), I was able to merge those shards into a single FP32 safetensors file.
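For anyone curious what "merging shards" means in practice: it's combining the per-shard state dicts into one. This is just an illustrative sketch (plain dicts stand in for the tensors; the filenames and helper are hypothetical, and the linked repo's actual script may differ). With real files you'd load each shard via `safetensors.torch.load_file` and write the result with `save_file`.

```python
# Sketch: merge sharded checkpoint state dicts into one (hypothetical example).
# Plain dicts stand in for tensor shards so the merge logic is clear.

def merge_shards(shards):
    """Combine shard dicts into one state dict, refusing duplicate keys."""
    merged = {}
    for shard in shards:
        for key, tensor in shard.items():
            if key in merged:
                raise ValueError(f"duplicate key across shards: {key}")
            merged[key] = tensor
    return merged

# Toy stand-ins for two shard files of a larger checkpoint
shard_a = {"blocks.0.weight": [0.1], "blocks.0.bias": [0.0]}
shard_b = {"blocks.1.weight": [0.2]}
print(len(merge_shards([shard_a, shard_b])))  # 3 keys in the merged dict
```

With real safetensors shards, the same loop works on the dicts returned by `load_file`, and the merged dict is what gets saved as the single FP32 checkpoint.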
Why FP32? I train my LoRAs on the full-precision version for maximum quality. Initially, I was disappointed with the results: AI Toolkit (Ostris) gave promising previews during training, but inference in ComfyUI looked noticeably washed out and less vibrant.
The reason: AI Toolkit loads and uses the full FP32 model internally for training/sampling, while ComfyUI was limited to the BF16 version... until this merge made FP32 available!
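To see why BF16 can look "watered down" compared to FP32: BF16 keeps only 8 mantissa bits versus FP32's 23, so every weight gets rounded fairly coarsely. Here's a small stdlib-only sketch that emulates BF16 by truncating a float32 to its top 16 bits (this is an illustration of the precision gap, not how ComfyUI casts tensors internally):

```python
import struct

def to_bf16(x: float) -> float:
    # Emulate bfloat16 by keeping only the top 16 bits of the float32
    # representation (sign + 8 exponent bits + 7 mantissa bits).
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))[0]

w = 0.1234567  # an arbitrary example weight
print(w, "->", to_bf16(w))  # BF16 keeps roughly 3 significant decimal digits
```

Each individual rounding error is tiny, but across billions of weights and dozens of sampling steps the differences accumulate, which is consistent with the subtle loss of sharpness and vibrance described above.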
Now, with this FP32 checkpoint, LoRA results match across both tools: sharper, more consistent, and closer to the training previews.
Additional CLIP (not a must, but it gives me more fidelity with the simple merge node): here
UltraFluxVAE: better colors overall.
If you're interested in my Z-Image Turbo LoRA training guide (including configs for AI Toolkit), check it out here: full guide.
Enjoy! 🚀
A cool node I found that works well with this combo: CLIP Attention Multiply. I got better results using it, and I'm still tweaking the settings. Post here

