Redcraft DX3 ZIB Distilled LoRA in Rank-256 format. The LoRA weight can be adjusted to adapt it to various ZIB fine-tuned models, and it is fully compatible with the Z-Image (non-turbo) base model.
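As a quick illustration of how an adjustable LoRA weight works (a sketch, not this repo's actual code; the function name and dimensions are hypothetical), a rank-256 LoRA contributes a low-rank delta that is scaled before being merged into the base weight:

```python
# Hypothetical sketch of LoRA weight merging: W_eff = W + scale * (B @ A).
# "scale" is the adjustable LoRA weight; rank 256 matches this release.
import numpy as np

def merge_lora(W, A, B, scale=1.0):
    """Merge a low-rank LoRA delta (B @ A) into base weight W at a given scale."""
    return W + scale * (B @ A)

rng = np.random.default_rng(0)
d_out, d_in, rank = 512, 512, 256          # illustrative layer dims, rank-256
W = rng.normal(size=(d_out, d_in))         # base model weight
B = rng.normal(size=(d_out, rank)) * 0.01  # LoRA "up" matrix
A = rng.normal(size=(rank, d_in)) * 0.01   # LoRA "down" matrix

W_eff = merge_lora(W, A, B, scale=0.8)     # e.g. LoRA weight 0.8
```

Setting `scale=0.0` recovers the base model exactly; values between 0 and 1 blend in the distillation behavior, which is why the same LoRA can be tuned per fine-tune.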
Z-Image-Distilled V3 2026/2/15
A DF11 losslessly compressed RedZDX V3 is now available. Learn more: Dynamic-length Float (DFloat11)
Thanks to mingyi456/Z-Image-Distilled-DF11-ComfyUI
Thanks to Bubbliiiing, VideoX-Fun, and Alibaba-PAI for providing a more efficient distillation solution:
https://huggingface.co/alibaba-pai/Z-Image-Fun-Lora-Distill
Speed of Light, Power of Flow: The new ZID v3 "Lucis" is powered by the latest ZIB acceleration. Building on the ZID v2 training sets, we have distilled a more efficient Z-Image-based RedDX3. Now you get solid results in just 5 steps.
Rapid Prototyping: Test LoRA training hypotheses instantly with 'near-zero' latency.
Stochastic Pre-sampling: Serves as a high-speed, high-entropy source for ZiTurbo pipelines.
Hybrid Workflows: Pair seamlessly with Klein 9B for cascaded refinement or ensemble generation.
inference cfg: 1.0-1.5 (1.0 recommended)
inference steps: 5 (5-15 range)
sampler / scheduler: Euler / simple
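The recommended settings above can be collected into a plain config dict (the keys here are illustrative, not official pipeline parameter names; map them onto whatever sampler node or pipeline argument your setup actually uses):

```python
# Recommended ZID v3 inference settings from this card.
# Key names are assumptions; adapt them to your pipeline's real parameters.
ZID_V3_SETTINGS = {
    "guidance_scale": 1.0,      # cfg range 1.0-1.5; 1.0 recommended
    "num_inference_steps": 5,   # 5 is the target; 5-15 also works
    "sampler": "euler",         # Euler sampler
    "scheduler": "simple",      # simple scheduler
}

def clamp_cfg(cfg):
    """Clamp a user-supplied cfg into the card's recommended 1.0-1.5 range."""
    return min(max(cfg, 1.0), 1.5)
```

In ComfyUI these map onto the KSampler fields (cfg, steps, sampler_name, scheduler); in a diffusers-style pipeline they would typically correspond to `guidance_scale` and `num_inference_steps`.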

