Neta Lumina [TensorCoreFP8]

Updated: Dec 10, 2025
Verified: SafeTensor
Type: Checkpoint Trained
Published: Nov 28, 2025
Base Model: Lumina
Hash (AutoV2): 94D8B2079F
License: same as the original model

This page contains FP8-quantized DiT models of Neta Lumina for ComfyUI, along with an FP8-quantized Gemma 2 2B text encoder.

All credit belongs to the original model author. The license is the same as the original model's.
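
For background, here is a minimal sketch of what per-tensor FP8 quantization looks like in PyTorch. This is an illustration under an assumed per-tensor scaling scheme; the exact recipe used to produce these checkpoints is not documented on this page, and the helper name is hypothetical.

```python
import torch

FP8_E4M3_MAX = 448.0  # largest finite value representable in float8_e4m3fn

def quantize_fp8_per_tensor(w: torch.Tensor):
    """Quantize a BF16/FP32 weight to FP8 with one per-tensor scale.

    Returns the FP8 tensor plus the float32 scale needed to recover the
    original magnitude: w ~= w_fp8.float() * scale.
    """
    scale = (w.abs().amax().float() / FP8_E4M3_MAX).clamp(min=1e-12)
    w_fp8 = (w.float() / scale).clamp(-FP8_E4M3_MAX, FP8_E4M3_MAX)
    return w_fp8.to(torch.float8_e4m3fn), scale

# Quantize one DiT-sized linear weight and check the round-trip error.
w = torch.randn(4096, 4096, dtype=torch.bfloat16)
w_fp8, scale = quantize_fp8_per_tensor(w)
err = (w.float() - w_fp8.float() * scale).abs().max()
print(f"max abs round-trip error: {err.item():.4f}")
```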


Update (11/27/2025): mixed precision and fp8 tensor core support (mptc).

This is a new ComfyUI feature that adds FP8 tensor core compute combined with mixed precision.

In short:

Mixed precision: keeps important layers in BF16.

FP8 tensor core support: on supported GPUs this is much faster (roughly 30-80%) than BF16 and classic scaled-FP8 models, because ComfyUI does the calculations directly in FP8 instead of dequantizing to BF16 first. torch.compile is recommended; see the sketch after this list.
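
For illustration, here is a minimal sketch of the two execution paths, reusing the hypothetical quantize_fp8_per_tensor helper from the sketch above. torch._scaled_mm is a private PyTorch API whose signature here follows recent releases and may change, and ComfyUI's actual mptc implementation is not shown on this page; treat this as an approximation of the idea, not the real code.

```python
import torch

FP8_E4M3_MAX = 448.0

def quantize_fp8_per_tensor(w):  # same hypothetical helper as above
    scale = (w.abs().amax().float() / FP8_E4M3_MAX).clamp(min=1e-12)
    w_fp8 = (w.float() / scale).clamp(-FP8_E4M3_MAX, FP8_E4M3_MAX)
    return w_fp8.to(torch.float8_e4m3fn), scale

def linear_classic_scaled_fp8(x, w_fp8, w_scale):
    # Classic scaled-FP8 path: FP8 only saves storage. The weight is
    # dequantized back to BF16 and the tensor cores compute in BF16.
    w = w_fp8.to(torch.bfloat16) * w_scale.to(torch.bfloat16)
    return x @ w.t()

def linear_fp8_tensor_core(x, w_fp8, w_scale):
    # FP8 tensor-core path: quantize the activation on the fly and let
    # the GPU multiply directly in FP8. Needs an sm89+ GPU (Ada/Hopper)
    # and matmul dimensions divisible by 16.
    x_scale = (x.abs().amax().float() / FP8_E4M3_MAX).clamp(min=1e-12)
    x_fp8 = (x.float() / x_scale).to(torch.float8_e4m3fn)
    # _scaled_mm wants a column-major second operand, hence .t() on a
    # row-major (out_features, in_features) weight.
    return torch._scaled_mm(
        x_fp8, w_fp8.t(),
        scale_a=x_scale, scale_b=w_scale,
        out_dtype=torch.bfloat16,
    )

# Under mixed precision, precision-sensitive layers would simply skip
# the FP8 path and run as plain BF16 linear layers instead.
if torch.cuda.is_available() and torch.cuda.get_device_capability() >= (8, 9):
    x = torch.randn(16, 4096, dtype=torch.bfloat16, device="cuda")
    w = torch.randn(4096, 4096, dtype=torch.bfloat16, device="cuda")
    w_fp8, w_scale = quantize_fp8_per_tensor(w)
    # torch.compile can fuse the activation quantization with the matmul,
    # which is where much of the speedup over eager mode comes from.
    fast_linear = torch.compile(linear_fp8_tensor_core)
    print(fast_linear(x, w_fp8, w_scale).shape)  # torch.Size([16, 4096])
```

Per-tensor scales are the simplest scheme and are used here only to keep the sketch short; finer-grained (per-row or per-block) scaling is also common in FP8 checkpoints.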

More info: https://civitai.com/models/2172944/z-image-turbo-tensorcorefp8