Updated: Dec 22, 2025
FP8-quantized Newbie-image base model for ComfyUI.
All credit belongs to the original model author. License is the same as the original model.
TensorCoreFP8 (tcfp8):
Scaled fp8 + Mixed precision + Hardware fp8 support
On supported GPUs, ComfyUI will automatically run the calculations in FP8 directly instead of dequantizing to BF16. torch.compile is recommended if you can get it up and running.
More info about hw fp8: https://civitai.com/models/2172944
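A minimal sketch of the "scaled fp8 + hardware fp8" idea, not ComfyUI's actual code: weights are stored as float8_e4m3fn plus a per-tensor scale, and on GPUs with hardware FP8 support (Ada/Hopper, sm_89+) the matmul can run directly in FP8 via torch._scaled_mm. Note that torch._scaled_mm is a private PyTorch API whose signature and return value have changed between releases; the helper names below are illustrative only.

```python
import torch

def quantize_scaled_fp8(t: torch.Tensor):
    """Store a BF16/FP32 tensor as FP8 plus a per-tensor dequant scale."""
    fp8_max = torch.finfo(torch.float8_e4m3fn).max       # 448 for e4m3
    scale = t.abs().max().clamp(min=1e-12) / fp8_max     # dequantization scale
    t_fp8 = (t / scale).to(torch.float8_e4m3fn)          # quantized values
    return t_fp8, scale.to(torch.float32)

def linear_hw_fp8(x: torch.Tensor, w_fp8: torch.Tensor, w_scale: torch.Tensor):
    """Hardware-FP8 path: quantize the activation too and multiply in FP8."""
    x_fp8, x_scale = quantize_scaled_fp8(x)
    # torch._scaled_mm expects the second operand column-major; .t() of a
    # contiguous weight gives that. Shapes should be multiples of 16.
    return torch._scaled_mm(
        x_fp8, w_fp8.t(),
        scale_a=x_scale, scale_b=w_scale,
        out_dtype=torch.bfloat16,
    )
```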
Old:
Scaled fp8 + Mixed precision. Does not support hardware fp8 (no calibration data).
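For contrast, a minimal sketch of the fallback path used by the old files (and by GPUs without hardware FP8): the FP8 weight is first dequantized back to BF16 and the matmul runs in BF16. Again, the names are illustrative, not ComfyUI internals.

```python
import torch

def linear_dequant_bf16(x: torch.Tensor, w_fp8: torch.Tensor, w_scale: torch.Tensor):
    """Dequantize the FP8 weight to BF16, then multiply in BF16."""
    w = w_fp8.to(torch.bfloat16) * w_scale.to(torch.bfloat16)  # dequantize
    return x.to(torch.bfloat16) @ w.t()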
Gemma 3 4b:
Scaled fp8 + Mixed precision.
Jina clip (TBD):
Jina CLIP is very small, so quantizing it does not seem necessary.

