Updated: Feb 6, 2026
Quantized fp8 version of circlestone-labs/Anima for ComfyUI.
It contains calibrated metadata for hardware fp8 linear layers. If your GPU supports it, ComfyUI will use hardware fp8 automatically, which should be a little faster. For more about hardware fp8 and its hardware requirements, see ComfyUI TensorCoreFP8Layout.
All credit belongs to the original model author. The license is the same as the original model's.
You can ignore the ComfyUI log warnings about many keys not being loaded. It's a small bug in ComfyUI: it checks the wrong keys. Those keys are metadata and they are in fact loaded.
fp16 patch: a plugin/patch for ComfyUI that lets you run Anima with fp16 on old GPUs.
v1.2: sets the model's default dtype to fp16, so the "ModelComputeDtype" node is no longer needed. In theory this is also faster, because ComfyUI converts all weights to fp16 once at load time instead of on the fly, which, iirc, old GPUs are not good at.
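To illustrate the load-time vs. on-the-fly conversion point above, here is a minimal numpy sketch (not ComfyUI's actual code; all names are made up for the illustration). Converting once at load means every later matmul runs directly in fp16; casting per call re-converts the large weight tensor on every forward pass.

```python
import numpy as np

def load_weights_fp16(weights_fp32):
    """Convert weights to fp16 once, at load time (what the v1.2 patch effects)."""
    return {name: w.astype(np.float16) for name, w in weights_fp32.items()}

def forward_preconverted(x, weights_fp16):
    # No per-call dtype conversion of the weights is needed.
    return x.astype(np.float16) @ weights_fp16["linear"]

def forward_on_the_fly(x, weights_fp32):
    # Casts the (large) weight tensor on every call -- the repeated
    # overhead that converting at load time avoids.
    return x.astype(np.float16) @ weights_fp32["linear"].astype(np.float16)

weights = {"linear": np.random.randn(8, 8).astype(np.float32)}
x = np.random.randn(1, 8).astype(np.float32)

w16 = load_weights_fp16(weights)
y1 = forward_preconverted(x, w16)
y2 = forward_on_the_fly(x, weights)
assert y1.dtype == np.float16
assert np.allclose(y1, y2)  # same result; only where the cast happens differs
```

Both paths compute the same fp16 result; the difference is purely how often the weight conversion cost is paid.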
A direct fix in ComfyUI does not look easy, so this hot patch may stick around for a while.
Probably because of fp16_accumulation, fp16 is a little (~10%) faster than bf16 on my 4xxx card.

