
Lumina 2 optimization plugin for ComfyUI


Nov 9, 2025


tool guide

A very basic ComfyUI plugin that patches the Lumina 2 model. There is no GUI and no node, because I don't know how to write ComfyUI nodes.

People say Lumina 2 is too slow.

Nope. In fact, Lumina 2 can run at almost 80% of SDXL's speed if you use this plugin's two optimizations:

Enable FP16 support for old GPUs:

Lumina 2 does not support fp16, only bf16 and fp32. If you have an old GPU that doesn't support bf16 (RTX 2xxx and earlier), ComfyUI runs the model in fp32 mode to prevent overflow, which is extremely slow.

If you enable the "FP32_FALLBACK" setting, the model will automatically recompute any overflowed block in fp32. So you can safely use fp16 mode without overflow (which shows up as a black image output).

fp16 mode can be 3~4x faster than fp32, depending on your hardware.
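The fallback idea can be sketched with the standard library alone. This is not the plugin's actual code, just an analogy: `struct`'s `"e"` format is IEEE half precision (fp16, max ≈65504), and the hypothetical `pack_value` helper below tries the cheap format first and falls back to fp32 only when the value overflows, the same way FP32_FALLBACK recomputes only overflowed blocks:

```python
import struct

def pack_value(x: float):
    """Try to store x as fp16; fall back to fp32 on overflow.

    Analogy for the plugin's FP32_FALLBACK: do the cheap fp16 work
    by default, and redo it in fp32 only when the result overflows.
    """
    try:
        return "fp16", struct.pack("<e", x)  # IEEE half precision
    except OverflowError:
        return "fp32", struct.pack("<f", x)  # recompute in fp32

print(pack_value(1024.0)[0])   # fits in fp16
print(pack_value(70000.0)[0])  # exceeds fp16 max (65504), falls back
```

The point of the pattern is that most values (blocks) stay on the fast path; only the rare overflowing ones pay the fp32 cost.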

Note: You need to add the built-in "ModelComputeDtype" node to your workflow to force the model's compute dtype to fp16.

Enable torch.compile:

Maybe 30% faster, especially on newer GPUs (RTX 3xxx and later).

Note: The first time you run with torch.compile enabled, it needs to compile the model, which usually takes 60~120 s. The progress bar will appear stuck at step 0 for a long time; this is normal. Do not cancel the job. This only happens once.

How to use:

  • Put the attached .py file in the ComfyUI "custom_nodes" directory.

  • Open it with a text editor, change the settings, and save it.

  • Then restart ComfyUI.
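The steps above can be sketched from a terminal. All paths and the file name `lumina2_patch.py` here are placeholders (your ComfyUI directory and the attachment's actual name will differ); the `printf` line just stands in for downloading the attachment:

```shell
# Placeholder paths; adjust to your real ComfyUI install and file name.
COMFY=./ComfyUI_demo                     # stand-in for your ComfyUI dir
mkdir -p "$COMFY/custom_nodes"           # a real install already has this
printf 'FP32_FALLBACK = True\n' > lumina2_patch.py   # stand-in attachment
cp lumina2_patch.py "$COMFY/custom_nodes/"
ls "$COMFY/custom_nodes/"                # verify the file is in place
# edit the settings in custom_nodes/lumina2_patch.py, then restart ComfyUI
```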

Note:

  • This file directly patches the Lumina 2 code while ComfyUI is loading. There is no node; the settings take effect immediately and globally.

  • These settings only enable hardware acceleration. They do not change the model's output (or its quality).
