[DO NOT USE, OUTDATED] Lumina 2 optimization plugin for ComfyUI

Nov 9, 2025

(Updated: 21 hours ago)

tool guide

A very basic ComfyUI plugin that patches the Lumina 2 model. It has no GUI and no nodes, because I don't know how to write ComfyUI nodes.

Update:

(12/2/2025): ComfyUI added its own fp16 implementation/hack in 0.3.77,

and the "TorchCompileModelAdvanced" node in "ComfyUI-KJNodes" now supports torch.compile for Lumina 2 and Z-Image.

This plugin can RIP (rest in peace).

(11/26/2025): Be careful of compatibility issues after ComfyUI 0.3.75.

TL;DR: Alibaba, the company behind Qwen, just dropped its next unbelievably efficient model, Z-Image, which is very similar to Lumina 2, so ComfyUI is constantly modifying the Lumina 2 code base.

v1.1 (11/23/2025):

  • Fixed the ComfyUI "IMPORT FAILED" warning. (It was just a warning that ComfyUI can't find any nodes in the file; not a bug, but annoying.)

  • Print a log message when patching the code.

  • More optimizations for torch.compile.


This plugin has two optimizations:

Enable torch.compile:

Maybe 30% faster, especially on newer GPUs (RTX 3xxx and later).

Note: When you enable torch.compile for the first time, the model needs to be compiled, which usually takes 60~120 s. The progress bar will appear stuck at step 0 for a long time. This is normal; do not cancel the job. It only happens once.

Why not use the ComfyUI built-in torch compile node? As of writing this (11/10/2025), the built-in torch compile node in ComfyUI does not work properly. For some reason it has no effect, and I don't know how to fix it, so I added a simple implementation in this plugin.
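The idea can be sketched like this. This is a minimal illustration, not the plugin's actual code: the class name is a stand-in, and it uses `backend="eager"` (tracing only, no codegen) so the snippet runs anywhere; the real plugin would use the default inductor backend to get an actual speedup.

```python
import torch

def apply_torch_compile(model_class):
    # Hypothetical sketch: replace the class's forward with a compiled
    # version, so every instance created afterwards picks it up.
    # backend="eager" only traces the model; swap it for the default
    # backend to get real kernel compilation (and the 60~120 s warm-up).
    if getattr(model_class, "_compiled", False):
        return model_class
    model_class.forward = torch.compile(model_class.forward, backend="eager")
    model_class._compiled = True
    return model_class

class TinyBlock(torch.nn.Module):  # stand-in for the Lumina 2 transformer
    def forward(self, x):
        return torch.nn.functional.silu(x) + x

apply_torch_compile(TinyBlock)
```

Patching the class (rather than one instance) is what lets a node-less plugin work: any model ComfyUI builds later gets the compiled forward automatically.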

Enable FP16 support for old GPUs:

Lumina 2 does not support fp16, only bf16 and fp32. If you have an old GPU that doesn't support bf16 (RTX 2xxx and earlier), ComfyUI runs the model in fp32 mode to prevent overflow, which is extremely slow.

If you enable the "FP32_FALLBACK" setting, the model will automatically recompute any overflowed block in fp32, so you can safely use fp16 mode without overflow (which shows up as a black image output).

fp16 mode may be 3~4x faster than fp32, depending on your hardware.

Note: You need to use the built-in "ModelComputeDtype" node in your workflow to force the model's compute dtype to fp16.
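The FP32_FALLBACK idea can be illustrated with a small NumPy sketch (my own illustration, not the plugin's code): run a block in fp16, and if the result contains inf/NaN, redo that one block in fp32.

```python
import numpy as np

def run_block_with_fp32_fallback(block_fn, x):
    # Fast path: run the block in fp16.
    out = block_fn(x.astype(np.float16))
    # If anything overflowed to inf (or became NaN), recompute the
    # whole block in fp32 and cast the now-finite result back to fp16.
    if not np.isfinite(out).all():
        out = block_fn(x.astype(np.float32)).astype(np.float16)
    return out

# Toy "block" whose intermediate value overflows fp16 (max ~65504)
# even though its final result fits comfortably in fp16.
def toy_block(x):
    return (x * 1000.0) / 1000.0
```

For an input of 300, the fp16 path hits 300000 mid-block and collapses to inf (the black-image case); the fallback reruns the block in fp32 and gets 300 back, which casts to fp16 without loss.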

How to use:

  • Put the attached .py file in the ComfyUI "custom_nodes" directory.

  • Open it with a text editor. Change settings. Save it.

  • Then restart your ComfyUI.

Note:

  • This file directly patches the Lumina 2 code while ComfyUI is loading. There are no nodes. The settings take effect immediately and globally.

  • These settings only enable hardware acceleration. They do not change the model's output (or its quality).
