IMPORTANT UPDATE:
After the latest ComfyUI update, my custom node always fails with an error like "There is no CUDA kernel available", no matter what I try as a fix.
For now I can only recommend that you all search for "qwen 3.5 vl" in the ComfyUI node registry.
I'm very sorry, but I don't know exactly what causes it (searching Google and GitHub didn't reveal the root cause).
As for Heretic support: it is easy to add the Heretic model to the Qwen 3.5 VL node available in the registry. You need to edit the file(s) that list the available models, adding a "someuser/model-name" entry to the model-name array in one or two .py files, and add SYSTEM_PROMPT options. I did this on my side, but I don't have the resources to make another release of the modified node that would be easily adoptable for everyone's setup.
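The kind of edit described above looks roughly like this. This is only a sketch: the actual variable and file names inside the registry node's .py files will differ, and "someuser/model-name" is the placeholder from above, not a real repo id.

```python
# Hypothetical excerpt of a node's model-list .py file (names are assumptions).

# Array of selectable model names; append the Heretic repo id here.
MODEL_NAMES = [
    "Qwen/Qwen3-VL-4B-Instruct",
    "someuser/model-name",  # <- added Heretic (abliterated) model
]

# Optional system prompts offered in the node's dropdown.
SYSTEM_PROMPTS = {
    "default": "Describe this image in detail.",
    "caption": "Write a short caption for this image.",
}
```

After editing, restart ComfyUI so the node re-reads the list.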
A minimalistic custom node that implements Qwen 3 VL image captioning (uncensored).
Highlights:
~15-17 seconds per image
No local LM Studio or remote VLM service required
Smart memory management by ComfyUI
System Prompt can be specified
Supports 'cuda', 'cuda:0', 'cuda:1', 'cpu' devices (not yet fully tested)
By default, uses a Qwen 3 VL model that was abliterated (de-censored) using the "Heretic" technique.
Instructions:
If you downloaded the full 6.58 GB archive, you need to split it into its two parts (the model folder and the node archive).
The folder "Qwen3-VL-4B-Instruct-heretic-7refusal" must be placed into "ComfyUI/models/prompt_generator/".
The archive "ComfyUI-Qwen3VL.zip" must be unzipped into "ComfyUI/custom_nodes".
Restart the ComfyUI backend and reload the frontend (browser) after the steps above.
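If you want to sanity-check the layout, a small script like this can verify that both paths exist (adjust comfy_root to wherever your ComfyUI installation lives; the path names are the ones from the instructions above):

```python
from pathlib import Path

# Adjust this to point at your ComfyUI installation directory.
comfy_root = Path("ComfyUI")

# Expected locations per the installation instructions.
model_dir = comfy_root / "models" / "prompt_generator" / "Qwen3-VL-4B-Instruct-heretic-7refusal"
node_dir = comfy_root / "custom_nodes" / "ComfyUI-Qwen3VL"

for path in (model_dir, node_dir):
    status = "OK" if path.is_dir() else "MISSING"
    print(f"{status}: {path}")
```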
Good luck!
Comments are welcome.

