Rebels Sulphur 2 GGUF (LTX-2.3 NSFW Model)

Updated: May 9, 2026

Tags: base model, nsfw, ltx 2.3, sulphur

Download: Archive (Other), 6.05 KB — 1 variant available

Type: Workflows
Stats: 1,312
Published: May 7, 2026
Base Model: LTXV 2.3
Hash (AutoV2): B6DF87E6B8

GOONERS REJOICE

LTX-2.3 Sulphur (NSFW MODEL) Distil Workflow — Installation Guide

This workflow runs on smthemex's ComfyUI_LTX2_SM custom node pack with the sulphur_distil distilled transformer GGUF. Below is everything you need to install before queuing the workflow.


1. Custom Nodes

Open a terminal in your ComfyUI custom_nodes directory and clone:

git clone https://github.com/smthemex/ComfyUI_LTX2_SM.git

Then install requirements with the embedded Python (portable users):

cd ComfyUI_LTX2_SM
..\..\..\python_embeded\python.exe -m pip install -r requirements.txt

Or run the same command with your venv's Python if you're not on the portable build. Restart ComfyUI fully after the install.

Make sure ComfyUI itself is updated to the latest stable — older builds won't have the Gemma / GGUF text encoder plumbing this pack relies on.

Repo: https://github.com/smthemex/ComfyUI_LTX2_SM


2. Models

All four core files come from smthem/LTX-2.3-test-gguf: https://huggingface.co/smthem/LTX-2.3-test-gguf/tree/main

⚠️ Important: Do NOT use a generic Gemma 3 GGUF from Google, Bartowski, Unsloth, etc. The smthemex loader expects HuggingFace-style tensor names. Standard llama.cpp-format GGUFs will throw an UnboundLocalError: embed_tokens_key on load. Only use the Gemma GGUF from the smthem repo above.
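If you want to check which flavor you grabbed before ComfyUI errors out, here's a quick sketch. It's a heuristic, not a real GGUF parse — tensor names sit as plain strings inside GGUF files, so `strings` can find them — and `check_gguf_style` is a name I made up for this example:

```shell
# Heuristic: HF-style Gemma GGUFs embed tensor names like
# "model.embed_tokens.weight"; standard llama.cpp conversions
# use "token_embd.weight" instead.
check_gguf_style() {
  if strings "$1" | grep -q 'model\.embed_tokens'; then
    echo "hf-style"          # what the smthemex loader expects
  elif strings "$1" | grep -q 'token_embd'; then
    echo "llama.cpp-style"   # will throw embed_tokens_key
  else
    echo "unknown"
  fi
}

check_gguf_style ComfyUI/models/gguf/gemma-3-12b-it-qat-Q4_0.gguf
```

If it prints llama.cpp-style, you have the wrong file — go back to the smthem repo.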

Transformer (Sulphur Distil)

File: sulphur_distil-Q6_K.gguf (18.1 GB)
Folder: ComfyUI/models/gguf/

Goes in gguf/, not unet/ or diffusion_models/.

OPTIONAL LoRA (FOR VANILLA LTX-2.3)

Do not use the LoRA with this model. The LoRA is for the regular vanilla LTX-2.3 model, in case you have too little VRAM to run this NSFW model by itself. The LoRA works, just not as well as the full model does. DO NOT USE THE LORA WITH THE MODEL! lol

https://huggingface.co/Seregil13th/Sulphur-2-base/blob/main/sulphur_lora_rank_768.safetensors

Text Encoder (Gemma 3)

File: gemma-3-12b-it-qat-Q4_0.gguf (8.7 GB)
Folder: ComfyUI/models/gguf/

Connector

File: connector.safetensors (6.34 GB)
Folder: ComfyUI/models/checkpoints/

Frame interpolation: film_net_fp16.safetensors goes in the "frame_interpolation" folder (create it if it doesn't exist).

Video + Audio VAEs

Two options — either source works:

Option A — from the smthem repo (same page as everything else):

  • ltx-2.3-22b-distilled_video_vae.safetensors (1.45 GB) → ComfyUI/models/vae/

  • ltx-2.3-22b-distilled_audio_vae.safetensors (365 MB) → ComfyUI/models/vae/

Option B — from Kijai's LTX2.3_comfy repo: https://huggingface.co/Kijai/LTX2.3_comfy/tree/main/vae

Grab the matching video and audio VAE files from the vae/ subfolder and drop them in ComfyUI/models/vae/.
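If you'd rather script the downloads than click through the browser, the Hugging Face CLI can drop files straight into the right folders. This assumes you have huggingface_hub installed (pip install -U huggingface_hub); repo and filenames are the ones listed above:

```
huggingface-cli download smthem/LTX-2.3-test-gguf sulphur_distil-Q6_K.gguf --local-dir ComfyUI/models/gguf
huggingface-cli download smthem/LTX-2.3-test-gguf gemma-3-12b-it-qat-Q4_0.gguf --local-dir ComfyUI/models/gguf
huggingface-cli download smthem/LTX-2.3-test-gguf connector.safetensors --local-dir ComfyUI/models/checkpoints
```

Run the same command with the two VAE filenames and --local-dir ComfyUI/models/vae if you go with Option A.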


3. Final Folder Structure

ComfyUI/
└── models/
    ├── gguf/
    │   ├── sulphur_distil-Q6_K.gguf
    │   └── gemma-3-12b-it-qat-Q4_0.gguf
    ├── checkpoints/
    │   └── connector.safetensors
    ├── frame_interpolation/
    │   └── film_net_fp16.safetensors
    └── vae/
        ├── ltx-2.3-22b-distilled_video_vae.safetensors
        └── ltx-2.3-22b-distilled_audio_vae.safetensors
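To set up all the folders in one go so the loader dropdowns can see them after a restart, a small sketch (run from the directory that contains ComfyUI/; "frame_interpolation" is included per the film_net note above and is harmless if your setup keeps it elsewhere):

```shell
# Create every model folder this guide references.
for d in gguf checkpoints vae frame_interpolation; do
  mkdir -p "ComfyUI/models/$d"
done

# Sanity check: list what's there now.
ls ComfyUI/models
```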

4. Hardware Notes

The smthemex repo lists 6 GB VRAM + 48 GB RAM (peak) as the spec, leaning on the streaming offload code. If you have less system RAM, enable a generous Windows pagefile or you'll hit OOM on the transformer load step. The Q6_K transformer alone is ~18 GB before the encoder and connector come in — there is no shortcut around the RAM requirement.

I HIGHLY RECOMMEND UPDATING YOUR BAT FILE WITH THESE FLAGS:
--lowvram --disable-xformers --use-pytorch-cross-attention --reserve-vram 2 --disable-smart-memory

(these flags help keep the text encoder pushed back onto the CPU and make ComfyUI spend your VRAM on the model as first priority)
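On the portable build these flags go on the launch line inside run_nvidia_gpu.bat (the stock launcher filename — yours may differ). A sketch of what the edited file might look like, assuming the default portable layout:

```
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --lowvram --disable-xformers --use-pytorch-cross-attention --reserve-vram 2 --disable-smart-memory
pause
```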


5. Troubleshooting

  • UnboundLocalError: embed_tokens_key → wrong Gemma GGUF. You need gemma-3-12b-it-qat-Q4_0.gguf from the smthem repo specifically. See warning above.

  • Connector not appearing in dropdown → it goes in models/checkpoints/, not models/clip/ or models/text_encoders/.

  • Sulphur GGUF missing from dropdown → it goes in models/gguf/, not models/diffusion_models/ or models/unet/.

  • Out of memory on load → check pagefile size; this workflow benchmarks at ~48 GB peak system memory.
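Before digging through the console, this small sketch (the function name is mine) audits the layout from section 3 and prints OK/MISSING for each file the workflow tries to load. Run it from the directory containing ComfyUI/:

```shell
# Audit the expected model files from section 3.
check_models() {
  for f in gguf/sulphur_distil-Q6_K.gguf \
           gguf/gemma-3-12b-it-qat-Q4_0.gguf \
           checkpoints/connector.safetensors \
           vae/ltx-2.3-22b-distilled_video_vae.safetensors \
           vae/ltx-2.3-22b-distilled_audio_vae.safetensors; do
    if [ -f "ComfyUI/models/$f" ]; then
      echo "OK      $f"
    else
      echo "MISSING $f"
    fi
  done
}

check_models
```

Anything flagged MISSING either wasn't downloaded or landed in the wrong folder — cross-check against the bullets above.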


If you hit issues outside this list, drop a comment with the full ComfyUI console traceback (not just the popup) and the file you put in each folder.