
LTX-2.3 (newest LTX) using distilled GGUF - I2V

Updated: Mar 7, 2026

Type: Workflows (tool)

Stats: 965 · 0 Reviews

Published: Mar 7, 2026

Base Model: LTXV2

Hash: AutoV2 6AEF6DA484

Version 3.0 has been updated to use LTX 2.3, which should provide some enhancements such as sharper video, better audio, etc.

The full-size models are HUGE and can't be used by most people, so this workflow uses GGUF quantizations for both the main model and the text encoder (Gemma). I'm trying to keep the footprint small, but it's getting hard to do.

**************LTX 2.3 VERSION*************

https://huggingface.co/Lightricks/LTX-2.3/tree/main (upscaler)

https://huggingface.co/QuantStack/LTX-2.3-GGUF/tree/main/LTX-2.3-distilled (main gguf)

https://huggingface.co/Kijai/LTX2.3_comfy/tree/main (vae, text projector)

./models/text_encoders

gemma-3-12b-it-Q2_K.gguf (only ~4 GB, good for low VRAM)

ltx-2.3_text_projection_bf16.safetensors

./models/unet (DISTILLED version: only needs 8 steps)

LTX-2.3-distilled-Q5_K_S.gguf (the distilled build helps on low VRAM)

./models/vae (2.3)

LTX23_audio_vae_bf16.safetensors

LTX23_video_vae_bf16.safetensors

./models/latent_upscale_models (2.3)

ltx-2.3-spatial-upscaler-x2-1.0.safetensors
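The download-and-place steps above can be sketched as a small shell script. The exact file paths inside each Hugging Face repo are assumptions pieced together from the links above (and the Gemma GGUF repo isn't linked at all, so it's left as a note), so verify each file on its repo page before running; by default the script only prints its download plan, and setting DRY_RUN=0 performs the actual downloads with wget.

```shell
#!/bin/sh
# Sketch: place the LTX 2.3 files listed above into a ComfyUI models tree.
# COMFYUI_ROOT defaults to the current directory; repo file paths are
# assumptions based on the links above -- verify before downloading.
set -eu
ROOT="${COMFYUI_ROOT:-.}/models"
DRY_RUN="${DRY_RUN:-1}"   # 1 = just print the plan, 0 = actually download

fetch() {  # fetch <models subdir> <huggingface "resolve" URL>
    dir="$ROOT/$1"
    mkdir -p "$dir"
    if [ "$DRY_RUN" = "1" ]; then
        echo "would download $2 -> $dir"
    else
        wget -c -P "$dir" "$2"   # -c resumes partial downloads
    fi
}

# text encoder projection (the gemma-3-12b-it-Q2_K.gguf repo isn't linked
# above -- fetch that one manually into models/text_encoders)
fetch text_encoders \
  "https://huggingface.co/Kijai/LTX2.3_comfy/resolve/main/ltx-2.3_text_projection_bf16.safetensors"

# distilled main model GGUF
fetch unet \
  "https://huggingface.co/QuantStack/LTX-2.3-GGUF/resolve/main/LTX-2.3-distilled/LTX-2.3-distilled-Q5_K_S.gguf"

# audio + video VAEs
fetch vae \
  "https://huggingface.co/Kijai/LTX2.3_comfy/resolve/main/LTX23_audio_vae_bf16.safetensors"
fetch vae \
  "https://huggingface.co/Kijai/LTX2.3_comfy/resolve/main/LTX23_video_vae_bf16.safetensors"

# spatial upscaler
fetch latent_upscale_models \
  "https://huggingface.co/Lightricks/LTX-2.3/resolve/main/ltx-2.3-spatial-upscaler-x2-1.0.safetensors"
```

Running it once with the default dry run is a cheap way to confirm the folder layout matches your ComfyUI install before committing to the large downloads.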

Models are in a Subgraph.