LTX-2.3 DEV/DIST - IMAGE to Video and TEXT to Video with Ollama/RTX VSR

Updated: Mar 31, 2026

Tags: tool, video, audio, ollama, i2v, t2v

Type: Workflows

Stats: 1,773

Reviews: 0

Published: Mar 11, 2026

Base Model: LTXV 2.3

Hash: AutoV2 AE86FA2FAE

Creator: tremolo28

V2.5 LTX-2.3 DEV & Distilled Video with Audio

Image to Video and Text to Video workflows; both can use your own prompts or Ollama-generated/enhanced prompts.

  • Works with the latest LTX 2.3 Distilled model (8 steps, CFG=1) or the Dev model (20 steps, CFG=3)

  • Updated the processing for the DISTILLED and DEV models: select the DIST or DEV model in the loader node and switch to the dedicated DIST or DEV processing pipeline, so each model has its own processing.

    • DIST model pipeline: Standard Guider and Basic Scheduler, following the manual sigmas issued by Lightricks

    • DEV model pipeline: MultiModal Guider and LTX Scheduler + Distilled Lora on latent upscaler

  • Included a workflow version with the "RTX Video Super Resolution" node, which upscales videos at high speed (an NVIDIA RTX graphics card is required!)

Installation of RTX VSR is done via ComfyUI. If you have issues installing via ComfyUI, install manually:

  • 1. git clone https://github.com/Comfy-Org/Nvidia_RTX_Nodes_ComfyUI into ComfyUI/custom_nodes

  • 2. Go to comfyui_portable/python_embeded/ and run the command below to install the dependencies:

    • python.exe -m pip install -U --no-build-isolation nvidia-vfx --index-url https://pypi.nvidia.com

Tip: With the latest Comfy and LTX updates, processing got faster for me, so I can increase scale_by in the sampler node from 0.5 to 0.6 or higher for crisper videos with minor impact on render time.


V2.3 LTX-2.3 DEV & Distilled Video with Audio

Downloads for LTX 2.3:


Smaller GGUF Dev or Dist. models work as well (replace the Checkpoint Loader node with the Unet Loader node from this custom node: https://github.com/city96/ComfyUI-GGUF ):


V1.5 LTX-2 DEV Video with Audio including latest 🅛🅣🅧 Multimodal Guider

Image to Video and Text to Video workflows; both can use your own prompts or Ollama-generated/enhanced prompts.

Replaced the Guider node with the latest Multimodal Guider node; see more details in the WF notes or here: https://ltx.io/model/model-blog/ltx-2-better-control-for-real-workflows Previously we had one CFG parameter for both audio and video. With the Multimodal Guider, we can now tweak audio and video separately, with even more parameters...


V1.0 LTX-2 DEV Video with Audio:

Image to Video and Text to Video workflows with your own prompts or Ollama-generated/enhanced prompts.

  • Set up for the LTX2 Dev model.

  • Uses the Detailer Lora for better quality and the LTX tiled VAE to avoid OOM errors and visual grid artifacts.

  • 2-pass rendering (motion + upscale). The upscale pass uses the distilled and spatial upscale Loras.

  • Set up with the latest LTXVNormalizingSampler to increase video & audio quality.

  • Text to Video can use dynamic prompts with wildcards.
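Dynamic prompts commonly use `{a|b|c}` variant groups that are resolved randomly per generation. As an illustration only (not the actual implementation of any dynamic-prompt node), a minimal expander could look like this:

```python
import random
import re

def expand_dynamic_prompt(prompt: str, rng: random.Random) -> str:
    """Resolve {a|b|c} variant groups, innermost first, by picking one option each."""
    pattern = re.compile(r"\{([^{}]*)\}")
    while (m := pattern.search(prompt)):
        choice = rng.choice(m.group(1).split("|"))
        prompt = prompt[:m.start()] + choice + prompt[m.end():]
    return prompt

# Example: each run picks one color and one time of day
print(expand_dynamic_prompt("a {red|blue} car driving at {dawn|night}", random.Random()))
```

Seeding the `random.Random` instance makes the expansion reproducible, which mirrors how a fixed seed in the workflow reproduces the same video.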


I am using these startup parameters for ComfyUI to avoid OOM (my setup: 16 GB VRAM / 64 GB RAM):

--lowvram --cache-none --reserve-vram 6 --preview-method none

=> OBSOLETE with the latest Comfy updates, which brought better memory management.


Download LTX 2 Files: (Workflow V1.0 and V1.5 only)

Find the Model/Lora Loader nodes within the Sampler subgraph node.

- LTX2 Dev Model (dev_Fp8): https://huggingface.co/Lightricks/LTX-2/tree/main

- Detailer Lora: https://huggingface.co/Lightricks/LTX-2-19b-IC-LoRA-Detailer/tree/main

- Distilled (lora-384) & Spatial upscaler Lora: https://huggingface.co/Lightricks/LTX-2/tree/main

- VAE (already included in above dev_FP8 model, but needed if you go for GGUF models): https://huggingface.co/Lightricks/LTX-2/tree/main/vae

- Text encoder (fp8_e4m3fn): https://huggingface.co/GitMylo/LTX-2-comfy_gemma_fp8_e4m3fn/tree/main

- Image to Video Adapter Lora (more motion with I2V): https://huggingface.co/MachineDelusions/LTX-2_Image2Video_Adapter_LoRa/tree/main

- Ollama Models:

Save Location:

📂 ComfyUI/
└── 📂 models/
    ├── 📂 checkpoints/
    │   └── ltx-2-19b-dev-fp8.safetensors
    ├── 📂 text_encoders/
    │   └── gemma_3_12B_it_fp8_e4m3fn.safetensors
    ├── 📂 loras/
    │   └── ltx-2-19b-distilled-lora-384.safetensors
    ├── 📂 latent_upscale_models/
    │   └── ltx-2-spatial-upscaler-x2-1.0.safetensors
    └── 📂 Clip/
        └── ltx-2.3_text_projection_bf16.safetensors
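To check that everything landed in the right place before launching, a small sketch like the following can verify the layout above (the folder/file names come from the list above; the helper name and the `comfy_root` parameter are mine):

```python
from pathlib import Path

# Expected locations under ComfyUI/models, per the save-location tree above
EXPECTED_FILES = {
    "checkpoints": "ltx-2-19b-dev-fp8.safetensors",
    "text_encoders": "gemma_3_12B_it_fp8_e4m3fn.safetensors",
    "loras": "ltx-2-19b-distilled-lora-384.safetensors",
    "latent_upscale_models": "ltx-2-spatial-upscaler-x2-1.0.safetensors",
    "Clip": "ltx-2.3_text_projection_bf16.safetensors",
}

def missing_files(comfy_root: str) -> list:
    """Return the expected model files that are NOT present under <comfy_root>/models."""
    models = Path(comfy_root) / "models"
    return [
        str(models / folder / name)
        for folder, name in EXPECTED_FILES.items()
        if not (models / folder / name).is_file()
    ]

# Example: print anything still missing from your install
for path in missing_files("ComfyUI"):
    print("missing:", path)
```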


Custom Nodes used:


Ollama help:

  1. Install Ollama from https://ollama.com/

  2. Download a model: go to a model page, choose a model, then hit the copy button, e.g. https://ollama.com/huihui_ai/qwen3-vl-abliterated

  3. Open a terminal and paste the command, e.g.: ollama run huihui_ai/qwen3-vl-abliterated

  4. The model will be downloaded and can then be selected in the green Comfy node "Ollama Connectivity". Hit "Reconnect" to refresh.
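The "Ollama Connectivity" node talks to the local Ollama server over its REST API (default port 11434). If you want to test a model or prompt enhancement outside ComfyUI, a minimal sketch of calling that API directly (the endpoint and payload follow Ollama's documented /api/generate API; the function names are mine):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint (non-streaming)."""
    return {"model": model, "prompt": prompt, "stream": False}

def enhance_prompt(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the generated text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama server with the model pulled):
# print(enhance_prompt("huihui_ai/qwen3-vl-abliterated",
#                      "Rewrite as a cinematic video prompt: a fox in the snow"))
```

If the call fails with a connection error, Ollama is not running; start it (or run any `ollama run ...` command once) and retry.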