LTX 2.3 Three-Image Reference Video Workflow

Updated: May 11, 2026

Tag: character

File: Archive, 27.4 KB (1 variant available)

Type: Workflows
Stats: 95 Reviews
Published: May 11, 2026
Base Model: LTXV 2.3
Hash: AutoV2 4028B40A43

Creator: AIKSK

This workflow is designed for LTX 2.3 three-image reference video generation, giving creators a controlled way to turn multiple visual references into a coherent cinematic video. Instead of relying on only one source image, this workflow uses three separate reference images to guide the final result, making it more practical for character consistency, product presentation, scene control, and short-form AI video production.

The core idea is multi-reference visual anchoring. A single image often cannot provide enough information for a stable video: it may show the character clearly but not the product, the lighting but not the intended camera angle, or the scene but not the subject's identity. By using three reference images, this workflow gives the model more visual context. The first image can define the main character or subject; the second can define the product, object, clothing, or key design element; and the third can provide the background, mood, color palette, or scene atmosphere.
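The three-slot idea can be sketched as a simple role map. The role names (`subject`, `object`, `scene`) and the helper below are illustrative only; inside the actual ComfyUI workflow, the three images are wired into separate conditioning inputs rather than handled in Python:

```python
# Illustrative sketch: map each of the three reference images to the
# visual role it anchors. Role names are placeholders, not node names
# from the real LTX 2.3 workflow.
REFERENCE_ROLES = ("subject", "object", "scene")

def build_reference_set(subject_path, object_path, scene_path):
    """Return a role -> image-path mapping, rejecting empty slots,
    since the workflow expects all three references to be present."""
    refs = dict(zip(REFERENCE_ROLES, (subject_path, object_path, scene_path)))
    missing = [role for role, path in refs.items() if not path]
    if missing:
        raise ValueError(f"missing reference image(s) for: {missing}")
    return refs
```

Keeping the roles explicit like this mirrors how the workflow is used in practice: each slot answers a different question (who, what, where), so swapping one image changes only one aspect of the final video.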

This makes the workflow especially useful for commercial-style video generation. For example, creators can use it to build AI influencer clips, beauty product showcases, fashion previews, character-driven advertisements, cinematic product reveals, short social media videos, and Civitai / RunningHub demonstration assets. The prompt can then act as the director, telling the model how the three references should be combined and how the action should develop over time.

The workflow is built on an LTX 2.3 video generation pipeline: image reference guidance, prompt conditioning, video latent creation, sampling, decoding, and final video export. In a typical use case, the reference images are resized and prepared before being passed into the video generation stage. The model then uses these images as guide signals while following the written prompt. This helps the final output stay closer to the intended visual design instead of drifting into a random text-only result.
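The resize-and-prepare step can be sketched as a small dimension calculation. The target long side (768) and the divisibility constraint (32) below are common latent-grid conventions for video diffusion models, not values confirmed for LTX 2.3; treat both as placeholders and match whatever the workflow's resize nodes actually use:

```python
def prepare_reference_size(width, height, target_long=768, multiple=32):
    """Scale an image so its long side is about `target_long` while
    both sides stay divisible by `multiple` (a typical latent-grid
    constraint; exact values for LTX 2.3 are an assumption here)."""
    scale = target_long / max(width, height)
    # Snap each side to the nearest multiple, never below one tile.
    new_w = max(multiple, round(width * scale / multiple) * multiple)
    new_h = max(multiple, round(height * scale / multiple) * multiple)
    return new_w, new_h
```

For example, a 1920x1080 reference would be prepared as 768x448: the aspect ratio is approximately preserved while both sides land on the 32-pixel grid the sampler expects.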

The strength of this workflow is not just reference fusion, but reference fusion for video. In image generation, a reference mismatch may only affect one frame. In video generation, that mismatch can become flicker, identity drift, unstable clothing, object deformation, or inconsistent backgrounds. By giving the workflow three visual anchors, creators improve the odds that the video keeps a stable subject, a more coherent design language, and stronger visual continuity.

This workflow is also suitable for PromptRelay-style video planning. The global prompt can define the full scene and visual rules, while local prompt segments can describe the action changes across time. This makes the final video easier to control, especially when the creator wants a product to remain visible, a character to perform a specific action, or the camera to move in a cinematic way.
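The global-plus-local prompt idea can be sketched as a frame-range schedule. This helper is hypothetical; the actual PromptRelay-style nodes handle timing inside the workflow, and the even frame split below is just one simple policy:

```python
def build_prompt_schedule(global_prompt, segments, total_frames):
    """Split `total_frames` evenly across local action segments and
    prepend the global scene prompt to each, yielding a list of
    (start_frame, end_frame, full_prompt) tuples. Hypothetical sketch
    of PromptRelay-style planning, not the workflow's internal API."""
    per = total_frames // len(segments)
    schedule = []
    for i, local in enumerate(segments):
        start = i * per
        # Give the last segment any leftover frames from integer division.
        end = total_frames if i == len(segments) - 1 else start + per
        schedule.append((start, end, f"{global_prompt}, {local}"))
    return schedule
```

Structuring prompts this way keeps the scene rules (lighting, product visibility, camera style) constant across the whole clip while letting the action evolve segment by segment.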

In short, this is a practical LTX 2.3 three-image reference video workflow for creators who want stronger control than single-image video generation, but a simpler setup than larger multi-reference pipelines. If you want to see how the three references are prepared, how the prompt controls the video, and how the final cinematic output is generated, watch the full tutorial from the YouTube link above.

⚙️ Try the Workflow Online

👉 Workflow: https://www.runninghub.ai/post/2052698977556021250/?inviteCode=rh-v1111

Open the link above to run the workflow directly online and view the generation results in real time.

If the results meet your expectations, you can also deploy it locally for further customization.

🎁 Fan Benefits: Register now to get 1000 points, plus 100 daily login points — enjoy 4090-level performance and 48 GB of powerful compute!

📺 Bilibili Updates (Mainland China & Asia-Pacific)

If you are in Mainland China or the Asia-Pacific region, you can watch the video below for workflow demos and a detailed creative breakdown.

📺 Bilibili Video: https://www.bilibili.com/video/BV1DERQBeEm1/

I will continue updating model resources on Quark Drive:

👉 https://pan.quark.cn/s/20c6f6f8d87b

These resources are mainly prepared for local users, making creation and learning more convenient.
