Looped i2v - LTXV Frame Interpolation
-------------------------------------------------------------------------------------------------------------
How it works:
It takes your single input image,
resizes it,
alters the image based on the effects you select (flip X axis, flip Y axis, flip X+Y axis, zoom by cropping, pixelate),
generates 2 videos using frame interpolation (1st half going forward, 2nd half going backward),
applies aftereffects based on the selected options,
and stitches the 2 videos together, creating a continuous loop (sketched in the code below).
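For reference, here is a minimal Python sketch of the alter-and-stitch idea using Pillow. This is not the workflow's actual node code: the frame interpolation itself is done by the LTXV model inside ComfyUI, and the effect parameters below (crop amount, pixel block size) are illustrative assumptions.

    from PIL import Image

    def apply_effect(img: Image.Image, effect: str) -> Image.Image:
        # Apply one of the selectable alterations to the input image.
        if effect == "flip_x":
            return img.transpose(Image.FLIP_LEFT_RIGHT)
        if effect == "flip_y":
            return img.transpose(Image.FLIP_TOP_BOTTOM)
        if effect == "flip_xy":
            return img.transpose(Image.ROTATE_180)
        if effect == "zoom_crop":
            w, h = img.size
            # Keep the middle 75% and scale back up (zoom amount is an assumption).
            box = (w // 8, h // 8, w - w // 8, h - h // 8)
            return img.crop(box).resize((w, h), Image.LANCZOS)
        if effect == "pixelate":
            w, h = img.size
            # Downscale, then upscale with nearest-neighbour (block size is an assumption).
            small = img.resize((max(1, w // 16), max(1, h // 16)), Image.NEAREST)
            return small.resize((w, h), Image.NEAREST)
        return img

    def stitch_loop(forward_frames: list, backward_frames: list) -> list:
        # forward_frames: starter image -> altered image (video 1).
        # backward_frames: altered image -> starter image (video 2).
        # Drop backward's first and last frames so neither the join nor the
        # wrap-around point repeats a frame, giving a seamless continuous loop.
        return forward_frames + backward_frames[1:-1]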
-------------------------------------------------------------------------------------------------------------
Model links:
Load Checkpoint node -- ltx-video-2b-v0.9.5.safetensors -- https://huggingface.co/Lightricks/LTX-Video/blob/main/ltx-video-2b-v0.9.5.safetensors -- put it in your ComfyUI/models/checkpoints/ folder.
Load CLIP node -- t5xxl_fp16.safetensors -- https://huggingface.co/Comfy-Org/mochi_preview_repackaged/blob/main/split_files/text_encoders/t5xxl_fp16.safetensors -- put it in your ComfyUI/models/text_encoders/ folder.
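If you would rather fetch the files from a script, here is a minimal sketch using the huggingface_hub Python package (using a script at all is my assumption -- the browser links above work just as well). Adjust the ComfyUI paths to match your install.

    import shutil
    from pathlib import Path
    from huggingface_hub import hf_hub_download  # pip install huggingface_hub

    def fetch(repo_id: str, filename: str, dest_dir: str) -> None:
        # Download into the Hub cache, then copy into the ComfyUI models folder.
        cached = hf_hub_download(repo_id=repo_id, filename=filename)
        dest = Path(dest_dir)
        dest.mkdir(parents=True, exist_ok=True)
        shutil.copy(cached, dest / Path(filename).name)

    # Paths below assume ComfyUI sits in the current directory.
    fetch("Lightricks/LTX-Video", "ltx-video-2b-v0.9.5.safetensors",
          "ComfyUI/models/checkpoints")
    fetch("Comfy-Org/mochi_preview_repackaged",
          "split_files/text_encoders/t5xxl_fp16.safetensors",
          "ComfyUI/models/text_encoders")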
-------------------------------------------------------------------------------------------------------------
Custom Nodes:
All nodes are installable through ComfyUI Manager.
One thing you may need to search for manually is ComfyUI-LTXVideo: type ComfyUI-LTXVideo into the Manager search if it does not get auto-detected. Make sure to install the one described as "ComfyUI nodes for LTXVideo model" by ✅ Lightricks, and not the one with "lora" at the end (a manual fallback is sketched below). The rest of the nodes should be auto-detected in the Manager when you click Install Missing Custom Nodes.
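If the Manager cannot find it at all, a manual install is usually just a git clone into custom_nodes. A minimal sketch, assuming git is on your PATH and that the node pack lives at the Lightricks GitHub URL below (verify the URL on the node's Manager page before running):

    import subprocess

    # Clone the node pack directly into ComfyUI's custom_nodes folder,
    # then restart ComfyUI so the new nodes are picked up.
    subprocess.run(
        ["git", "clone", "https://github.com/Lightricks/ComfyUI-LTXVideo",
         "ComfyUI/custom_nodes/ComfyUI-LTXVideo"],
        check=True,
    )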
-------------------------------------------------------------------------------------------------------------
Use:
Set your inputs and choose your starter image.
Set a positive and a negative prompt.
Set effects and aftereffects in the control panel area.
Press Queue.
All editable options are in the Input / control panel area.
You can lower the cfg in the STG Guider nodes to 1 if you find generation takes too long on your machine.
You can try increasing the steps in the LTXVScheduler for potentially better results.
To try a different seed on the same image, alter the seed in the control panel.
Additional tips from testing:
Using a starter image with larger dimensions gives much higher quality in the video output, even when the image is scaled down. From my testing, a 1920(w)x1080(h) starter image seems optimal for high-quality video output. If your computer can handle it, pushing the steps up to 100 for both videos gets incredibly good results.
Prompting really matters in this workflow. A generic or short prompt will result in a simple camera pan, while a motion- or action-focused prompt allows character and movement to flow more naturally. I would recommend putting your starter image into ChatGPT and asking it to describe what the video would look like.