I'm collecting the relevant stuff I find here, since at the moment it's spread all over the place and it's hard to find what actually works.
If you have further suggestions, add them in the comments and let's try to centralize.
1. LTX Simple Distilled Image to Video
This workflow allows you to quickly iterate through gens in relatively low resolution and then upscale once you find a gen that you like.
I used green to color everything that might be of interest to edit, and also grouped the most common settings together.
It should work with any version of distilled. If you're running into memory issues, you can try loading the model with fp8_e4m3fn weights.
How to use:
Set the red toggle to false while you iterate through gens.
Once you are happy with your gen, enable the refiner and, in the seed node, click the recycle icon to reuse the previously generated seed.
There is a sigma cheat sheet in the refiner section. More aggressive sigmas can help fix motion issues from the first gen, but your character might lose identity; stick with the refine preset unless you're trying to fix something.
Very important!
For this to work reliably, you NEED a GOOD prompt. Here's how to get one:
Take the starting image and drop it into any GPT.
Paste the prompt you want and tell it: "Enrich this prompt starting from the provided image"
You should get something about the length of a raging Twitter response, with the image details baked in (clothing, background, character description: hair, etc.).
You can of course use a VL node in comfy to achieve this as well:
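If you'd rather script the enrichment step outside Comfy, here's a minimal sketch of what a VL request looks like. It builds an OpenAI-style chat "vision" payload that sends the start image plus your short motion prompt; the model name, instruction wording, and `gpt-4o` default are assumptions, so swap in whatever VL model or node backend you actually use.

```python
import base64

# Assumed instruction wording, matching the GPT step above.
ENRICH_INSTRUCTION = (
    "Enrich this prompt starting from the provided image. Bake in the image "
    "details: clothing, background, character description (hair, etc.)."
)

def build_enrich_request(image_bytes: bytes, user_prompt: str,
                         model: str = "gpt-4o") -> dict:
    """Build an OpenAI-style vision chat payload (hypothetical defaults)."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"{ENRICH_INSTRUCTION}\n\nPrompt: {user_prompt}"},
                # Image goes in as a base64 data URL alongside the text.
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    }

# Example with dummy image bytes; in practice, read your start frame from disk.
req = build_enrich_request(b"\x89PNG-dummy", "the woman turns and smiles")
```

You would pass this dict to the chat completions endpoint of your provider and use the model's reply as the prompt for the workflow.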
