I generate the input image with a PDXL model, but you can use your favorite t2i model. I use v-prediction to maximize creativity. My favorite noise chain for t2i is 1 step of fe_heun3, 1 step of SamplerSonarDPMPPSDE (student-t), then 2 steps of lcm (uniform). For a better first step you can use the SamplerDPMAdaptative node that's left alone in the workflow; it's optimized to go fast, but you can play with it. For the second step you can extend the lcm (uniform) for smoother but less creative results, or add a SamplerRES_Momentumized (highres-pyramid) and finish with 2 steps of lcm (uniform). You can also try the ClownSampler node for step 2 to get a different result. The lcm (uniform) can be swapped for the ClownSampler as well, but I really like what lcm (uniform) does.

Now for LTX Video: you don't need the sampler chain, but if you want the best out of the model, experimenting is your best bet. Also, the CFG modulates the movement, the consistency, and the artifacts, so you may as well experiment with a different CFG for each third of the generation. That's also a reason for the split sigmas, which improve the generation a lot. Two rough sketches of these ideas follow below.
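To make the chain idea concrete, here's a minimal Python sketch (not the actual workflow) of splitting one sigma schedule so that each segment gets its own sampler and CFG. The sampler names, step counts, and CFG numbers are placeholders taken from the description above, not real node calls; in ComfyUI itself this is the kind of thing the SplitSigmas and SamplerCustom-style nodes handle.

```python
import numpy as np

def karras_sigmas(n, sigma_min=0.03, sigma_max=14.6, rho=7.0):
    """Standard Karras schedule: n steps -> n + 1 sigmas, high to low."""
    ramp = np.linspace(0.0, 1.0, n + 1)
    min_r, max_r = sigma_min ** (1 / rho), sigma_max ** (1 / rho)
    return (max_r + ramp * (min_r - max_r)) ** rho

# Hypothetical per-segment config mirroring the chain above:
# (steps, sampler label, cfg). Labels are placeholders, not node calls.
SEGMENTS = [
    (1, "fe_heun3",                   7.0),
    (1, "sonar_dpmpp_sde_student_t",  6.0),
    (2, "lcm_uniform",                4.0),
]

sigmas = karras_sigmas(sum(steps for steps, _, _ in SEGMENTS))

# Walk the schedule, handing each sampler its own slice of sigmas.
i = 0
for steps, sampler, cfg in SEGMENTS:
    seg = sigmas[i : i + steps + 1]  # overlap one sigma at each boundary
    print(f"{sampler}: cfg={cfg}, sigmas={np.round(seg, 3)}")
    i += steps
```

The one-sigma overlap at each boundary is what keeps the segments joined into a single continuous denoise instead of three unrelated runs.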
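And a sketch of the per-third CFG idea for LTX Video. The CFG values here are made up; the point is just mapping each step of the schedule onto one of three equal segments, which is the same split the split-sigmas nodes give you.

```python
def cfg_for_step(step: int, total_steps: int,
                 thirds: tuple = (3.0, 2.5, 2.0)) -> float:
    """Map a step index to one of three equal CFG segments."""
    segment = min(step * 3 // total_steps, 2)
    return thirds[segment]

if __name__ == "__main__":
    # A 9-step run: steps 0-2 use 3.0, 3-5 use 2.5, 6-8 use 2.0.
    for s in range(9):
        print(s, cfg_for_step(s, 9))
```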