Galleries will be linked from this article soon; so far we have showcased this effect in live shows with the LTXV and Hunyuan video generation models.
Consistent Character/Scene Video Variation
Often when generating video with AI models we hit a "good seed" with a prompt, then want only a very small change: a character's expression or lip movement, perhaps a hand gesture. This is where Consistent Video Variations can be useful.
Using "String Painter" (DJZ-Nodes), an older node originally created for randomized inpainting, we can achieve this. So far it works well with Mochi 1 by Genmo, LTXV from Lightricks, and Tencent's Hunyuan Video. (workflows linked)
All you do is generate normally until you find an output you like, then:
1. Fix the seed
2. Inject a 32-bit hex string after your prompt
3. Randomize the hex string and regenerate the video to create variations
- The attention weight of your prompt relative to the hex string affects the strength of the variation
- The length of the hex string also affects the strength of the variation
- The difference between one hex string and the next also affects the variation
- Increment HEX mode, as opposed to Random HEX mode, produces the weakest variation
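The steps above can be sketched in plain Python. This is a minimal illustration of the idea behind the two modes, not the actual String Painter node code: `hex_suffix` is a hypothetical helper that produces the hex string appended after the prompt, in either "random" mode (a fresh random value each generation, strongest variation) or "increment" mode (the previous value plus one, weakest variation).

```python
import random

def hex_suffix(length=8, mode="random", seed=None, step=0):
    """Generate a hex string to append after the prompt.

    mode="random": a pseudo-random value per call (larger hex-to-hex
    difference, so stronger variation between generations).
    mode="increment": seed value plus a step counter (minimal hex-to-hex
    difference, so the weakest variation).
    """
    if mode == "increment":
        value = (seed or 0) + step
    else:
        value = random.Random(seed).getrandbits(length * 4)
    # Zero-pad to the requested length; longer strings carry more
    # tokens relative to the prompt and tend to vary the output more.
    return format(value, f"0{length}x")[-length:]

# Hypothetical usage: keep the sampler seed fixed, vary only the suffix.
prompt = "a woman smiling at the camera"
for step in range(3):
    print(f"{prompt} {hex_suffix(8, mode='increment', seed=0xCAFE, step=step)}")
```

Because the sampler seed stays fixed, only the trailing hex tokens change between runs; how strongly they pull the output depends on the prompt-versus-suffix attention weighting noted above.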