Stats | 597 |
Reviews | (52) |
Published | May 22, 2025 |
Trigger Words | They started looking at each other and moved closer, then they started kissing. |
Hash | AutoV2 F76D7F37E4 |
To use this LoRA, you'll need to find portraits of two people and use an image stitching tool to combine them into a single picture before running I2V. When stitching, pay attention to balancing the characters' head sizes and the distance between them, unless you want to see a big head kissing a small head or the video cutting off right before they kiss. You've got to set the stage for their kiss.
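If you don't have a stitching tool handy, the step above can be scripted. This is a minimal sketch using Pillow: it rescales both portraits to a common height (which helps keep the two heads at comparable sizes) and pastes them side by side with an optional gap so the faces aren't already touching. The file names and the `gap` value are placeholders, not anything required by the LoRA.

```python
from PIL import Image

def stitch_portraits(left_path, right_path, out_path, gap=0):
    """Stitch two portraits side by side at a common height.

    Resizing both images to the same height keeps the heads at
    roughly comparable scales; `gap` adds horizontal spacing
    (in pixels) between the two characters.
    """
    left = Image.open(left_path).convert("RGB")
    right = Image.open(right_path).convert("RGB")

    # Scale both images to the smaller of the two heights,
    # preserving each aspect ratio.
    h = min(left.height, right.height)
    left = left.resize((round(left.width * h / left.height), h))
    right = right.resize((round(right.width * h / right.height), h))

    # Paste onto a single white canvas, left character first.
    canvas = Image.new("RGB", (left.width + gap + right.width, h), "white")
    canvas.paste(left, (0, 0))
    canvas.paste(right, (left.width + gap, 0))
    canvas.save(out_path)
    return canvas

# Example (placeholder file names):
# stitch_portraits("person_a.png", "person_b.png", "stitched.png", gap=40)
```

You may still want to crop or pad the inputs by hand first so the heads sit at similar vertical positions; matching heights alone won't fix a portrait framed from the waist up against a tight headshot.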
You can also try using full-body shots, since I've included training data for them, but I haven't tested this myself. It'd be great if someone could share examples of how it works with full-body images.
The reason I made this LoRA is that, without it, getting characters to kiss is really tough, especially when they need to move toward each other across the frame while keeping the background relatively stable.
I stopped training when I felt it was about right and picked the checkpoint with the highest epoch number. For testing, I stitched together some real photos I found online to check the LoRA's performance, but I don't think I can share those; it might get me into trouble. Still, I can tell you the results were seriously impressive.
Trigger words:
“They started looking at each other and moved closer, then they started kissing.”
The example prompts and trigger words were just made up randomly the first time I used them—I didn’t even filter or refine them. You can use your own prompts instead.
The main purpose of trigger words is to help align the pixel vector paths with the motion distribution probabilities during training. For users, they’re not really that important.