Animatediff 3 vid2vid Openpose

Type: Workflows
Published: Mar 3, 2025
Base Model: SD 1.5
Hash: AutoV2 DD71722854
Creator: Arunderan

Howto

Load the input video, adjust the prompt, size, and length, then generate.

Description

This is a video-to-video prompt-traveling workflow that uses OpenPose to extract the human motion from an input video, so it has a very specific purpose. It works with AnimateDiff 3. You choose different prompts for different keyframes, and in the resulting video the prompts morph into each other.
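Prompt traveling means assigning a prompt to each keyframe by frame index, and frames in between are interpolated. A minimal keyframe schedule (assuming a scheduling node such as FizzNodes' BatchPromptSchedule; the prompts and frame numbers here are only illustrative) might look like:

```
"0":  "a knight in silver armor, walking",
"24": "a chrome robot, walking",
"48": "a skeleton warrior, walking"
```

Each key is the frame at which the new prompt takes over; the morphing between prompts happens across the frames in between.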

AnimateDiff and Stable Diffusion 1.5, which are used in conjunction, were trained at a resolution of 512 pixels, so you should not go much higher than that in the generation size; the result will become inconsistent. What you can try instead is upscaling afterwards.

The workflow comes with upscaling built in, but I recommend upscaling in a separate workflow with a second KSampler. See this article:

https://www.tomgoodnoise.de/index.php/video-upscaling-in-comfyui/

Time

The example, with a length of six seconds and a resolution of 640×480, finished in around 25 minutes on an RTX 4060 Ti with 16 GB of VRAM. The relatively long generation time comes from the second KSampler in the chain, which you can leave out, but it adds some quality. Higher upscaling factors increase the time even more, so this workflow is not the fastest.
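To get a feel for the numbers above, here is a small sketch that converts clip length into a frame count and roughly extrapolates generation time. The 8 fps output rate and the linear time scaling are assumptions for illustration, not measurements from the workflow:

```python
# Rough frame-count and time estimate for this workflow.
# Assumptions (not from the workflow itself): output at 8 frames
# per second, and generation time scaling roughly linearly with
# clip length.

FPS = 8                 # assumed output frame rate
BASELINE_SECONDS = 6    # clip length from the example above
BASELINE_MINUTES = 25   # measured time on the RTX 4060 Ti

def estimate_minutes(clip_seconds: float) -> float:
    """Linear estimate: scale the measured baseline by clip length."""
    return BASELINE_MINUTES * clip_seconds / BASELINE_SECONDS

frames = BASELINE_SECONDS * FPS
print(frames)                # 48 frames for the 6 s example
print(estimate_minutes(12))  # ~50 minutes for a 12 s clip
```

Treat the extrapolation as a ballpark only; in practice VRAM pressure and the second KSampler make longer clips scale worse than linearly.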

Requirements

These workflows were originally created on a card with 8 GB of VRAM, but I ran into out-of-memory errors too often. I highly suggest a card with at least 12 GB of VRAM.