ComfyUI - Text to 2 Keyframes to Video

I used ComfyUI to generate a single image of a character, using the following prompts:


Character Generation:


Positive prompts: Beautiful, pink hair, realistic, anime, big boobs, regal, full body, grey background, prominent nipples, naked
Negative prompts: blurry, weird hands, ugly, missing arm, missing limb, monochrome

Steps: 21

Sampler: Euler

Seed: 12
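The character-generation step can be sketched in ComfyUI's API ("prompt") JSON format, written here as a Python dict. The sampler settings match the values above; the node IDs, checkpoint name, resolution, and CFG value are placeholders I've assumed, not values from the post.

```python
# Sketch of the txt2img character-generation stage in ComfyUI API-JSON form.
# Node IDs, checkpoint name, resolution, and cfg are assumptions.
character_gen = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd15_checkpoint.safetensors"}},  # assumed name
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1], "text": "<positive prompt from above>"}},
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1], "text": "<negative prompt from above>"}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 512, "height": 768, "batch_size": 1}},  # assumed size
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["4", 0],
                     "seed": 12, "steps": 21, "cfg": 7.0,  # cfg assumed
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
}
```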

This generated the following character:

IPAdapter:

I then fed this character into IPAdapter with a weight of 3.0 using the PLUS adapter, and patched my model with the resulting embeddings.
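The IPAdapter step might look like the following in the same API-JSON style. The node class names follow the common ComfyUI_IPAdapter_plus pack and vary by version; the node IDs and the preset string are assumptions, while the 3.0 weight comes from the post.

```python
# Sketch of the IPAdapter stage: load the PLUS adapter, feed in the generated
# character image, and patch the diffusion model with its embeddings.
# Class names follow ComfyUI_IPAdapter_plus and may differ by version.
ipadapter_stage = {
    "10": {"class_type": "IPAdapterUnifiedLoader",
           "inputs": {"model": ["1", 0],                    # base model (assumed ID)
                      "preset": "PLUS (high strength)"}},    # assumed preset string
    "11": {"class_type": "IPAdapter",
           "inputs": {"model": ["10", 0], "ipadapter": ["10", 1],
                      "image": ["9", 0],   # character image node (assumed ID)
                      "weight": 3.0,       # weight from the post
                      "start_at": 0.0, "end_at": 1.0}},
}
```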


Keyframe creation:

I then fished through some prompts to generate these keyframes:

Keyframe 1:
Keyframe 2:

ControlNet:

I then added two Canny lineart ControlNets (one per keyframe image) and two Tile ControlNets (also one per keyframe image) with weights of 0.4 and 0.5, plus keyframe interpolation to travel between the ControlNet influences.

AnimateDiff:

I used the standard mm_sd_v15_v2.ckpt motion model, generated 16 empty latent images, and fed them into a KSampler with the parameters mentioned above.
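In the same API-JSON style, the AnimateDiff stage loads the motion model, patches it onto the model, and samples a batch of 16 empty latents. The loader class name follows the ComfyUI-AnimateDiff-Evolved pack and is version-dependent; node IDs, resolution, CFG, and the beta schedule are assumptions, while the motion model name, batch size of 16, and sampler settings come from the post.

```python
# Sketch of the AnimateDiff stage: motion model + 16-frame latent batch.
# Class names follow ComfyUI-AnimateDiff-Evolved; IDs and cfg are assumed.
animatediff_stage = {
    "20": {"class_type": "ADE_AnimateDiffLoaderGen1",     # version-dependent name
           "inputs": {"model": ["11", 0],                  # patched model (assumed ID)
                      "model_name": "mm_sd_v15_v2.ckpt",
                      "beta_schedule": "sqrt_linear (AnimateDiff)"}},  # assumed
    "21": {"class_type": "EmptyLatentImage",
           "inputs": {"width": 512, "height": 768, "batch_size": 16}},
    "22": {"class_type": "KSampler",
           "inputs": {"model": ["20", 0],
                      "positive": ["2", 0], "negative": ["3", 0],  # assumed IDs
                      "latent_image": ["21", 0],
                      "seed": 12, "steps": 21, "cfg": 7.0,  # cfg assumed
                      "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
}
```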

This was done without any uploaded images, but you could use this workflow to embed pre-existing characters via the IPAdapter instead of creating one during the flow. You could also feed pre-created keyframes into the workflow instead of fishing for them and generating them within it. I did create some custom group nodes for the character and keyframe generation, but those are just basic txt2img prompts.

If someone can point me to where I can upload just my JSON workflow, I can share the full ComfyUI JSON workflow so you can run it yourself within ComfyUI. I believe the JSON workflow is embedded in the shared PNG and can be imported using the https://github.com/pythongosssss/ComfyUI-Custom-Scripts custom plugin.
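ComfyUI writes the workflow into the PNG's text metadata chunks (typically "workflow" for the graph and "prompt" for the API form), so it can also be recovered directly with Pillow. A minimal sketch:

```python
# Recover the ComfyUI workflow embedded in a shared PNG's metadata.
import json
from PIL import Image

def extract_workflow(png_path: str):
    """Return the embedded ComfyUI workflow as a dict, or None if absent."""
    info = Image.open(png_path).info          # PNG text chunks land here
    raw = info.get("workflow") or info.get("prompt")
    return json.loads(raw) if raw else None
```

The returned dict can then be saved as a .json file and loaded in ComfyUI like any other workflow.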
