HxSVD - HarrlogosxSVD txt2video ComfyUI Workflow - Generate and Animate Text with SVD! (v2 OUT NOW!)

HxSVD - HarrlogosxSVD txt2img2video workflow for ComfyUI

Updating the guide momentarily!

HxSVD is a custom-built ComfyUI workflow that generates batches of 4 txt2img images and, for each batch, lets you individually select any of them to animate with Stable Video Diffusion.

1. Requirements

To use this workflow you will need:

ComfyUI

ComfyUI Node Manager

HarrlogosXL - SDXL LoRA

SVD XT - SVD model

This will work with just about any XL checkpoint, though I highly recommend Dynavision and StarlightXL.
It will even work with most other LoRAs.

2. Instructions

1. The first render is done like any other, and will place 4 images in the preview chooser.

(Side note: I highly recommend enabling the Auto Queue feature in ComfyUI, as the preview chooser node will then control the queue.
To send more than one image from a batch through, just deselect Auto Queue before progressing it.)

2. Click any image you would like to send through to SVD to be animated, highlighting it in green:

If you've chosen to animate an image, select "Progress selected image". This will send your selection through img2video, and generate the next batch of images.
If you would like to send another image from the same batch, de-select the Auto Queue option before you hit Progress, and turn it back on before sending the final image.

Otherwise, select "Cancel current run" to generate the next batch.

3. Important Notes -

For the best experience, install ComfyUI Custom Scripts, which adds a gallery along the bottom of your UI and populates it with every image you've generated that session. This way, you can manually save any images you'd like, drag and drop any image right back in to recover its metadata, and simply reload the page to clear the gallery.

This is the reason for the other Preview node in the workflow, as that's what places the images in the on-screen gallery.

You can always bypass this node, or swap it for a Save node if you'd like to auto-save all your images.

The workflow saves the SVD animations in MP4 format by default. I personally prefer to end up with a GIF, but exporting straight to GIF severely lowers the quality. If you export to MP4 and then convert to GIF, you wind up with a higher-quality animation and a small file size.
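The MP4-to-GIF conversion above can be done with ffmpeg. This is just one hedged way to do it: the filenames are placeholders, and the fps/scale values are assumptions to tune to taste. The palette filters give noticeably better GIF quality than a direct conversion:

```shell
# Hypothetical filenames; adjust fps and scale to taste.
# palettegen/paletteuse builds a custom 256-color palette for the GIF,
# which preserves far more quality than ffmpeg's default GIF dithering.
ffmpeg -i svd_output.mp4 \
  -vf "fps=12,scale=512:-1:flags=lanczos,split[s0][s1];[s0]palettegen[p];[s1][p]paletteuse" \
  -loop 0 svd_output.gif
```

`-loop 0` makes the GIF loop forever, which usually suits these short SVD animations.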

Make sure to stop your run and clear any existing jobs in the queue before switching workflows. If you have a run paused at the image chooser node and switch to another workflow, that job can get stuck in your queue, and there is no way to operate ComfyUI without restarting your cmd window.
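If a job does get stuck, you may be able to avoid a full restart by calling ComfyUI's local HTTP API directly. This sketch assumes the default server address (127.0.0.1:8188) and the /interrupt and /queue endpoints exposed by ComfyUI's server; if your install differs, restarting as described above remains the reliable fallback:

```python
# Hedged sketch, assuming a default local ComfyUI server at 127.0.0.1:8188.
import json
import urllib.request

BASE = "http://127.0.0.1:8188"  # assumption: adjust to your host/port

def interrupt_current_run():
    # POST /interrupt stops the job that is currently executing
    req = urllib.request.Request(f"{BASE}/interrupt", method="POST")
    urllib.request.urlopen(req)

def clear_queue():
    # POST /queue with {"clear": true} drops all pending jobs
    payload = json.dumps({"clear": True}).encode()
    req = urllib.request.Request(
        f"{BASE}/queue",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)
```

Calling `interrupt_current_run()` followed by `clear_queue()` should leave the queue empty without touching the server process.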

4. Using Harrlogos -

There is a detailed guide on how to use Harrlogos, the prompt structure, built-in activation words, and tips/tricks to get better images and more accurate text on the HuggingFace model card, or in the description on the CivitAI model page. There you can also find a massive gallery of images people have created with the model, many of which include the prompt/settings to help you learn new ideas.

5. Using SVD -

Anything I know about SVD is just what I've gathered from my experience in using it. It is largely unpredictable, but there are ways to assist it in producing more interesting animations. The settings that come with the workflow are what I've found to work best for this purpose, but of course feel free to change them if that works better for you.
The two main factors are the init image and the seed of the SVD pass. The content of the init image determines what SVD uses as the "layers" of the image for any depth/parallax effects. You can also give SVD specific subjects, and it will often do interesting things with them. This has worked with things like cars, planes, fire, water/waves, hair in the wind, etc. As you work with it, you'll start to see what kinds of init images give it something to work with.
Secondly, there's the seed of the SVD pass. This determines what motion/effect happens with the layers from the image. There are effectively two classes of effects: what I call "zoomies", and everything else. "Zoomies", for all intents and purposes, are failures in my eyes. They either do a simple zoom in and out or a camera pan, basically just a perspective movement of the image as a whole. You don't need SVD to do this, so it's a waste of its ability. The goal is to achieve anything other than this. What I like to do is find a seed that makes a cool animation and fix the SVD sampler on that value. I've found that this greatly increases the chance of getting more interesting animations on average. Here are a couple of seeds I've had luck with before:
If you don't get a good animation, you can always send the image back through SVD with a new seed, and get a new effect.

6. Examples (For v1 ONLY)-

Here are 5 different templates to get you started with creating images with Harrlogos.

All the PNGs are in the Example Templates zip attached to this article, so you can just drag and drop the images into your ComfyUI window and the metadata will populate all the parameters. Click the links to see how this workflow animates them with SVD:

1. Rainbow Pixel Art template
2. Comic Book in Space template
3. Graffiti King template
4. Goosebumps Style template
5. Computer Chip template

Version 2 templates coming soon!

7. Special Thanks -

Wanted to say thanks to everybody who helped me test the workflow along the way, especially my homies over on the Banodoco Discord server!
To Daflon (daflon6100 on Discord - u/Silly_Goose6714 on Reddit) for helping me build the extra features into v2; it would not have been possible without their assistance.
CivitAI for giving me a place to distribute the workflow.

Make sure to tag me so I can see all the incredible things you guys make with this.

Ko-fi - TikTok - Instagram - Patreon