
Tutorial for the 'Sprite Sheet Maker' Workflow by 'TheGeekyGhost'


Dec 30, 2025


Background / Why this tutorial exists

This workflow comes with very little usable documentation.
https://civitai.com/models/448101?modelVersionId=591027

Even ChatGPT could not really help. For TheGeekyGhost the workflow logic is obvious, so from his perspective there was not much to explain.

For me, however, it took two full days to fully understand how it actually works.
Since other users also asked for a tutorial, I decided to write one myself, with images and clear steps.


Required additional resources (step by step)


Pose reference pack (linked in the original WF description)

You need to download the pose images linked in the workflow description.

https://civitai.com/models/56307?modelVersionId=63973

Inside that download you will find two main folders:

  • 512 by 512 crops

  • full sized base images

Each of these contains 4 subfolders:

  • 2 running animations

  • 2 walking animations

Inside those, you will find direction-based subfolders, for example:

  • L = Left

  • R = Right

  • B = Back
    etc.

👉 Take some time to explore the folder structure; it is important to understand how it is organized.

⚠️ This is critical
The IP Adapter reads the motion skeletons from here.
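Since getting these folders right matters so much, here is a small sketch that lists the direction folders the workflow will look for. The folder names ("512 by 512 crops", "full sized base images", "woman walking bones") are taken from this tutorial's examples, and the helper names are my own; adjust them to match what you actually see after unzipping.

```python
# Sketch: enumerate the pose-pack folder layout described above.
# Folder and animation names are assumptions taken from this tutorial;
# check them against your unzipped download.
from pathlib import Path

TOP_LEVEL = ["512 by 512 crops", "full sized base images"]
DIRECTIONS = ["L", "R", "B"]  # Left, Right, Back (the pack may contain more)

def expected_pose_dirs(root: str, animation: str = "woman walking bones"):
    """Build the direction folders the workflow expects, e.g.
    <root>/512 by 512 crops/<animation>/L."""
    root_path = Path(root)
    return [root_path / top / animation / direction
            for top in TOP_LEVEL for direction in DIRECTIONS]

def missing_pose_dirs(root: str, animation: str = "woman walking bones"):
    """Return the expected folders that do not exist yet."""
    return [p for p in expected_pose_dirs(root, animation) if not p.is_dir()]
```

Running `missing_pose_dirs("path/to/unzipped/pack")` before you start is a quick way to catch a wrong unzip location.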


IP Adapter – required models

You need several IP Adapter related models.

Official repo and instructions:
https://github.com/cubiq/ComfyUI_IPAdapter_plus?tab=readme-ov-file#installation

(Screenshot: Képernyőkép_2025-12-29_17-00-33.png)

Minimum required models (space-saving setup)

If you want to save disk space, you only need one of the two options below.

Option 1: with this setup, the IP Adapter strength will be strong. You need to choose 'PLUS (high strength)'.

You must select it in the 'Second group' as well!

(Screenshot: Képernyőkép_2025-12-29_18-18-18.png)

Option 2: with this setup, the IP Adapter strength will be medium, not maximum. You need to choose 'VIT-G (medium strength)'.

You must select it in the 'Second group' as well!


AnimateLCM (AnimateDiff)

I only tested this workflow with the following model:

https://huggingface.co/wangfuyun/AnimateLCM/tree/main

File:

AnimateLCM_sd15_t2v.ckpt

Place it here:

ComfyUI/models/animatediff_models/

⚠️ I did not test other AnimateDiff / AnimateLCM models — others might work, but this one is confirmed.


ControlNet model - OpenPose

You can download it from:

https://huggingface.co/Lucetepolis/FuzzyHazel/tree/main

File:

controlnet11Models_openpose.safetensors

Place it here:

ComfyUI/models/controlnet/

⚠️ I did not test other models — others might work, but this one is confirmed.


Required background removal model

You also need to download the RMBG model:

  1. Create this folder if it does not exist:

ComfyUI/models/rembg

  2. Download the model file from:
    https://huggingface.co/briaai/RMBG-1.4/tree/main

  3. Rename the file exactly to:

briarmbg.pth

⚠️ Important:

  • Do not use .safetensors or any other file name.

  • The workflow will throw an error if the file is named incorrectly.

This model is required for the automatic background removal steps in the workflow.
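Because the wrong filename causes a workflow error, the rename step can be automated. As a sketch: the assumption here is that the file in the briaai/RMBG-1.4 repo is named model.pth (check what you actually downloaded), and `ensure_briarmbg` is my own helper name.

```python
# Sketch: make sure the RMBG weights end up named exactly briarmbg.pth.
# "model.pth" is an assumed download name; verify it against the file
# you actually got from the repo.
from pathlib import Path

def ensure_briarmbg(rembg_dir: str, downloaded_name: str = "model.pth"):
    """Rename <rembg_dir>/<downloaded_name> to briarmbg.pth if needed.
    Returns the final path, or None if neither file exists."""
    d = Path(rembg_dir)
    target = d / "briarmbg.pth"
    if target.is_file():
        return target  # already correctly named
    src = d / downloaded_name
    if src.is_file():
        src.rename(target)
        return target
    return None  # nothing to rename; download the model first
```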


Automatic model downloads on first run

On the first run, the workflow will automatically download additional required models.

This is expected behavior.

For example:

ComfyUI/models/rembg/isnet-anime.onnx
100%|████████████████████████████████████████| 176M/176M [00:00<00:00, 830GB/s]

If you see downloads like this in the console, nothing is wrong — just let the process finish.

After these models are downloaded once, subsequent runs will be faster.


Model note (important)

The workflow’s default model, “GeekyGhost LCM V2”, is no longer available.

Good news:

  • It works with the V1 version

  • And in theory any LCM SD 1.5 model can be used

I tested several of them.

⚠️ Important:
Non-LCM models produced poor or unusable results, so I strongly recommend sticking to LCM SD 1.5 models.


Final step

After downloading and placing all required files,
👉 restart ComfyUI.
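Before restarting, a quick pre-flight check can save a round trip. This sketch only checks the three files named explicitly in this tutorial (not the IP Adapter models, whose filenames depend on which option you chose); `missing_models` is my own helper name.

```python
# Sketch: verify the model files named in this tutorial are in place
# before restarting ComfyUI. Adjust comfy_root to your installation.
from pathlib import Path

REQUIRED_FILES = [
    "models/animatediff_models/AnimateLCM_sd15_t2v.ckpt",
    "models/controlnet/controlnet11Models_openpose.safetensors",
    "models/rembg/briarmbg.pth",
]

def missing_models(comfy_root: str):
    """Return the required files that are not present under comfy_root."""
    root = Path(comfy_root)
    return [rel for rel in REQUIRED_FILES if not (root / rel).is_file()]
```

If `missing_models("/path/to/ComfyUI")` returns an empty list, you are ready to restart.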


After restart: configuring the workflow

After restarting ComfyUI, you should now be able to configure everything properly.

(Screenshot: Képernyőkép_2025-12-29_17-20-54.png)

In the screenshots:

  • 🔴 Red circles show where models must be selected (check them)

  • 🟧 This must be checked in both groups:

    • the peach-colored “First pass”

    • the gray “Second pass”

  • In the picture center: Load Checkpoint - LCM models work best

  • Below it: IPAdapter preset (VIT-G or PLUS)

  • A little up and to the left: Remove Background model - briarmbg!

  • Top-right: AnimateDiff model - AnimateLCM_sd15_t2v, or you can try other LCM models

  • Center-right: ControlNet model


Source image

  • 🟢 Green circles mark the starting image

  • This is the image from which the workflow generates the sprite

You must provide this image in both groups as well.

💡 Tip:
After a few seconds, you can see the result of the background removal in the rmbg Preview.
If the background removal is very poor:

  1. Fix it manually

  2. Use the cleaned image as the new source image

This greatly improves the final result.

(Screenshot: Képernyőkép_2025-12-29_20-16-31.png)

Generating directions (Left, Right, Back, etc.)

Each movement direction must be generated separately.

Because of this, I strongly recommend:

  • When you get a result you like, set the Seed to fixed

  • Write the seed value down

  • Use the same seed to generate the other directions

This helps keep the character consistent.


Folder selection per direction

This is how it works:

In the First pass group

  • In the blue-circled node (Load Images / VHS), select the unzipped folder:

characterWalkingAndRunning_betterCrops/
full sized base images/
woman walking bones/
L

using “choose folder to upload”.

👉 This example is for L = Left.

In the Second pass group

Select:

characterWalkingAndRunning_betterCrops/
512 by 512 crops/
woman walking bones/
L

Again, L for Left.

When generating another direction:

  • Change both folders accordingly
    (for example from L to R)
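The pairing above can be sketched in code. The pack and animation folder names are the ones from this tutorial's example, and `direction_folders` is my own helper name; adjust both to your unzipped layout.

```python
# Sketch: the two folders that must match for a given direction.
# Folder names are assumptions taken from this tutorial's example.
from pathlib import Path

PACK = "characterWalkingAndRunning_betterCrops"
ANIMATION = "woman walking bones"

def direction_folders(direction: str, pack_root: str = PACK):
    """Return (first_pass_folder, second_pass_folder) for a direction
    letter such as 'L', 'R' or 'B'."""
    first = Path(pack_root) / "full sized base images" / ANIMATION / direction
    second = Path(pack_root) / "512 by 512 crops" / ANIMATION / direction
    return first, second
```

For example, `direction_folders("R")` gives the two folders to select when switching from Left to Right; the point is that both passes must always point at the same direction letter.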


Running the workflow

If everything is set correctly:

  • Press Run

  • The workflow will generate:

    • a video (GIF)

    • and then a sprite sheet

📝 Note:

  • In the First pass group, the VideoCombine node is completely unnecessary

  • You can safely bypass or delete it


Saving the sprite sheet

In the bottom-left corner of the workflow, you will find: "Save image to local"

(Screenshot: Képernyőkép_2025-12-29_20-01-14.png)

Here you must specify the folder where the sprite sheet should be saved.

Examples:

  • Windows:
    C:\your\folder\

  • Linux:
    /your/folder

⚠️ If you do not set this:

  • The images will end up in the temp folder

  • Everything else is saved to the default output directory


Final notes

I hope I didn’t miss anything.
If you get stuck, feel free to reach out.

I’m currently searching for better OpenPose collections; if you know any good ones, I’d be happy to hear about them!

Good luck, and happy sprite making 🚀
