
Runpod WAN 2.1 Img2Video Template - ComfyUI

Updated: Mar 22, 2025
Type: Other
Published: Mar 3, 2025
Base Model: Other
Hash: AutoV2 5AEF4BEDF5
Creator: DIhan

πŸŽ₯ WanVideo ComfyUI RunPod Setup Guide

This comprehensive guide will walk you through setting up and using the WanVideo ComfyUI environment on RunPod for AI video generation. Wan needs a lot of VRAM to produce outputs at a reasonable speed. 48GB for $0.44 an hour is a pretty good deal IMO.

Nothing to download from here. This is a RunPod template with all the models and workflows included.

WAN 2.1 Video - ComfyUI Full - T1.0 - Running on CUDA 12.5
https://runpod.io/console/deploy?template=ipbtjo67kk&ref=0eayrc3z

## UPDATE
20/03/25
Noticed a PyTorch bug when using Community Cloud GPUs.
EDIT: It only affects the 5090 cards with the Blackwell architecture.

19/03/25
Fix for `Error: 'NoneType' object is not callable`
I added a depth-0 clone of ComfyUI and packaged the nodes into the container to reduce its size, but these changes introduced several bugs. I've since removed them, and everything should now work much better.

17/03/25
- Added environment variables to control setup behavior:
  - `SKIP_DOWNLOADS=true`: Skip downloading models
  - `SKIP_NODES=true`: Skip verification of custom nodes (nodes are packaged into the container for a faster build)
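
The flags can be set as environment variables in the template's settings before deployment. As a rough illustration, a start script typically honors such flags like this; the function name and messages below are assumptions for the sketch, not the template's actual code:

```shell
# Illustrative sketch only: how a start script might honor SKIP_DOWNLOADS / SKIP_NODES.
# Function name and messages are assumptions, not the template's actual code.
setup_step() {
  # $1 = flag value ("true" skips the step), $2 = step name
  if [ "$1" = "true" ]; then
    echo "skipping $2"
  else
    echo "running $2"
  fi
}

# Flags default to "false" when unset, so everything runs on a fresh pod.
setup_step "${SKIP_DOWNLOADS:-false}" "model downloads"
setup_step "${SKIP_NODES:-false}" "custom node verification"
```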

πŸš€ Getting Started

Step 1: Deploying Your Pod

  1. Sign up/login to RunPod

  2. Navigate to "Deploy" β†’ "Template"

  3. Search for "WAN 2.1 Video - ComfyUI Full - T1.0" template

  4. Select the hardware:

    • Recommended GPU: NVIDIA A40 (minimum 48GB VRAM)

    • Storage: 60GB minimum (100GB recommended)

    • Filter GPUs with CUDA 12.4 or higher

  5. Click "Deploy" to launch your pod

Step 2: Initial Setup

Once deployed, your pod will automatically:

  • Download all required models (takes ~10 minutes)

  • Install custom nodes

  • Set up the environment

You'll see the following message when setup is complete:

⭐⭐⭐⭐⭐   ALL DONE - STARTING COMFYUI ⭐⭐⭐⭐⭐



Step 3: Accessing Your Environment

From your pod's detail page, access:

  1. ComfyUI Interface:

    • Port 8188 (primary interface for creating videos)

    • Wait for this to turn green after setup completes

  2. JupyterLab:

    • Port 8888 (available immediately, even during setup)

    • Use for file management, terminal access, and notebook interactions

  3. Image Browser:

    • Port 8181 (for managing your output files)

    • View and organize generated videos and images
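
If you prefer to script the wait instead of watching the port indicators, a small curl loop over the three ports works. The localhost hostname, root path, and timeout below are assumptions for illustration:

```shell
# Quick reachability check for the three services (ports from the list above).
# Hostname, path, and timeout are assumptions; adjust for your pod's proxy URL.
for port in 8188 8888 8181; do
  if curl -sf --max-time 2 "http://localhost:${port}/" >/dev/null 2>&1; then
    echo "port ${port}: up"
  else
    echo "port ${port}: not reachable yet"
  fi
done
```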


πŸ’Ύ Managing Models

Downloading Additional Models

By default, the template only downloads the models needed for the Wan_Image2Video_720pAFIX.json workflow. If you want to use the other workflows, run ./download-files.sh from the terminal and it will download all the models for Kijai's workflows.

Use the flexible model download system:

  1. Edit the configuration file:

    files.txt
    

  2. Add model entries using this format:

    type|folder|filename|url
    

    Examples:

    normal|checkpoints|realistic_model.safetensors|https://huggingface.co/org/model/resolve/main/model.safetensors
    
    gdrive|loras|animation_style.safetensors|https://drive.google.com/uc?id=your_file_id
    
  3. Run the download script:

    ./download-files.sh
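
Since each entry is just four pipe-separated fields, the parsing is straightforward. Below is a sketch of the idea, not the actual download-files.sh; the wget/gdown commands are assumptions about how each type might be handled:

```shell
# Sketch of parsing a files.txt entry (type|folder|filename|url).
# The real download-files.sh may differ; wget/gdown choices are assumptions.
parse_line() {
  IFS='|' read -r type folder filename url <<EOF
$1
EOF
  case "$type" in
    normal) echo "wget -O models/${folder}/${filename} ${url}" ;;
    gdrive) echo "gdown -O models/${folder}/${filename} ${url}" ;;
    *)      echo "unknown type: ${type}" ;;
  esac
}

cmd=$(parse_line "normal|checkpoints|realistic_model.safetensors|https://example.com/model.safetensors")
echo "$cmd"
```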
    

WAN 2.1 Models

https://huggingface.co/Kijai/WanVideo_comfy/tree/main

https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/tree/main/split_files

Pre-installed Models

The template comes with these key models:

  • Wan 2.1 Models:

    • Wan2_1-I2V-14B-720P_fp8_e4m3fn.safetensors (Base video model)

    • wan_2.1_vae.safetensors (VAE)

  • Text Encoders:

    • umt5_xxl_fp16.safetensors (Advanced text encoder)

  • CLIP Vision:

    • clip_vision_h.safetensors (Enhanced vision model)

🎨 Using ComfyUI for Video Generation

Step 1: Load a Workflow

  1. Access ComfyUI interface (port 8188)

  2. Click on the folder icon in the top menu

  3. Navigate to the default workflows folder

  4. Select one of the pre-configured workflows

Step 2: Customize Your Generation

  1. Modify text prompts to describe your desired video

  2. Adjust settings:

    • CFG Scale: 7-9 recommended for quality (higher = more prompt adherence)

    • Steps: 25+ for better quality (more steps = more refinement)

    • Resolution: Start with 512x512 for tests, increase for final outputs

    • Frame count: Determines video length

Step 3: Generate and View Results

  1. Click "Queue Prompt" to start generation

  2. Monitor progress in the ComfyUI interface

  3. When complete, view your video in the output panel

  4. Access all outputs via the Image Browser (port 8181)

πŸ“Š Managing Your Files

Using JupyterLab

  1. Access JupyterLab (port 8888)

  2. The workspace folder contains:

    • /ComfyUI - Main application and models

    • Files for downloading additional models

    • Notebook for image/video browsing

Using Image Browser

  1. Access the browser interface (port 8181)

  2. Browse your generated content by:

    • Creation date

    • Filename

    • Metadata

  3. Right-click on items for additional options (download, delete, etc.)

πŸ”§ Advanced Features

SSH Access

To enable SSH:

  1. Set your PUBLIC_KEY in the template settings before deployment

  2. Connect using the command shown in your pod's connect options
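
The connect command RunPod shows generally takes the shape below; the IP, port, and key path here are placeholders, not values from this template:

```shell
# Example shape of the SSH connect command (all values are placeholders;
# use the actual command from your pod's connect options).
POD_IP="203.0.113.10"
SSH_PORT="12345"
SSH_CMD="ssh root@${POD_IP} -p ${SSH_PORT} -i ~/.ssh/id_ed25519"
echo "$SSH_CMD"
```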

Custom Nodes

The template includes these pre-installed node collections:

  • Workflow utilities (cg-use-everywhere, ComfyUI-Manager)

  • UI enhancements (rgthree-comfy, was-node-suite-comfyui)

  • Video-specific nodes (ComfyUI-WanVideoWrapper, ComfyUI-VideoHelperSuite)

  • Performance optimizers (ComfyUI-Impact-Pack)

πŸ› οΈ Troubleshooting

If you encounter issues:

  1. ComfyUI not starting:

    • Check JupyterLab terminal for logs

    • Ensure models downloaded correctly

  2. Models not loading:

    • Verify files exist in /ComfyUI/models/ directories

    • Check file sizes to ensure complete downloads

  3. Custom node problems:

    • Try reinstalling via ComfyUI Manager

    • Restart your pod if necessary
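
The "verify files exist / check sizes" steps above can be scripted from the JupyterLab terminal. The subfolder paths below are assumptions based on the pre-installed model list and the standard ComfyUI models layout:

```shell
# Sketch: confirm model files exist and report their sizes.
# Paths are assumptions based on the standard ComfyUI models/ layout.
check_model() {
  if [ -f "$1" ]; then
    size=$(wc -c < "$1")
    echo "OK      $1 (${size} bytes)"
  else
    echo "MISSING $1"
  fi
}

check_model /ComfyUI/models/vae/wan_2.1_vae.safetensors
check_model /ComfyUI/models/clip_vision/clip_vision_h.safetensors
```

A suspiciously small size (a few KB for a multi-GB model) usually means an interrupted download; re-run ./download-files.sh to fetch it again.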

🎯 Tips for Best Results

  • Use detailed prompts with specific descriptions

  • Increase CFG and step count for higher quality videos

  • Save your successful workflows for future use

  • Monitor VRAM usage and adjust resolution accordingly

  • Use the Image Browser to organize and review your outputs

Need more help? Check the readme.md file in JupyterLab or reach out to the RunPod community!