
Wan2.1 (GGUF) ComfyUI Workflow for Only 4GB VRAM


Type: Workflows
Published: Mar 5, 2025
Base Model: Wan Video
Hash (AutoV2): A6FBFC8050

Video Generation on a Laptop

Hello!
This workflow uses a few custom nodes from Kijai and other sources to ensure smooth performance on an RTX 3050 Laptop GPU with just 4GB of VRAM. It's optimized to improve generation length, visual quality, and overall functionality.

🧠 Workflow Info

This is a set of ComfyUI workflows capable of running:

2.0-ALL -- Includes all workflows:

  • Wan2.1 T2V

  • Wan2.1 I2V

  • Wan2.1 VACE

  • Wan2.1 First Frame Last Frame

  • Wan2.1 Fun Control (experimental)

  • Wan2.1 Fun Camera (experimental)

Coming soon: updates to the experimental inpainting workflows

🚀 Results (Performance)

*to be updated

🎥 Video Explainer (VACE edition):

🎥 Installation Guide (V1.8):

📦 DOWNLOAD SECTION


Note: rgthree is only needed for the Stack Lora Loader node


📦 Model Downloads

*These are GGUF conversions of the original models, quantized to run on less VRAM.

All these GGUF conversions are done by:

https://huggingface.co/city96

https://huggingface.co/calcuis

https://huggingface.co/QuantStack

*If you can't find the model you are looking for, check out their profiles!


🧩 Additional Required Files (Do not download from Model Downloads)


📥 What to Download & How to Use It

✅ Quantization Tips:

  • Q_5 – 🔥 Best balance of speed and quality

  • Q_3_K_M – Fast and fairly accurate

  • Q_2_K – Usable, but with some quality loss

  • 1.3B models – ⚡ Super fast, lower detail (good for testing)

  • 14B models – 🎯 High quality, slower and VRAM-heavy

  • Reminder: Lower "Q" = faster and less VRAM, but lower quality
    Higher "Q" = better quality, but more VRAM and slower speed
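As a rough sketch, the rule of thumb above could be encoded in a small helper. The function name and the VRAM thresholds below are illustrative assumptions, not benchmarked values; treat them as a starting point for your own GPU.

```python
def suggest_quant(vram_gb: float) -> str:
    """Map a VRAM budget to a GGUF quantization level, following the
    guidance above: lower Q = faster and less VRAM, but lower quality.
    Thresholds are illustrative guesses, not measured benchmarks."""
    if vram_gb >= 6:
        return "Q_5"      # best balance of speed and quality
    if vram_gb >= 4:
        return "Q_3_K_M"  # fast and fairly accurate
    return "Q_2_K"        # usable, but with some quality loss

print(suggest_quant(4))  # a 4GB laptop GPU -> Q_3_K_M
```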


🧩 Model Types & What They Do

  • Wan Video – Generates video from a text prompt (Text-to-Video)

  • Wan2.1 I2V – Generates video from a single image (Image-to-Video)

  • Wan VACE – All-in-one video creation and editing (reference images, control inputs, video-to-video edits)

  • Wan2.1 Fun Control – Adds control inputs like depth, pose, or edges for guided video generation

  • Wan2.1 Fun Camera – Simulates camera movements (zoom, pan, etc.) for dynamic video from static input

  • Wan2.1 Fun InP – Allows video inpainting (fix or edit specific regions in video frames)

  • First–Last Frame – Generates a video by interpolating between a start and end image


📂 File Placement Guide

  • All WAN model .gguf files →
    Place them in your ComfyUI/models/diffusion_models/ folder

  • ⚠️ Always check the model's download page for instructions;
    converted models often list the exact folder structure or dependencies
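For reference, the placement step can be sketched in a few lines of Python. The .gguf file name here is a placeholder, not an actual release name; substitute the quant you actually downloaded and your real ComfyUI path.

```python
import shutil
from pathlib import Path

# Placeholder paths -- substitute your actual ComfyUI install location.
model_dir = Path("ComfyUI") / "models" / "diffusion_models"
model_dir.mkdir(parents=True, exist_ok=True)  # create the folder if missing

downloaded = Path("wan2.1-t2v-1.3b-Q5.gguf")  # hypothetical download name
if downloaded.exists():
    # Move the GGUF into the folder ComfyUI scans for diffusion models
    shutil.move(str(downloaded), str(model_dir / downloaded.name))
```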

🔗 Helpful Sources:

Installing Triton: https://www.patreon.com/posts/easy-guide-sage-124253103

Common Errors: https://civitai.com/articles/17240

Reddit Threads:

https://www.reddit.com/r/StableDiffusion/comments/1j1r791/wan_21_comfyui_prompting_tips

https://www.reddit.com/r/StableDiffusion/comments/1j2q0xw/dont_overlook_the_values_of_shift_and_cfg_on_wan

https://www.reddit.com/r/comfyui/comments/1j1ieqd/going_to_do_a_detailed_wan_guide_post_including

🚀 Performance Tips

To improve speed further, use:

  • ✅ xFormers

  • ✅ Sage Attention

  • ✅ Triton

  • ✅ Adjust internal settings for optimization


If you have any questions or need help, feel free to reach out!
Hope this helps you generate realistic AI video with just a laptop 🙌