Wan2.1 SkyReelsV2 VACE GGUF with MoviiGen AccVid CausVid LoRAs (14B 720P 24FPS I2V, Extend & Loop)

This ComfyUI workflow supports:

  • Image-to-Video (I2V) and Loopable Video Extension (V2V) generation using SkyReels-V2-VACE-GGUF

  • Using AccVid and CausVid LoRAs with two samplers for faster generation

  • Using MoviiGen and Rewards LoRAs for higher output quality

You can:

  • Generate the first video as your starting point

  • Extend the video one segment at a time to gradually build out the full sequence

  • Cherry-pick the best segments for your final cut

  • Refine prompts step-by-step as the scene or motion evolves


[2025-06-06] v2.0.0 Major Updates: SkyReels V2 (720p, 24fps) and Loopable Video

🔧 Components

🌀 SkyReels-V2-VACE-GGUF (by Finanzamt_Endgegner)

  • Based on Wan2.1, fine-tuned on 720p@24fps video

  • Integrated VACE (All-in-One Video Creation and Editing framework) allows motion control using reference videos (like ControlNet for video)

  • Native support in ComfyUI via GGUF format

  • Temporal consistency across the full sequence


LoRA Models

  • Speed: AccVid and CausVid

  • Quality: MoviiGen and Rewards


▶️ How to Use


🖼️ To Generate Video from an Image as First Frame

  1. Enable "First Frame" from the muter node

  2. Upload your input image

  3. Set generation parameters:

    • Prompts (positive/negative)

    • Shift

    • Steps

    • Seed

    • Width / Height

    • Length (frame count)

    • Sampler

    • Scheduler

  4. Click Run
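
If you prefer to queue runs from a script instead of clicking Run, ComfyUI also exposes an HTTP API. Below is a minimal sketch, assuming a local instance on the default port; the node IDs, input names, and filename are placeholders you would look up in your own workflow exported via "Save (API Format)":

```python
import json
import urllib.request

# Load the workflow exported from ComfyUI via "Save (API Format)".
with open("skyreels_v2_vace_i2v.json") as f:  # placeholder filename
    workflow = json.load(f)

# Node IDs and input names are placeholders; check your exported JSON.
workflow["6"]["inputs"]["text"] = "a cat walking through tall grass"  # positive prompt
workflow["7"]["inputs"]["text"] = "blurry, low quality"               # negative prompt
workflow["3"]["inputs"]["seed"] = 42
workflow["3"]["inputs"]["steps"] = 8

# Queue the job on a locally running ComfyUI instance.
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode())
```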


🎥 To Extend or Loop an Existing Video

  1. Enable "Video Extension" or "Video Loop" option

  2. Upload or select your input video via the Load Image node, as an animated WebP for best quality (a conversion sketch follows these steps)

  3. Set extension parameters:

    • Overlap Frame Count

    • Extension Frame Count

    • Prompts (positive/negative)

    • Shift

    • Steps

    • Seed

    • Sampler

    • Scheduler

  4. Click Run
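
If your previous segment was saved as MP4, here is a minimal conversion sketch to animated WebP, assuming ffmpeg is installed and on your PATH; the filenames are placeholders:

```python
import subprocess

# Re-encode a clip as an animated WebP and drop it straight into
# ComfyUI's input folder so the Load Image node can pick it up.
subprocess.run([
    "ffmpeg", "-i", "last_segment.mp4",  # placeholder input clip
    "-c:v", "libwebp",                   # WebP encoder
    "-q:v", "80",                        # quality, 0-100
    "-loop", "0",                        # loop forever
    "-an",                               # strip audio
    "ComfyUI/input/last_segment.webp",   # adjust to your install path
], check=True)
```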

Number of Frames for Continuation (Adjust as Needed)

  • Overlap Frames: Number of frames carried over from the original animation.

    • Higher values increase temporal consistency and preserve the flow from the previous segment.

    • Lower values may result in more abrupt transitions or sudden changes in motion, tempo, or direction.

  • Extension Frames: Number of new frames to generate beyond the current animation.
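
To make the two counts concrete, here is a small sketch of the frame arithmetic, assuming the extension pass samples the overlap plus the new frames and the redrawn overlap region replaces the tail of the existing clip:

```python
def extension_window(overlap: int, extension: int) -> int:
    # Frames the sampler works on in one extension pass:
    # the carried-over tail plus the newly generated frames.
    return overlap + extension

def final_length(current: int, extension: int) -> int:
    # Only the extension frames add new footage; the overlap frames
    # are redrawn in place, not appended a second time.
    return current + extension

# Hypothetical example: an 81-frame clip, 16-frame overlap, 48 new frames.
print(extension_window(16, 48))  # 64 frames sampled this pass
print(final_length(81, 48))      # 129 frames in the stitched result
```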


⚠️ Challenges and Limitations

The base model is a T2V model, not a true I2V model.

I2V is achieved by feeding a reference image into the VACE node, rather than by conditioning directly on the image.

A true I2V model typically keeps the input image as the exact first frame; here, VACE treats the image as loose guidance, not strict visual preservation.

Examples:

  • If your source image lacks an object, but your prompt includes it, that object might be added to the first frame.

  • If the prompt contradicts the image, some original elements may be missing.

  • Fine details may degrade over time, especially in extended video generations.



FAQ (Frequently Asked Questions)

❓ Can I run this with 16GB VRAM?

Yes. I ran it on an RTX 5060 Ti with 16GB VRAM using the Q6_K GGUF model.

With GGUF models, you can choose a version that fits your GPU memory:

  • Q3_X_X (3-bit) for ~8GB VRAM

  • Q4_X_X (4-bit) for ~12GB

  • Q5–Q6 for ~16GB

  • Q8 for ~24GB+

👉 Model & hardware info: https://huggingface.co/QuantStack/MoviiGen1.1-VACE-GGUF

❓ Why do I get errors or bad video clips?

This workflow is still experimental, so crashes or poor results are common. Here are some tips:

  • OOM (out of memory) error = your GPU doesn’t have enough VRAM

  • Use a lower quant model (e.g. Q3 or Q4) to reduce memory usage

  • Lower the video resolution or clip length to avoid overload

  • If transitions look bad, try adjusting the prompt or other settings

  • Generate multiple times, then pick the best clips to stitch together

❓ Why does it give an error with certain resolutions?

The "WanVaceToVideo" node only accepts resolutions where both width and height are divisible by 16. If your input resolution doesn’t meet this requirement, you’ll likely run into errors or processing failures.

Below are safe resolutions for commonly used aspect ratios, based on standard output heights (320, 368, 480, 544, 640, 720):

Recommended Aspect Ratios & Resolutions (All values divisible by 16)

  • 🖥 32:9 -> 1136×320

  • 📽 21:9 -> 752×320, 864×368, 1120×480, 1264×544

  • 🖼 2:1 -> 640×320, 736×368, 960×480, 1088×544, 1280×640

  • 📺 16:9 -> 576×320, 656×368, 848×480, 960×544, 1136×640, 1280×720

  • 🖥 16:10 -> 512×320, 592×368, 768×480, 864×544, 1024×640, 1152×720

  • 📷 3:2 -> 480×320, 560×368, 720×480, 816×544, 960×640, 1088×720

  • 🖼 4:3 -> 432×320, 496×368, 640×480, 720×544, 848×640, 960×720

  • 🖼 5:4 -> 400×320, 464×368, 608×480, 688×544, 800×640, 896×720
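
If your target resolution isn't on the list, a small helper can snap it to the nearest multiple of 16. A sketch; note it rounds each axis independently, so the aspect ratio may shift slightly:

```python
def snap_resolution(width: int, height: int, multiple: int = 16) -> tuple[int, int]:
    """Round each dimension to the nearest multiple of 16,
    as the WanVaceToVideo node requires."""
    snap = lambda v: max(multiple, round(v / multiple) * multiple)
    return snap(width), snap(height)

# Example: a 16:9-ish target of 850x478 snaps to the safe 848x480.
print(snap_resolution(850, 478))  # -> (848, 480)
```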

❓ What should I do if I get a “Request Entity Too Large” error when uploading an image?

This error typically occurs when the file size exceeds the upload limit. To work around it:

  1. Place the WebP file directly into the ComfyUI\input folder.

  2. In ComfyUI, press Reload (R) to refresh the file list.

  3. Use the Load Image node to select the file instead of using the “Choose file to upload” option.
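
For step 1, a tiny sketch, assuming a Windows install; adjust the path to your own ComfyUI folder:

```python
import shutil
from pathlib import Path

comfy_input = Path(r"C:\ComfyUI\input")  # placeholder install path
src = Path("my_clip.webp")               # the file that was too large to upload

shutil.copy2(src, comfy_input / src.name)
print(f"Copied to {comfy_input / src.name}; press R in ComfyUI to reload.")
```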