Unlimited Length I2V (FramePack + LoRA support!)

Updated: Apr 24, 2025
Tags: tool, img2video, i2v, framepack
Type: Workflows
Published: Apr 24, 2025
Base Model: Hunyuan Video
Hash (AutoV2): 56472652C8
Creator: Ez4M
Tencent Hunyuan is licensed under the Tencent Hunyuan Community License Agreement, Copyright © 2024 Tencent. All Rights Reserved. The trademark rights of “Tencent Hunyuan” are owned by Tencent or its affiliate.
Powered by Tencent Hunyuan

Unlimited Length I2V — ComfyUI Workflow

Welcome to the Unlimited Length I2V workflow for ComfyUI. It pushes the boundaries of video generation by leveraging the FramePack system to produce videos of virtually unlimited length (i.e. the frame count is no longer limited to the 96 frames of previous Hunyuan or even Wan implementations).
Just a few weeks ago, this kind of output would have been impossible — now, it's a matter of a few nodes.

⚠️ This is a first working draft. Expect massive improvements soon (see below).


🚀 What It Does

This workflow uses FramePack to perform image-to-video (I2V) generation with long, coherent sequences. By combining the original FramePack I2V architecture with the modular flexibility of ComfyUI and native model support, this setup opens up new creative possibilities for animating images far beyond the usual frame-count limitations.
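
To make the idea concrete, here is a minimal conceptual sketch in plain Python, with hypothetical stand-in helpers (this is not the wrapper node's actual API): the video is built segment by segment, and each segment only conditions on a fixed-size packed summary of the frames generated so far, which is why memory per step stays constant no matter how long the video gets.

```python
# Conceptual sketch only: hypothetical stand-ins for the real diffusion model and
# FramePack context compression, illustrating why per-step memory stays constant.
import numpy as np

def pack_history(frames: list, budget: int = 16) -> list:
    # stand-in for FramePack's context packing: keep a bounded summary of past frames
    return frames[-budget:]

def generate_segment(start_image: np.ndarray, history: list, length: int) -> list:
    # stand-in for the diffusion model; the real one conditions on start_image + history
    return [start_image.copy() for _ in range(length)]

def generate_video(start_image: np.ndarray, total_frames: int, segment_len: int = 33) -> list:
    frames: list = []
    while len(frames) < total_frames:
        frames.extend(generate_segment(start_image, pack_history(frames), segment_len))
    return frames[:total_frames]  # per-segment cost is fixed, so length is effectively unlimited

video = generate_video(np.zeros((544, 704, 3), dtype=np.uint8), total_frames=150)
```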

It currently features:

  • experimental LoRA support!

  • automatic resizing of the input image to the nearest supported format

  • end frame support

  • any input resolution accepted (it will be rounded to the nearest valid one; see the sketch after this list)

  • LLM-based description of the input image

  • TeaCache support
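
Since automatic resolution rounding comes up twice in this list, here is a small illustrative sketch of the idea, assuming valid sizes are simply multiples of 16 around a fixed pixel budget; the real bucket list lives inside the FramePack wrapper node and may differ.

```python
# Illustrative only: round an arbitrary input resolution to a nearby "valid" size,
# assuming valid sizes are multiples of 16 with a roughly constant pixel budget.
def round_resolution(width: int, height: int, target_pixels: int = 704 * 544, multiple: int = 16):
    scale = (target_pixels / (width * height)) ** 0.5   # preserve aspect ratio, match pixel budget
    w = max(multiple, round(width * scale / multiple) * multiple)
    h = max(multiple, round(height * scale / multiple) * multiple)
    return w, h

print(round_resolution(1920, 1080))  # -> (832, 464) under these assumptions
```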

I also tried to explain each setting with a note directly on the workflow, so there is no need to keep this page open while using it!


🔧 Dependencies

To run this workflow, you need the following:

  • An NVIDIA GPU from the RTX 30XX, 40XX, or 50XX series with fp16 and bf16 support. GTX 10XX/20XX cards are untested.

  • At least 6 GB of VRAM (yes, only 6 GB: it can even run on a laptop!)
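
If you are unsure whether your card qualifies, a quick check with PyTorch (which ComfyUI already requires) looks roughly like this:

```python
# Quick sanity check of the requirements above: VRAM amount and bf16 support.
import torch

props = torch.cuda.get_device_properties(0)
print(f"{props.name}: {props.total_memory / 1024**3:.1f} GB VRAM")
print("bf16 supported:", torch.cuda.is_bf16_supported())
```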

🧩 Required ComfyUI Custom Node

  • kijai/ComfyUI-FramePackWrapper (switch it to the dev branch; see the update info below)

📦 Model & Resource Downloads

1. Native models (text encoders, VAE, sigclip):

2. Transformer (FramePack) model:


☕ Optional Feature: TeaCache

TeaCache is a smart caching system for diffusion models that stores intermediate computation states. This drastically speeds up generation times, especially during iterative tweaking or when generating multiple video segments with similar inputs.

The workflow includes a switch to enable or disable TeaCache, depending on your memory availability and whether you're prioritizing speed or full fresh runs.

TeaCache boost: up to 2x speed improvement on repeat runs
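
For intuition, here is a minimal sketch of the caching idea, using a simple relative-change threshold on the input; the actual TeaCache implementation used by the wrapper node is more sophisticated, so treat this purely as an illustration.

```python
# Toy illustration of residual caching: skip recomputing a block when its input
# has barely changed, and reuse the previously computed residual instead.
import numpy as np

class NaiveStepCache:
    def __init__(self, rel_threshold: float = 0.1):
        self.rel_threshold = rel_threshold
        self.prev_input = None
        self.prev_residual = None

    def step(self, x: np.ndarray, compute_residual):
        if self.prev_input is not None and self.prev_residual is not None:
            change = np.abs(x - self.prev_input).mean() / (np.abs(self.prev_input).mean() + 1e-8)
            if change < self.rel_threshold:
                return x + self.prev_residual          # cache hit: skip the expensive call
        residual = compute_residual(x)                 # cache miss: run the real computation
        self.prev_input, self.prev_residual = x.copy(), residual
        return x + residual
```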


Update info

If you are coming from v0.1 or v0.2 of my workflow, you need to switch kijai/ComfyUI-FramePackWrapper to the dev branch.
Go to ComfyUI/custom_nodes/ComfyUI-FramePackWrapper

and, in PowerShell or Bash, type:

  • git switch dev

  • git pull

You will of course need git for this.

LoRA support is highly experimental at this point. Only HunyuanVideo LoRAs can be used, and their effect is quite... random. The current explanation is that these LoRAs were trained on very short videos (due to the original frame limits), which affects high-frame-count videos like the ones generated with FramePack. I'll try to improve this in the future (it's not a limitation of the workflow itself, but of the original FramePack implementation).


⚡ Benchmark Results

Tested on my "old" RTX 3090:

  • Resolution: 704x544

  • Length: 150 frames

  • Generation time: 11 minutes

Another test:

  • 384x448, 600 frames generated in 15 minutes.

The original project claims that a desktop RTX 4090 generates at about 2.5 seconds/frame (unoptimized) or 1.5 seconds/frame (with TeaCache).
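
As a quick sanity check on my two runs above, the per-frame cost works out to roughly 4.4 s/frame and 1.5 s/frame:

```python
# Simple arithmetic on the benchmark figures reported above (RTX 3090).
runs = {"704x544, 150 frames": (11 * 60, 150), "384x448, 600 frames": (15 * 60, 600)}
for name, (seconds, frames) in runs.items():
    print(f"{name}: {seconds / frames:.1f} s/frame")
# -> 4.4 s/frame and 1.5 s/frame respectively
```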


🧪 Current Status

This release is a second draft. It is mostly working and "straight to the point".
This is also my VERY FIRST WORKFLOW CONTRIBUTION on Civitai! Please be gentle in your comments.

Next steps are:

  • Upscaling (coming very soon)

  • Other ways to improve quality


📎 Original Project Attribution

FramePack was originally developed by lllyasviel. This workflow wraps it in ComfyUI thanks to Kijai's work, with additional optimizations and user-friendly features.


🧠 Credits

  • @lllyasviel for the original FramePack architecture

  • @Kijai for the ComfyUI node wrapper

  • Comfy-Org for the models and pipeline integration

  • Everyone in the ComfyUI community for testing and feedback

The default settings are based on my RTX 3090 (24 GB of VRAM). If you have less and run into memory issues, first switch the FramePack model to the fp8 variant; if that's not enough, try lowering the VAE batch parameters.

Please post all videos made with my workflow here; I really want to see what you are doing with it!