Type | Workflows |
Published | Apr 24, 2025 |
Hash | AutoV2 2D9AAD7493 |
Optimisation
To use torch.compile and SageAttention, run these commands:
pip install triton-windows
pip install sageattention
You need to have the MSVC build tools and CUDA 12.6+ installed and set in PATH.
For the portable install, also grab the include_libs.zip appropriate for your Python version from https://github.com/woct0rdho/triton-windows/releases/tag/v3.0.0-windows.post1 and unzip it into the python_embeded folder.
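To confirm the install worked before loading the workflow, you can run a quick sanity check like the sketch below (my own snippet, not part of the workflow) - it only imports the packages and compiles a trivial function, so a broken Triton/MSVC setup shows up immediately:
import torch
import triton
import sageattention  # just checking the package imports cleanly

print("torch:", torch.__version__)
print("triton:", triton.__version__)
print("CUDA available:", torch.cuda.is_available())

# torch.compile smoke test - if MSVC and CUDA are set in PATH correctly,
# this compiles and runs without errors
@torch.compile
def f(x):
    return torch.sin(x) + torch.cos(x)

print(f(torch.randn(8, device="cuda")))
For portable, run it with python_embeded\python.exe so it checks the same environment ComfyUI uses.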
To gain a free and easy speed boost, update ComfyUI to 0.3.30 and PyTorch to 2.7.0, if you haven't already, using this command:
pip install -U torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu128
then add the '--fast fp16_accumulation' parameter in your run.bat file.
Fresh Comfy installs already ship with this torch version, but you still need to add the parameter manually; the portable build comes with run_nvidia_gpu_fast_fp16_accumulation.bat.
With both of these plus TeaCache at moderate settings, generation time is reduced by 65-70%; without TeaCache, by 50-55%.
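For reference, the sketch below is my guess at what '--fast fp16_accumulation' boils down to - a single PyTorch 2.7+ backend switch (the attribute name is an assumption based on reading the ComfyUI source, so treat it as illustrative); running it tells you whether your torch build supports the option:
import torch

print("torch:", torch.__version__)  # needs 2.7.0+ for fp16 accumulation

# guarded with hasattr so this is harmless on older torch builds
if hasattr(torch.backends.cuda.matmul, "allow_fp16_accumulation"):
    torch.backends.cuda.matmul.allow_fp16_accumulation = True
    print("fp16 accumulation supported - the '--fast fp16_accumulation' flag will work")
else:
    print("not supported on this torch build - update with the pip command above")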
First launch
If you installed all dependencies and still get a red sea of group nodes, first try closing the workflow, closing the tab, restarting Comfy, and opening the workflow again - the nodes should then be detected. If the issue persists, there is some conflict between node pack versions; do this:
You have to make and roll back a change to apply the preconfigured values - this is especially critical for the HY nodes, where the default cfg=8 will break them. Do this with every node you plan to use.
If you're wondering why this can't be done in a less frustrating way, remember - Comfy is a lie.
A collection of workflows turned into group nodes to be used in a single space - to avoid constant jumping between workflows, saving/uploading videos and swapping values.
A workplace, rather than a workflow.
Generate and extend videos with different lengths and loras indefinitely, create loops, upscale.
The workflow seems massive and complex until you realize you only have to use a few nodes.
Use it, prune it, break it into pieces, or just learn its concepts.
What's in:
basic functions - T2V, I2V, start-to-end frame interpolation (F2's, FunInp, FLF2V) - all with separate V2V option
Control Loras for 1.3b - tile and depth
FunControl - to restyle a video based on an input image and its ControlNet
Wrapper IV2V node - makes it possible to upscale to resolutions unavailable with native nodes; can be used as V2V after converting to nodes and removing the Start Frame related nodes
Skyreels V2 DF - seamless extension, VRAM-hungry, even though it's a wrapper
LTX infinite keyframe interpolation - a neat feature that Wan lacks
HunYuan integration - T2V and I2V with corresponding V2Vs - for use with the speedy 5-step AccVideo and HY loras; best performance with T2V.
v2.6 24.04.2025
Wrapper IV2V node - crank up blocks to swap and upscale to XL resolution
FLF2V - official start-to-end frame model that rocks
Skyreels V2 DF - video extension model that takes the last frames of the provided video and seamlessly extends it - pretty much FramePack, but Wan
v2.5 7.04.2025
Wan wrapper consolidated - separate workflows are likely better, keeping it as proof of concept
v2.1 4.04.2025
InterpV2V nodes added.
v2 3.04.2025
HunYuan - standard tools: T2V, V2V, I2V (including LeapFusion), IV2V (performs poorly)
introduction of Group Nodes
v1. 31.03.2025
initial release