Type | Workflows
Stats | 3,211
Reviews | 63
Published | Mar 2, 2025
Base Model |
Hash | AutoV2 EFD10BED12
UPDATE:
- This version uses the new TeaCache and torch compile options. I am getting about a 40-50% speed increase on my renders and will show more results soon (a minimal torch.compile sketch follows this list).
- Quality of life: the workflow is now organised and annotated with notes.
- Controller coming soon.
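For reference, here is a minimal sketch of what the torch compile side of that speedup looks like in plain PyTorch. The model here is just a placeholder, not the actual Wan2.1 network, and TeaCache itself is handled by the WanVideoWrapper nodes rather than anything shown below.

```python
# Minimal sketch: wrapping a model's forward pass with torch.compile.
# "model" is a stand-in placeholder, not the real Wan2.1 diffusion model;
# in the workflow the compile/TeaCache toggles live in the custom nodes.
import torch
import torch.nn as nn

model = nn.Sequential(  # placeholder network for illustration only
    nn.Linear(64, 256), nn.GELU(), nn.Linear(256, 64)
)

# torch.compile traces the module once and reuses the optimized graph on
# every later call, which is where the per-step speedup comes from.
compiled_model = torch.compile(model)

x = torch.randn(1, 64)
with torch.no_grad():
    y = compiled_model(x)  # first call compiles, later calls run faster
```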
This is a workflow I made in ComfyUI. It can run:
Wan2.1 T2V
Wan2.1 I2V
on just an RTX 3050 Laptop edition with 4 GB VRAM + 16 GB RAM.
I am a beginner, but here is what I did:
I used the GGUF custom nodes and models.
Nodes (install them with ComfyUI Manager or clone them yourself; a small clone script is sketched after this list):
https://github.com/calcuis/gguf
https://github.com/kijai/ComfyUI-WanVideoWrapper
https://github.com/FlyingFireCo/tiled_ksampler
https://github.com/kijai/ComfyUI-KJNodes
https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite
https://github.com/rgthree/rgthree-comfy
(rgthree-comfy is only needed for the controller or the Lora Loader Stack; otherwise you only need the models below.)
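If you prefer not to use ComfyUI Manager, here is a small Python sketch for cloning the repos listed above into your ComfyUI install. The custom_nodes path is an assumption; point it at your own ComfyUI folder.

```python
# Clone the custom node repos listed above into ComfyUI/custom_nodes.
# CUSTOM_NODES_DIR is an assumed path; adjust it to your own install.
import subprocess
from pathlib import Path

CUSTOM_NODES_DIR = Path("ComfyUI/custom_nodes")

REPOS = [
    "https://github.com/calcuis/gguf",
    "https://github.com/kijai/ComfyUI-WanVideoWrapper",
    "https://github.com/FlyingFireCo/tiled_ksampler",
    "https://github.com/kijai/ComfyUI-KJNodes",
    "https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite",
    "https://github.com/rgthree/rgthree-comfy",  # only for the controller / Lora Loader Stack
]

for url in REPOS:
    target = CUSTOM_NODES_DIR / url.rstrip("/").split("/")[-1]
    if target.exists():
        print(f"skipping {target.name}, already present")
        continue
    subprocess.run(["git", "clone", url, str(target)], check=True)
```

After cloning, restart ComfyUI so the new nodes get picked up.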
Models: https://huggingface.co/calcuis/wan-gguf
I recommend the Q5_K_M versions; they are fast but still accurate in my testing.
I am still getting usable results with Q2_K.
Install the VAE, CLIP, CLIP Vision, and text encoder from here:
https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/tree/main/split_files
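If you would rather script the downloads than use the browser, here is a hedged sketch using huggingface_hub. The exact filenames below are guesses based on the repo layouts, so check the file lists on both pages and substitute the real names.

```python
# Download the GGUF model plus the repackaged VAE from Hugging Face.
# Filenames marked "assumed" are placeholders; verify them on the repo pages.
from huggingface_hub import hf_hub_download

# GGUF diffusion model (assumed filename; pick the Q5_K_M file actually listed)
hf_hub_download(
    repo_id="calcuis/wan-gguf",
    filename="wan2.1-i2v-14b-480p-q5_k_m.gguf",  # assumed name
    local_dir="ComfyUI/models/diffusion_models",
)

# VAE from the Comfy-Org repackaged repo (path under split_files/ is assumed);
# repeat the same call for the CLIP Vision and text encoder files you need.
hf_hub_download(
    repo_id="Comfy-Org/Wan_2.1_ComfyUI_repackaged",
    filename="split_files/vae/wan_2.1_vae.safetensors",  # assumed name
    local_dir="ComfyUI/models",
)
```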
You should also use tools like xFormers, SageAttention, and Triton, or tweak ComfyUI's settings, to speed things up a bit (see the quick check below).
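A quick way to see which of those acceleration packages are actually installed in the Python environment ComfyUI runs in; this is only an availability check, it does not enable anything by itself.

```python
# Check which optional speed-up packages are importable in this environment.
import importlib.util

for pkg in ("xformers", "sageattention", "triton"):
    found = importlib.util.find_spec(pkg) is not None
    print(f"{pkg}: {'installed' if found else 'not installed'}")
```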
Specs:
It uses a Tiled KSampler and tiled VAE decode (everything is tiled) at about 480p quality, and then an upscaler gets me to a pretty good final result.
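The reason tiling works on 4 GB of VRAM is that only one small tile of the frame has to be in memory at a time instead of the whole image. Here is a rough numpy illustration of that idea; it is not the actual node code, and it skips the tile overlap/blending the real nodes use to hide seams.

```python
# Rough illustration of tiled processing: run an operation tile-by-tile so
# peak memory scales with the tile size, not the full frame. This is only
# the idea behind the Tiled KSampler / tiled VAE decode, not their code.
import numpy as np

def process_tiled(image: np.ndarray, tile: int, fn) -> np.ndarray:
    h, w = image.shape[:2]
    out = np.zeros_like(image)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = image[y:y + tile, x:x + tile]
            # only one tile is ever held/processed at a time
            out[y:y + tile, x:x + tile] = fn(patch)
    return out

frame = np.random.rand(480, 848, 3).astype(np.float32)  # one 480x848 frame
result = process_tiled(frame, tile=128, fn=lambda p: p * 0.5)
print(result.shape)  # (480, 848, 3)
```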
Now for the results:
V1.0
Generating 53 frames at 480x848, plus upscaling to 1080p, at 25 steps took 5.32 hours.
V1.5
Generating 27 frames at 480x848, plus upscaling to 1080p, at 30 steps took 15 minutes.
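Put another way, those two runs work out to roughly the following per-frame cost. The step counts differ (25 vs 30), so this is only a rough comparison, but the arithmetic uses exactly the numbers above.

```python
# Per-frame generation time for the two runs listed above.
v1_0 = 5.32 * 3600 / 53    # V1.0: 5.32 hours for 53 frames
v1_5 = 15 * 60 / 27        # V1.5: 15 minutes for 27 frames
print(f"V1.0: ~{v1_0:.0f} s/frame")                 # ~361 s/frame
print(f"V1.5: ~{v1_5:.0f} s/frame")                 # ~33 s/frame
print(f"speed-up: ~{v1_0 / v1_5:.0f}x per frame")   # ~11x
```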
If you want, I will run some proper tests and post the results.
Notes:
If you want to do text-to-video, use a Hunyuan empty latent video node for the latent.