This is a lightly modified version of the official ComfyUI Wan 2.2 5B workflow. I added a GGUF model loader and clear-VRAM nodes at several stages of generation to help prevent out-of-memory (OOM) errors on machines without much VRAM. With a Q8 quant of Wan 2.2 5B, generating 33 frames at 704x1024 peaks at 11.8 GB of VRAM. If you have a lower-end machine, drop down to a Q5 or Q4 quant; QuantStack hosts quants of Wan 2.2 5B on their Hugging Face page.