Update 4-3-2025:
city96 has published a full set of GGUFs for this model, which you can find here if you are looking for a different size:
https://huggingface.co/city96/Wan2.1-Fun-14B-Control-gguf/tree/main
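If you want to grab a specific quant by direct link, Hugging Face serves repo files at a predictable `resolve` path. A small helper sketch (the filename below is illustrative, not confirmed; check the repo's file list for the exact names):

```python
def hf_file_url(repo_id: str, filename: str, revision: str = "main") -> str:
    """Build the direct-download URL for a file in a Hugging Face repo."""
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"

# Example (filename is illustrative -- see the repo's "Files and versions" tab):
url = hf_file_url(
    "city96/Wan2.1-Fun-14B-Control-gguf",
    "Wan2.1-Fun-14B-Control-Q5_K_M.gguf",
)
print(url)
```

The same URL works with `wget`/`curl`, or you can let `huggingface_hub` handle caching for you instead.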
____
This is a Q5_K_M quant of the Fun Control model, which you can find here:
https://huggingface.co/alibaba-pai/Wan2.1-Fun-14B-Control/blob/main/README_en.md
This model lets you use pose, depth, or other control info to guide your video (see the Hugging Face page for details).
I made it using instructions from here:
https://github.com/city96/ComfyUI-GGUF/tree/convert_refactor_new
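For reference, the conversion roughly follows that branch's instructions: convert the source safetensors to a full-precision GGUF, then quantize it with a patched llama.cpp. The sketch below is an assumption based on those docs, not the exact commands I ran; verify script names and flags against the branch README before using it:

```shell
# Assumed workflow per the ComfyUI-GGUF convert instructions; paths and
# flags are placeholders -- check the branch README before running.
git clone -b convert_refactor_new https://github.com/city96/ComfyUI-GGUF
cd ComfyUI-GGUF

# 1. Convert the source safetensors to a GGUF:
python tools/convert.py --src /path/to/Wan2.1-Fun-14B-Control.safetensors

# 2. Quantize to Q5_K_M with llama.cpp's quantize tool (patched per the repo):
llama-quantize /path/to/model-F16.gguf /path/to/model-Q5_K_M.gguf Q5_K_M
```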
I've tested it on a few videos; it seems to work, and on my 3090 it is much faster than the much larger fp8 quant by Kijai, which you can find here:
https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Wan2.1-Fun-Control-14B_fp8_e4m3fn.safetensors
To run it, you'll need the WanFunControlToVideo node in ComfyUI core, plus the ComfyUI-GGUF custom node to load the GGUF.