Wan2.1_14B_FusionX

- Type: Checkpoint Merge (SafeTensor)
- Base Model: Wan Video 14B t2v
- Published: Jun 5, 2025 (updated Jun 12, 2025)
- Hash (AutoV2): F9DCDA1AF2
- Tags: base model, merge, wan, i2v, t2v, wan 14b
πŸŒ€ Wan2.1_14B_FusionX β€” Merged models for Faster, Richer Motion & Detail in as little as 8 steps!

πŸ“Œ Important Details: Please read the full description below, because small changes to settings can produce totally different results, and not in a good way. I have been testing and have already found better settings, so please read on. Thank you :)

πŸ’‘ Workflows can be found HERE. (This is a WIP; more will be added soon.)

πŸ› οΈ The Updates section has been moved to the end of the description.

A high-performance text-to-video model built on top of the base WAN 2.1 14B T2V model, carefully merged with multiple research-grade models to enhance motion quality, scene consistency, and visual detail, comparable to some of the many closed-source models.
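
The exact merge recipe isn't published here, but a checkpoint merge of this kind is typically a weighted, key-by-key average of the source models' state dicts. The sketch below is a minimal illustration of that mechanism only; the file names and ratios are hypothetical, not the actual FusionX recipe.

```python
# Minimal sketch of a weighted checkpoint merge; file names and ratios
# below are hypothetical, NOT the actual FusionX recipe.
import torch
from safetensors.torch import load_file, save_file

def merge_checkpoints(sources, out_path):
    """Average several safetensors checkpoints key by key, with weights."""
    merged = {}
    for path, weight in sources:
        state = load_file(path)  # dict[str, torch.Tensor]
        for key, tensor in state.items():
            merged[key] = merged.get(key, 0) + weight * tensor.float()
    save_file({k: v.to(torch.float16) for k, v in merged.items()}, out_path)

# Hypothetical usage (weights should sum to ~1.0):
# merge_checkpoints([("wan2.1_14B_t2v.safetensors", 0.7),
#                    ("moviigen_1.1.safetensors", 0.3)],
#                   "wan2.1_14B_fusionx.safetensors")
```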

## πŸ“’ Join The Community!

We're building a friendly space to chat, share creations, and get support. I'm also adding a channel with some good motion LoRAs to help you get more motion in your i2v videos, and I'll be adding other goodies here, so please join us :)

πŸ‘‰ Click here to join the Discord!

Come say hi in #welcome, check out the rules, and show off your creations! 🎨🧠

πŸ’‘ What’s Inside this base model:

  • 🧠 CausVid – Causal motion modeling for better scene flow and a dramatic speed boost

  • 🎞️ AccVideo – Improves temporal alignment and realism, along with a speed boost

  • 🎨 MoviiGen1.1 – Brings cinematic smoothness and lighting

  • 🧬 MPS Reward LoRA – Tuned for motion dynamics and detail

  • ✨ Custom LoRAs (by me) – Focused on texture, clarity, and facial details.
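
Several of those components ship as LoRAs, and "baked in" means their low-rank updates were folded directly into the base weights, which is why the notes further down warn against loading them again. Below is a minimal sketch of that fold, W' = W + scale Β· (B Β· A); the ".lora_down"/".lora_up" key names are one common convention, not a guarantee of how these particular LoRAs are keyed.

```python
# Minimal sketch of baking a LoRA into base weights: W' = W + scale * (B @ A).
# Key names are a common convention and may differ per trainer.
import torch

def bake_lora(base: dict, lora: dict, scale: float = 1.0) -> dict:
    """Fold LoRA A/B pairs into the matching base weights, in place."""
    for key in list(lora):
        if key.endswith(".lora_down.weight"):
            prefix = key[: -len(".lora_down.weight")]
            A = lora[key]                         # (rank, in_features)
            B = lora[prefix + ".lora_up.weight"]  # (out_features, rank)
            base[prefix + ".weight"] += scale * (B @ A)
    return base
```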


πŸ”₯ Highlights:

  • πŸ“ Accepts standard prompt + negative prompt setup

  • πŸŒ€ Tuned for high temporal coherence and expressive, cinematic scenes

  • πŸ” Drop-in replacement for WAN 2.1 T2V β€” just better

  • πŸš€ Renders up to 50% faster than the base model (especially with SageAttn enabled)

  • 🧩 Fully compatible with VACE

  • 🧠 Optimized for use in ComfyUI, especially with Kijai's WanVideoWrapper


πŸ“Œ Important Details for text to video:

  • πŸ”§ CFG must be set to 1; anything higher will not produce acceptable results.

  • πŸ”§ Shift – Results can vary based on resolution: at 1024x576 start at 1, and at 1080x720 start at 2. Note: for more realism, use lower shift values; if you're looking for a more stylized look, test higher shift values between 3 and 9.

  • Scheduler: Most of my examples used uni_pc, but you can get different results using others; it's really all about experimenting. Depending on the prompt, flowmatch_causvid works well too and helps with small details. (These values are collected in the sketch below.)
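
To keep those text-to-video numbers in one place, here is a small helper that encodes them as plain data. The function and key names are illustrative only, not an actual wrapper API; enter the values on your ComfyUI sampler node.

```python
# The text-to-video settings above as plain data; key names are illustrative.
def t2v_settings(width: int, height: int, stylized: bool = False) -> dict:
    shift = 1.0 if (width, height) == (1024, 576) else 2.0  # 1080x720 -> 2
    if stylized:
        shift = 5.0  # stylized looks: experiment anywhere in the 3-9 range
    return {
        "cfg": 1.0,             # must stay at 1
        "shift": shift,         # lower values = more realism
        "scheduler": "uni_pc",  # flowmatch_causvid also helps small details
        "steps": 10,            # 8-10 for best quality, 6 for fast drafts
    }
```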

πŸ“Œ Important Details for image to video:

  • πŸ”§ CFG must be set to 1; anything higher will not produce acceptable results.

  • πŸ”§ Shift – For image to video I found that 2 works best, but you can experiment.

  • Scheduler: Most of my examples used dpm++_sde with the beta schedule, which seems to work best, but you can experiment.

  • After testing: to get more motion and reduce the slow-mo look, set your frame count to 121 and frames per second to 24. This can provide up to a 50% motion speed boost (the arithmetic is in the sketch below).
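
Same idea for image-to-video, plus the frame/fps arithmetic behind that last tip: assuming WAN's usual 81-frame, 16 fps default, 121 frames at 24 fps keeps roughly the same five-second clip length while motion plays back 1.5x faster, which is where the "up to 50%" figure comes from. Key names are again illustrative.

```python
# The image-to-video settings above as plain data (key names illustrative).
def i2v_settings() -> dict:
    return {
        "cfg": 1.0,                # must stay at 1
        "shift": 2.0,              # 2 tested best for i2v; experiment
        "scheduler": "dpm++_sde",  # with the beta schedule, per the notes above
        "num_frames": 121,
        "fps": 24,
    }

# Why 121 frames @ 24 fps reduces the slow-mo look, assuming the usual
# 81-frame / 16 fps WAN default:
default_len = 81 / 16   # ~5.06 s of video
new_len     = 121 / 24  # ~5.04 s, so clip length is unchanged
speedup     = 24 / 16   # motion plays back 1.5x faster (~50% boost)
```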

πŸ“ŒOther Important Details:

  • ⚑ Video generation works with as few as 6 steps, but 8–10 steps yield the best quality. Lower steps are great for fast drafts with huge speed gains (see the preset sketch after this list).

  • 🧩 Best results using Kijai's WanVideoWrapper custom node:
    https://github.com/kijai/ComfyUI-WanVideoWrapper

  • πŸ§ͺ Also tested with the native WAN workflow; generation time is a bit longer, but results match the wrapper.

  • ❗ Do not re-add CausVid, AccVideo, or MPS LoRAs β€” they’re already baked into the model and may cause unwanted results.

  • 🎨 You can use other LoRAs for additional styling β€” feel free to experiment.

  • πŸ“½οΈ All demo videos were generated at 1024x576, 81 frames, using only this model β€” no upscaling, interpolation, or extra LoRAs.

  • πŸ–₯️ Rendered on an RTX 5090 β€” each video takes around 138 seconds with the listed settings.

  • 🧠 If you run out of VRAM, enable block swapping: start at 5 blocks and adjust as needed (see the preset sketch after this list).

  • πŸš€ SageAttn was enabled, providing up to a 30% speed boost. (Wrapper only)

  • Workflows for each model can be found HERE.

  • 🚫 Do not use TeaCache; it's unnecessary at these low step counts.

  • πŸ” β€œEnhance a video” and β€œSLG” features were not tested β€” feel free to explore on your own. -- Edit. I did test "Enhance a video" and you can get more vibrant results with this turned on. Settings between 2-4. Experiment! SLG has not been tested much.

  • πŸ’¬ Have questions? You’re welcome to leave a message or join the community:

    • πŸ‘‰ Click here to join the Discord!

  • πŸ“ Want better prompts? All my example video prompts were created using this custom GPT:
    🎬 WAN Cinematic Video Prompt Generator
    Try asking it to add extra visual and cinematic details β€” it makes a noticeable difference.
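
Tying together the step-count, TeaCache, SageAttn, and block-swap notes above, here is a small draft-vs-final preset. It's a minimal sketch: the key names are hypothetical placeholders for the corresponding wrapper node options, not a real API.

```python
# Hypothetical preset pulling the notes above together; key names are
# placeholders for the corresponding wrapper node options, not a real API.
def run_preset(draft: bool = False, low_vram: bool = False) -> dict:
    return {
        "steps": 6 if draft else 10,   # 6 = fast draft, 8-10 = best quality
        "cfg": 1.0,
        "enable_teacache": False,      # unnecessary at these low step counts
        "enable_sage_attention": True, # up to ~30% faster (wrapper only)
        "blocks_to_swap": 5 if low_vram else 0,  # start at 5, raise if still OOM
    }

print(run_preset(draft=True, low_vram=True))
```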


⚠️ Disclaimer:

  • Videos generated using this model are intended for personal, educational, or experimental use only, unless you’ve completed your own legal due diligence.

  • This model is a merge of multiple research-grade sources, and is not guaranteed to be free of copyrighted or proprietary data.

  • You are solely responsible for any content you generate and how it is used.

  • If you choose to use outputs commercially, you assume all legal liability for copyright infringement, misuse, or violation of third-party rights.

When in doubt, consult a qualified legal advisor before monetizing or distributing any generated content.


### 🧠 More GGUF Variants

- πŸ–ΌοΈ [FusionX Image-to-Video (GGUF)]

- πŸŽ₯ [FusionX Text-to-Video (GGUF)]

- 🎞️ [FusionX T2V VACE GGUF (for native)]

- πŸ‘» [FusionX Phantom GGUF (for native)]


### 🧠 fp16 Versions can be found here:

- πŸ–ΌοΈ fp16 FusionX Models



πŸ“Œ GGUF comparisons!
I'm slowly adding to this list, but here you can see how the models compare against the main model.

Text to video:

--------

πŸ› οΈUpdate 6/8/2025 - Image to video model is published! Settings that I use in the example videos: Steps = 10 / cfg = 1 / shift = 2 / schedular = dmp++_sde i'll post a WF soon.

πŸ› οΈUpdate 6/7/2025 - Published a i2v phantom model that can take up to 4 reference images and combine them into a video. Posting workflow soon

Phantom WF is getting uploaded soon.

πŸ› οΈUpdate 6/6/2025 - Added a new gguf model! If you want the highest quality and have enough VRAM get the V1.0 model otherwise gguf is the next best thing! When using the gguf's it will take longer to generate even on an RTX 5090.