| | |
|---|---|
| Stats | 1,222 |
| Reviews | (184) |
| Published | Jun 5, 2025 |
| Hash | AutoV2 F9DCDA1AF2 |
# Wan2.1_14B_FusionX: Merged models for Faster, Richer Motion & Detail in as little as 8 steps!
Important Details: please read the full description below, because small changes to the settings will produce totally different results, and not in a good way! I have been testing and have already found better settings, so please read on. Thank you :)
Workflows can be found HERE (this is a WIP; more will be added soon).
The Updates section has been moved to the end of the description.
A high-performance text-to-video model built on top of the base WAN 2.1 14B T2V model, carefully merged with multiple research-grade models to enhance motion quality, scene consistency, and visual detail, comparable to some closed-source models.
## Join The Community!
We're building a friendly space to chat, share creations, and get support. I am also adding a channel with some good motion LoRAs to help get more motion into your i2v videos, and I'll be adding other goodies, so please join us :)
Click here to join the Discord!
Come say hi in #welcome, check out the rules, and show off your creations!
What's Inside this base model:
- CausVid: causal motion modeling for better scene flow and a dramatic speed boost
- AccVideo: improves temporal alignment and realism, along with a speed boost
- MoviiGen1.1: brings cinematic smoothness and lighting
- MPS Reward LoRA: tuned for motion dynamics and detail
- Custom LoRAs (by me): focused on texture, clarity, and facial details
Highlights:
- Accepts standard prompt + negative prompt setup
- Tuned for high temporal coherence and expressive, cinematic scenes
- Drop-in replacement for WAN 2.1 T2V, just better
- Renders up to 50% faster than the base model (especially with SageAttn enabled)
- Fully compatible with VACE
- Optimized for use in ComfyUI, especially with the Kijai Wan Wrapper
Important Details for text to video:
CFG must be set to 1; anything higher will not produce acceptable results.
Shift: results can vary with resolution. At 1024x576 start at 1; at 1080x720 start at 2. Note: lower shift values give more realism; if you're looking for a more stylized look, test higher shift values between 3 and 9.
Scheduler: most of my examples used uni_pc, but you can get different results with others; it's really all about experimenting. Depending on the prompt, flowmatch_causvid also works well and helps with small details.
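The text-to-video recommendations above can be condensed into a small settings sketch. This is purely illustrative: the key names are hypothetical, not actual ComfyUI node inputs.

```python
# Hypothetical summary of the recommended T2V settings above.
# Key names are illustrative, not actual ComfyUI node inputs.
t2v_settings = {
    "cfg": 1.0,              # must stay at 1; higher values degrade output
    "steps": 8,              # 6 for fast drafts, 8-10 for best quality
    "scheduler": "uni_pc",   # flowmatch_causvid can also help with small details
    "shift": {               # starting points; lower = more realism,
        (1024, 576): 1,      # higher (3-9) = more stylized
        (1080, 720): 2,
    },
}

def shift_for(width: int, height: int) -> int:
    """Look up the suggested starting shift for a resolution (default 1)."""
    return t2v_settings["shift"].get((width, height), 1)

print(shift_for(1024, 576))  # -> 1
print(shift_for(1080, 720))  # -> 2
```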
Important Details for image to video:
CFG must be set to 1; anything higher will not produce acceptable results.
Shift: for image to video I found that 2 works best, but you can experiment.
Scheduler: most of my examples used dpm++_sde/beta, which seems to work best, but you can experiment.
After testing, to get more motion and reduce the slow-mo look, set your frame count to 121 and frames per second to 24. This can provide up to a 50% motion speed boost.
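The frame-count tip above checks out arithmetically if we assume WAN 2.1's usual 16 fps output (an assumption, stated here for illustration): raising both frames and fps keeps the clip length roughly constant while motion plays back 1.5x, i.e. up to 50%, faster.

```python
# Sanity-check the motion-speed claim above.
# Assumes WAN 2.1's default output is 81 frames at 16 fps (an assumption).
def clip_seconds(frames: int, fps: int) -> float:
    """Playback duration of a clip in seconds."""
    return frames / fps

default_clip = clip_seconds(81, 16)   # typical setup
tuned_clip = clip_seconds(121, 24)    # recommended setup above
motion_speedup = 24 / 16              # playback rate relative to 16 fps

print(round(default_clip, 2), round(tuned_clip, 2), motion_speedup)
# -> 5.06 5.04 1.5  (nearly the same length, 50% faster motion)
```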
Other Important Details:
- Video generation works with as few as 6 steps, but 8-10 steps yield the best quality. Lower steps are great for fast drafts with huge speed gains.
- Best results using the Kijai Wan Wrapper custom node: https://github.com/kijai/ComfyUI-WanVideoWrapper
- Also tested with the native WAN workflow; generation time is a bit longer, but results match the wrapper.
- Do not re-add CausVid, AccVideo, or MPS LoRAs; they're already baked into the model and may cause unwanted results.
- You can use other LoRAs for additional styling; feel free to experiment.
- All demo videos were generated at 1024x576, 81 frames, using only this model: no upscaling, interpolation, or extra LoRAs.
- Rendered on an RTX 5090; each video takes around 138 seconds with the listed settings.
- If you run out of VRAM, enable block swapping; start at 5 blocks and adjust as needed.
- SageAttn was enabled, providing up to a 30% speed boost (wrapper only).
- Workflows for each model can be found HERE.
- Do not use TeaCache; it's unnecessary at this low step count.
- "Enhance a video" and "SLG" features were not tested at first; feel free to explore on your own. Edit: I did test "Enhance a video", and you can get more vibrant results with it turned on, with settings between 2 and 4. Experiment! SLG has not been tested much.
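As a rough guide to the step/speed trade-off above, here is a back-of-the-envelope estimate. It assumes the quoted ~138 s render used 10 steps and that sampling time scales linearly with step count; both are illustrative assumptions, not measured figures.

```python
# Rough draft-time estimate from the ~138 s figure quoted above.
# Assumes that render used 10 steps and that sampling time scales
# linearly with step count (illustrative assumptions, not measurements).
def estimate_seconds(steps: int, ref_seconds: float = 138.0, ref_steps: int = 10) -> float:
    return ref_seconds * steps / ref_steps

for steps in (6, 8, 10):
    print(steps, round(estimate_seconds(steps), 1))
# -> 6 82.8
#    8 110.4
#    10 138.0
```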
Have questions? You're welcome to leave a message or join the community:
Click here to join the Discord!
Want better prompts? All my example video prompts were created using this custom GPT:
WAN Cinematic Video Prompt Generator
Try asking it to add extra visual and cinematic details; it makes a noticeable difference.
Disclaimer:
Videos generated using this model are intended for personal, educational, or experimental use only, unless you've completed your own legal due diligence.
This model is a merge of multiple research-grade sources, and is not guaranteed to be free of copyrighted or proprietary data.
You are solely responsible for any content you generate and how it is used.
If you choose to use outputs commercially, you assume all legal liability for copyright infringement, misuse, or violation of third-party rights.
When in doubt, consult a qualified legal advisor before monetizing or distributing any generated content.
### More GGUF Variants
- [FusionX Image-to-Video (GGUF)]
- [FusionX Text-to-Video (GGUF)]
- [FusionX T2V VACE GGUF (for native)]
- [FusionX Phantom GGUF (for native)]
### fp16 Versions can be found here:
- fp16 FusionX Models
GGUF comparisons!
I'm slowly adding to this list, but here you can see how the models compare against the main model.
Text to video:
--------
Update 6/8/2025: The image to video model is published! Settings that I use in the example videos: steps = 10 / cfg = 1 / shift = 2 / scheduler = dpm++_sde. I'll post a WF soon.
Update 6/7/2025: Published an i2v Phantom model that can take up to 4 reference images and combine them into a video. The Phantom WF is getting uploaded soon.
Update 6/6/2025: Added a new GGUF model! If you want the highest quality and have enough VRAM, get the V1.0 model; otherwise GGUF is the next best thing! Note that the GGUFs take longer to generate, even on an RTX 5090.