Published | Apr 9, 2025 |
Hash | AutoV2 CDAB0DEAE9 |
This is a reupload of https://huggingface.co/alibaba-pai/Wan2.1-Fun-1.3B-InP, including an fp8 conversion for people who can't run the 1.3B model in 16-bit precision.
Wan 2.1-Fun-1.3B-InP is an img2vid Wan model with 1.3 billion parameters, trained by Alibaba-PAI and initialized from the 1.3B t2v model. Its weights are similar to those of the 14B i2v models, but at the 1.3B model's size, making it an easy-to-run i2v model that still delivers good quality. It was trained for start- and end-frame inpainting; setting only a start frame lets it do plain i2v. Existing Wan 14b workflows apply.
LoRA training
If you want to use diffusion-pipe for LoRA training, you can use my fork. Make sure you're on the patch-1 branch. There was also an open pull request to merge it into the main repository.
git clone --recurse-submodules https://github.com/gitmylo/diffusion-pipe -b patch-1
Update: the PR has since been merged, so the regular diffusion-pipe repository can be used now:
git clone --recurse-submodules https://github.com/tdrussell/diffusion-pipe