Beauty of rain [Wan 2.2 T2V-A14B]
https://civitai.com/models/1747192?modelVersionId=2462055
Trigger Word: b3@ut1f0ll_r@in
Model: Wan 2.2 T2V-A14B
All examples were generated at 1.0 strength with a resolution of 848 x 480.
Diffusion-pipe was used for training.
Historical background
I had previously trained LoRAs for Wan 2.2. The first experimental one was [Hollow knight], where my main goal was simply to train and test with lightning 4-step. Since the dataset was almost completely static, there weren't many problems. Then I trained the comic LoRA [OIIA], where I didn't care about artifacts; the most important thing was that the cats were spinning. I ended up with a very cool HIGH LoRA that spins very realistic cats. Only after that did I remember my old [Beauty of rain] LoRA for Wan 2.1 and decide to retrain it for 2.2, and my main motivation was to keep the style as close as possible to 2.1. And that's where the problems started...
Features of LoRA training for Wan 2.2
The following are just my observations from training on this particular dataset. You can read more about the dataset itself in this article.
Different combinations of HIGH and LOW give different results
Previously, I didn't worry too much, and the training principle was the same as for Wan 2.1. The only difference was that instead of one version I trained two sequentially (HIGH and LOW) with the same parameters and simply compared the results at the same epochs. A typical list of versions looked like this:
high epoch 10 + low epoch 10
high epoch 20 + low epoch 20
...
high epoch 90 + low epoch 90

But this stopped working for Beauty of rain. Following the same principle here, the result came out too realistic, with rapid quality degradation. The examples below and further on are for rank 32, lr = 5e-05.

As far as I understand, this is because the LOW version overtrains much faster than the HIGH version, so ideally you need to take them from different epochs. After that, I decided to check what would happen if I loaded only the HIGH version:

or only the LOW version in Wan 2.2:

The HIGH version has a stronger effect on the result, but neither one on its own ever caused quality degradation as severe as before. Clearly, it is only the combination of the two LoRAs that breaks the output. And for the best result you have to sort through their different combinations 😵💫, both combinations of epochs and combinations of training parameters.
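Since the LOW version overtrains faster, a reasonable starting point is to pair a later HIGH epoch with an earlier LOW epoch instead of matching epochs one-to-one. Here is a minimal sketch of that idea; the epoch step and the offset are only illustrative numbers, not values I verified:

```python
# Sketch: candidate (HIGH, LOW) epoch pairings.
# The epoch step and the offset are illustrative, not verified values.

EPOCHS = list(range(10, 100, 10))           # checkpoints saved every 10 epochs (example)

# Old Wan 2.1-style pairing: same epoch for both halves.
same_epoch_pairs = [(e, e) for e in EPOCHS]

def offset_pairs(epochs, low_lag):
    """Pair each HIGH epoch with a LOW epoch that is `low_lag` epochs earlier,
    since the LOW half overtrains faster."""
    return [(high_ep, high_ep - low_lag) for high_ep in epochs
            if (high_ep - low_lag) in epochs]

print(same_epoch_pairs)                     # (10, 10), (20, 20), ...
print(offset_pairs(EPOCHS, low_lag=20))     # (30, 10), (40, 20), ...
```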
I have trained 6 versions for this dataset:
1. HIGH rank 32, lr = 5e-05
2. HIGH rank 32, lr = 7e-05
3. HIGH rank 32, lr = 1e-04
4. LOW rank 32, lr = 5e-05
5. LOW rank 32, lr = 7e-05
6. LOW rank 32, lr = 1e-04

Saving a checkpoint every 6 epochs, I ended up with 16 LoRA files per version. Even without combining the LoRAs it was clearly visible that lr = 1e-04 was no good, so I threw it out and was left with 4 versions. Then I decided to stupidly brute-force all the combinations at low resolution on one particular prompt 🤡, which I highly recommend you do NOT do. I eventually killed the script after it had generated 956 small videos... Honestly, I didn't have the moral strength to watch them all; in the end I just judged each video by its preview. After that I more or less understood the range of LoRAs whose combinations land in the style I want. There were about 18 such combinations. Then I simply looked at how those combinations behave on other prompts and picked the best one.
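Conceptually, that brute-force loop boils down to something like the sketch below. render_preview() is a hypothetical stub standing in for whatever inference backend you use (a ComfyUI API call, a diffusers pipeline, and so on), and the directory layout, file names, and prompt are made up for illustration:

```python
# Sketch of the brute-force search over HIGH x LOW checkpoint combinations.
# render_preview() is a hypothetical stub -- plug in your own inference backend.
# Directory layout, file names, and the prompt are made up for illustration.
import itertools
from pathlib import Path

HIGH_DIR = Path("loras/high")    # e.g. high_r32_lr5e-05_ep006.safetensors, ...
LOW_DIR = Path("loras/low")
PROMPT = "a girl walking through the city under heavy rain, b3@ut1f0ll_r@in"
SEED = 42

def render_preview(high_lora: Path, low_lora: Path, prompt: str, seed: int, out: Path) -> None:
    """Hypothetical: render one low-resolution preview video for this LoRA pair."""
    print(f"[stub] seed={seed}: {high_lora.name} + {low_lora.name} -> {out}")

high_ckpts = sorted(HIGH_DIR.glob("*.safetensors"))
low_ckpts = sorted(LOW_DIR.glob("*.safetensors"))

out_dir = Path("previews")
out_dir.mkdir(exist_ok=True)

# Every HIGH checkpoint paired with every LOW checkpoint -- the count explodes
# fast, which is exactly why I do not recommend doing this head-on.
for high, low in itertools.product(high_ckpts, low_ckpts):
    out = out_dir / f"{high.stem}__{low.stem}.mp4"
    if out.exists():
        continue                 # lets you resume after killing the script
    render_preview(high, low, PROMPT, seed=SEED, out=out)
```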
The LoRA may not work at some resolutions
It is completely normal for diffusion models to produce different results at different resolutions with the same prompt and seed. Still, here is the rule that worked for Wan 2.1: if you tune the training parameters at 192p, the same parameters will most likely work at 720p, and the results at different resolutions will stay in the same style. So you could safely train at 480p and run inference at 720p; if the LoRA was trained correctly, the style was preserved.
Something strange is happening with Wan 2.2: parameters that behave well when training at low resolution can still lead to severe overfitting at high resolution. Due to GPU limitations I cannot train at the native HD resolution, so I use several smaller resolutions. The LoRA has never seen HD, so if you run HD inference with it, you will probably see hardly any effect at all. This is very strange, because it seems to me that I found a good combination of HIGH and LOW LoRAs and they should work at HD, yet for some reason they don't. Also, when multiple resolutions are used during training, artifacts appear: very small, symmetrical splashes of water. They look extremely unnatural, but I couldn't get rid of them completely, because in the cases where they weren't visible, the style was already too realistic. They are somewhat similar to the artifacts that appear on the 5B version when training at non-standard resolutions. Hmm, what if the Wan 2.2 weights were fine-tuned at HD resolution? Who the hell knows why this happens; maybe I'm just at the beginning of mastering the fine art of training LoRAs for Wan 2.2 and don't know much yet.
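A simple way to sanity-check whether a chosen HIGH+LOW pair keeps its style across resolutions is to fix the prompt and seed and sweep the resolution, then compare the clips side by side. Again a minimal sketch: generate_clip() is a hypothetical stub, and the resolutions and file names are examples only.

```python
# Sketch: check one chosen HIGH+LOW pair at several resolutions with a fixed
# prompt and seed. generate_clip() is a hypothetical stub; the resolution list
# and file names are examples only.
from pathlib import Path

def generate_clip(high_lora: Path, low_lora: Path, prompt: str,
                  width: int, height: int, seed: int, out: Path) -> None:
    """Hypothetical: render one clip at the given resolution."""
    print(f"[stub] {width}x{height}, seed={seed}: {high_lora.name} + {low_lora.name} -> {out}")

RESOLUTIONS = [(480, 272), (848, 480), (1280, 720)]   # (width, height), examples
PROMPT = "a girl walking through the city under heavy rain, b3@ut1f0ll_r@in"
SEED = 42

high = Path("loras/high/high_r32_lr5e-05_ep042.safetensors")   # hypothetical names
low = Path("loras/low/low_r32_lr7e-05_ep030.safetensors")

for width, height in RESOLUTIONS:
    out = Path("res_check") / f"{width}x{height}.mp4"
    out.parent.mkdir(parents=True, exist_ok=True)
    generate_clip(high, low, PROMPT, width, height, SEED, out)
```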
Conclusion
The LoRA for Wan 2.2 is trained, although it is not perfect. I couldn't get rid of this white noise, but this LoRA really fucked me up, so I decided to publish it in the state it's in now. Here is its training loss. You can find the detailed training parameters in the "About this version" section on the model page.


![Beauty of rain [Wan 2.2 T2V-A14B]](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/64692d06-a509-41a2-8aca-e99f98cbad51/width=1320/vlcsnap-2025-12-01-20h13m30s882.jpeg)