
Rebasing your Lora to have recommended weight = 1.0

You can rebase your Lora so that it works best at weight 1.0.

Lora training asks you to input a network_dim and a network_alpha. What do these two parameters actually do? Videos online struggle to explain them, but they do give recommended values.

It turns out network_dim is the inner dimension (rank) of the low-rank matrices (128, for example), while network_alpha is just a scaling coefficient: the whole Lora update is multiplied by network_alpha / network_dim (a parameter called 'scale' in lora.py). Setting network_alpha to half of network_dim means you expect your Lora to work best at weight 0.5 and are scaling it in advance, while network_alpha == network_dim means you expect weight 1.0 and pre-scale for that. Because training can compensate for this multiplier, in practice it mainly affects the magnitude of the random init (Lora uses the init recommended in the original paper); set network_alpha lower if the first few iterations of your Lora look completely mangled.
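To make the scaling concrete, here is a minimal sketch of how the multiplier enters the forward pass. Only the scale = network_alpha / network_dim formula mirrors lora.py; the layer size, init constants, and function name are illustrative assumptions, not code from the trainer.

import torch

dim, alpha = 128, 64                      # network_dim, network_alpha
scale = alpha / dim                       # 0.5: the update is pre-scaled for weight 0.5
lora_down = torch.randn(dim, 320) * 0.01  # down projection, random init (illustrative layer size)
lora_up = torch.zeros(320, dim)           # up projection, zero init, as in the LoRA paper

def forward_with_lora(x, base_weight, weight=1.0):
    delta = (x @ lora_down.T) @ lora_up.T  # the low-rank update
    return x @ base_weight.T + weight * scale * delta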

Type 'python' to open a Python console in your SD environment, then run the following:

from safetensors import safe_open
from safetensors.torch import save_file

YOUR_INPUT_FILE = 'init_train_2-000224.safetensors'
YOUR_OUTPUT_FILE = 'init_train_2-000224-rebase.safetensors'
YOUR_RECOMMENDED_WEIGHT = 0.4  # the weight your Lora currently works best at

weights = safe_open(YOUR_INPUT_FILE, framework="pt")
len(weights.keys())  # number of tensors, should be 792 for this Lora (3 per module: lora_down, lora_up, alpha)
len([k for k in weights.keys() if k.endswith(".alpha")])  # number of multipliers, should be 264 = 792/3

# scale every alpha multiplier by the recommended weight; leave the actual matrices untouched
weights_load = {k: (weights.get_tensor(k)*YOUR_RECOMMENDED_WEIGHT if k.endswith('.alpha') else weights.get_tensor(k)) for k in weights.keys()}

save_file(weights_load, YOUR_OUTPUT_FILE)
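Note that save_file as called above writes only the tensors: any metadata stored in the file header (kohya's trainer records its training settings there) is dropped. If you want to keep it, this variant of the save should work; treat it as a hedged sketch reusing the same console session:

metadata = weights.metadata()  # header metadata as a dict of strings, may be None
save_file(weights_load, YOUR_OUTPUT_FILE, metadata=metadata)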

exit()  # exit the Python console

You can now run a prompt matrix in Stable Diffusion to check that your old Lora at its recommended weight and the new Lora at weight 1.0 give the same result. (It could differ slightly because of machine precision, but the results should be indistinguishable in most cases.)
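If you want a numeric sanity check before opening the UI, you can also compare the alpha tensors of the two files directly. This sketch assumes the variables from the console session above:

import torch
from safetensors import safe_open

old = safe_open(YOUR_INPUT_FILE, framework="pt")
new = safe_open(YOUR_OUTPUT_FILE, framework="pt")
for k in old.keys():
    if k.endswith(".alpha"):
        assert torch.allclose(new.get_tensor(k), old.get_tensor(k) * YOUR_RECOMMENDED_WEIGHT)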
