https://github.com/TkskKurumi/DiffusersFastAPI/blob/main/model_as_lora.py
I've attempted to develop an algorithm that calculates a LoRA from the difference of two trained full models. The core is to compress each full ΔW (weight difference) matrix into a LoRA pair.
I think my matrix compression algorithm is not good enough. I'm not good at linear algebra (I don't even know which field of knowledge this task requires).
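For context, here is a rough sketch of the overall pipeline I have in mind. This is illustrative only, not the exact code in model_as_lora.py; the state-dict loading and the `compress_delta` helper (shown further below) are assumptions:

```python
import torch

def extract_deltas(base_path: str, tuned_path: str) -> dict:
    """Compute per-layer weight differences between two full checkpoints."""
    base = torch.load(base_path, map_location="cpu")
    tuned = torch.load(tuned_path, map_location="cpu")
    deltas = {}
    for key, w_base in base.items():
        w_tuned = tuned.get(key)
        # Only 2-D weights (e.g. attention projections) get compressed into a LoRA pair.
        if w_tuned is not None and w_base.ndim == 2:
            deltas[key] = w_tuned.float() - w_base.float()
    return deltas
```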
The goal is to compress a matrix W of shape (m, n) into the matmul of A and B, with shapes (m, rank) and (rank, n). The compression ratio in terms of parameter count is rank*(m+n)/(m*n). I've found that ratio = 25% can nearly represent Fantexi minus my anime model, and ratio = 5% with the LoRA alpha doubled gives an adequate result.
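A minimal sketch of one standard way to do this factorization is truncated SVD, which gives the best rank-r approximation in the Frobenius norm. This is a suggestion of the general technique, not necessarily what model_as_lora.py currently does:

```python
import torch

def compress_delta(delta_w: torch.Tensor, rank: int):
    """Factor delta_w (m, n) into A (m, rank) @ B (rank, n) via truncated SVD."""
    U, S, Vh = torch.linalg.svd(delta_w, full_matrices=False)
    # Keep only the top-`rank` singular values/vectors.
    U_r, S_r, Vh_r = U[:, :rank], S[:rank], Vh[:rank, :]
    # Split each singular value between the two factors so their scales stay balanced.
    A = U_r * S_r.sqrt()             # shape (m, rank)
    B = S_r.sqrt()[:, None] * Vh_r   # shape (rank, n)
    return A, B

# Example: for a 768x768 layer, rank 96 gives ratio = 96*(768+768)/(768*768) = 25%.
delta = torch.randn(768, 768)
A, B = compress_delta(delta, rank=96)
rel_err = torch.linalg.norm(delta - A @ B) / torch.linalg.norm(delta)
```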
I've also tried the algorithm with an image as input, using two matrices of shapes (width, rank) and (rank, height). The following image is originally 1080 pixels in width and height and is compressed down to rank 32.
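For reference, the same kind of low-rank reconstruction on a grayscale image can be sketched like this (the file names are placeholders; Pillow and NumPy assumed, with the image treated as a (height, width) matrix):

```python
import numpy as np
from PIL import Image

def low_rank_image(path: str, rank: int = 32) -> Image.Image:
    """Reconstruct a grayscale image from its rank-`rank` SVD approximation."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)  # (height, width)
    U, S, Vh = np.linalg.svd(img, full_matrices=False)
    approx = (U[:, :rank] * S[:rank]) @ Vh[:rank, :]
    return Image.fromarray(np.clip(approx, 0, 255).astype(np.uint8))

# e.g. low_rank_image("sample_1080.png", rank=32).save("sample_rank32.png")
```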