Some Shallow Understanding of LoRA Training

Since many skilled people have already shared excellent and complete LoRA training tutorials, in this article I would just like to share some shallow understanding of LoRA training.

First, judging a LoRA is not easy. But in my view, a user does not want to memorize a long list of instructions, including a suggested strength. So, generally, a good LoRA should be usable at strength 1 without ruining the checkpoint's style when used on its own. That said, for concepts that are rare and hard to train, we should not judge too strictly.

The quality of the dataset is very, very important. Avoid images that are ugly or incorrect as much as possible. If you are training a LoRA with asymmetric features, make sure that all of those features are on the same side. Do not mirror your dataset just to double its size!
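The no-mirroring advice can be sketched as a tiny dataset-collection helper. The folder layout and function name here are assumptions for illustration, not part of any specific trainer's API; the point is simply that the flip step stays off by default.

```python
from pathlib import Path

def collect_images(dataset_dir, mirror=False):
    """Return the list of training images for a dataset folder.

    Deliberately defaults to mirror=False: flipping would put an
    asymmetric feature (e.g. a hair ornament worn on the left) on
    both sides, and the LoRA would learn a contradictory concept.
    """
    images = sorted(p for p in Path(dataset_dir).glob("*.png"))
    if mirror:
        # Doubling the dataset this way is tempting but harmful for
        # asymmetric characters; shown only as the thing NOT to enable.
        images = images + [p.with_name(p.stem + "_flipped.png") for p in images]
    return images
```

Trainers such as kohya's sd-scripts expose the same choice as a flip-augmentation flag; for asymmetric characters, leave it disabled.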

Some people think the backgrounds in the dataset should be removed. That is not really necessary as long as you tag them well. By "not really necessary" I do not mean "never remove backgrounds". Just be flexible. If some backgrounds are hard to tag because they are novel or strange, you can tag them with "outdoors" or "indoors", or use block-weight training and lower the weights of OUT07~OUT09.
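The tagging fallback above can be made mechanical: if a caption carries no background tag, append a generic one instead of erasing the background. The tag names follow booru conventions; the helper itself is hypothetical.

```python
# Generic background tags the article suggests falling back to.
GENERIC_BACKGROUND_TAGS = {"outdoors", "indoors"}

def ensure_background_tag(caption, fallback="outdoors"):
    """Return the comma-separated caption with a background tag guaranteed."""
    tags = [t.strip() for t in caption.split(",") if t.strip()]
    if not GENERIC_BACKGROUND_TAGS.intersection(tags):
        # No background tag present: add the generic fallback so the
        # background is still attributed to *something* during training.
        tags.append(fallback)
    return ", ".join(tags)
```

Running this over all caption files before training ensures every strange background is at least loosely labeled.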

Keep in mind what you want to train, and make what you do not want to train as varied as possible across the dataset. Take a character LoRA as an example. If you want to keep the character's original, official style, you can of course use lots of screenshots from the TV series or game. But if you want to see the character in different styles, you had better not use too many screenshots. What's worse, screenshots from some older shows and games are quite blurry, and your LoRA will learn that blurry style if they make up most of the dataset.

If the character or costume has important details that you want the AI to paint as accurately as possible, you can include some close-ups in the dataset. But remember to tag them with "close-up", "lower body", "xx focus", and so on, and keep them at a fairly low proportion. Otherwise the AI will be more likely to give you close-up images.
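A quick sanity check for the advice above: measure what fraction of your captions carry framing tags, so close-up shots stay a minority of the dataset. The tag set and any threshold you compare against are assumptions you should adjust to your own tagging style.

```python
# Framing tags that mark cropped/close-up shots (assumed booru-style tags).
FRAMING_TAGS = {"close-up", "lower body", "upper body", "face focus"}

def framing_ratio(captions):
    """Fraction of captions containing at least one framing tag."""
    if not captions:
        return 0.0
    hits = sum(
        1 for c in captions
        if FRAMING_TAGS.intersection(t.strip() for t in c.split(","))
    )
    return hits / len(captions)
```

If the ratio creeps toward a half of the dataset, the LoRA will start preferring close-up compositions on its own.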

Use different folders to control the ratio of different concepts. You might think that folders let you train different concepts separately, only to find them mixed together in the end. In fact, datasets in different folders are still trained together; folders are for sorting and for controlling the proportions of the dataset. For example, suppose the character has two outfits, and you have 20 images of outfit A and 30 of outfit B. You can put the former in 6_outfitA and the latter in 4_outfitB so that their effective amounts are balanced, lowering the chance of B polluting A. You can also use this trick to give high-quality or official-art images more repeats, or to reduce the risk of overfitting by giving some images fewer repeats, especially when you want to include SD-generated images in the dataset: a LoRA learns SD-generated images much, much faster than normal images if you do not preprocess them!
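The folder-repeat arithmetic from the example above is worth spelling out. With kohya-style "N_name" folders, each folder contributes images × repeats passes per epoch, so choosing repeats of 6 and 4 makes 20 outfit-A images and 30 outfit-B images weigh equally. The helper below is a sketch of that bookkeeping, not trainer code.

```python
def effective_passes(folders):
    """folders: {folder_name: (num_images, repeats)} -> image-passes per epoch."""
    return {name: n * r for name, (n, r) in folders.items()}

# The example from the text: 20 * 6 == 30 * 4 == 120, so A and B are balanced.
balanced = effective_passes({"6_outfitA": (20, 6), "4_outfitB": (30, 4)})
```

The same calculation tells you how far to drop the repeat count on SD-generated or low-quality images to keep them from dominating an epoch.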
