
(I am a layperson with no formal education in art or AI, though I have been learning and exploring continuously and exchanging experience with various experts. I care strongly about getting things right, so I aim to share only accurate AI knowledge in my space. The details are in the document below and are meant to be consulted piecemeal, as you run into each issue.)

1. LORA:

For anime character LORAs, the best weight is simply 1. While fitting problems can be mitigated by adjusting the weight, doing so has side effects. For example, most anime character LORAs on Civitai are overfitted, and the usual advice is to lower the weight to 0.6–0.8; this helps, but it also strips away some of the character's original appearance.
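As a rough illustration of why weight 1 reproduces the character exactly as trained: a LORA stores a low-rank delta, and the strength slider simply scales that delta before it is added to the base weights. Below is a minimal numpy sketch; the `apply_lora` helper and the toy shapes are my own illustration, not any particular library's API.

```python
import numpy as np

def apply_lora(W, A, B, weight=1.0):
    """Merge a LORA delta into a base weight matrix.

    W      : base model weight, shape (out, in)
    A, B   : low-rank LORA factors, shapes (r, in) and (out, r)
    weight : the user-facing LORA strength; 1.0 reproduces the
             weights exactly as they were trained
    """
    return W + weight * (B @ A)

# Toy example: a 4x4 base weight and a rank-1 LORA delta.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
A = rng.standard_normal((1, 4))
B = rng.standard_normal((4, 1))

full = apply_lora(W, A, B, weight=1.0)    # character as trained
damped = apply_lora(W, A, B, weight=0.6)  # weaker, loses some learned traits

# At weight 0.6 the merged weights sit only 60% of the way toward
# the trained result, which is why appearance details fade.
assert np.allclose(damped - W, 0.6 * (full - W))
```

This is also why lowering the weight mitigates overfitting: the rigid, over-trained delta is applied only partially.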

Overfitting, in plain terms: excessive training makes a LORA rigid, so characters stop obeying tags, and the model may even reproduce images from its original training material.

2. CKP:

Both CLIP offset and UNet overfitting can cause a model to stop obeying tags.

CLIP offset causes tag-recognition problems. Quite a few models on Civitai have this issue without their authors realizing it. If you are interested, the "Other" section below points to ways of checking for it and applying a simple repair.
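For context on what a CLIP offset check looks like: community repair tools inspect the text encoder's position-ID tensor, which should hold the exact integers 0 through 76; repeated fp16 save/merge round-trips can corrupt single entries and shift how tags are tokenized. The numpy sketch below mimics such a check and a blunt repair. The helper names and the specific 41→40 drift are illustrative assumptions, not the actual tool.

```python
import numpy as np

# The CLIP text encoder's position_ids tensor should be exactly 0..76.
EXPECTED = np.arange(77, dtype=np.int64)

def check_position_ids(position_ids):
    """Return the indices whose value has drifted away from i."""
    ids = np.asarray(position_ids).reshape(-1).astype(np.int64)
    return np.nonzero(ids != EXPECTED)[0]

def repair_position_ids(position_ids):
    """Naive repair: reset the tensor to the canonical 0..76 sequence."""
    return EXPECTED.copy().reshape(np.asarray(position_ids).shape)

# A healthy tensor, and one with a typical single-entry drift (41 -> 40).
healthy = np.arange(77)
drifted = np.arange(77)
drifted[41] = 40

assert check_position_ids(healthy).size == 0
assert list(check_position_ids(drifted)) == [41]
assert np.array_equal(repair_position_ids(drifted), healthy)
```

A drifted entry makes every token after it sit in the wrong position, which is why the model appears to "mishear" tags rather than fail outright.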

UNet overfitting also makes a model rigid: it ignores tags, and may even produce good-looking images with no tags at all. This is the overfitting case where the model spits out near-copies of its training images.

Merged (fusion) ckpt models are good at raising the floor of output quality, but their tag weights are messy, which gives each merge its own "character." For the same reason, LORAs trained on such merged models rarely transfer well to other models.

If you want a base model to accept LORAs and other models well, avoid merging LORAs in when you merge ckp models; or, as I do after weighing the trade-offs, merge them in only at low weight to patch the unsatisfactory parts.
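To make "merge at low weight" concrete: a checkpoint merge is just a per-tensor weighted sum of two models' state dicts. A minimal sketch, with plain dicts of arrays standing in for real state dicts and a made-up tensor name:

```python
import numpy as np

def merge_checkpoints(sd_a, sd_b, alpha=0.3):
    """Weighted-sum merge of two state dicts (tensor name -> array).

    alpha is model B's contribution; keeping it low, as the text
    suggests, limits how much B disturbs A's tag behaviour.
    """
    merged = {}
    for name, tensor_a in sd_a.items():
        merged[name] = (1.0 - alpha) * tensor_a + alpha * sd_b[name]
    return merged

# Toy state dicts standing in for two ckpt files.
sd_a = {"unet.w": np.zeros((2, 2))}
sd_b = {"unet.w": np.ones((2, 2))}

out = merge_checkpoints(sd_a, sd_b, alpha=0.25)
assert np.allclose(out["unet.w"], 0.25)
```

Merging a LORA into a ckpt works the same way on the affected tensors, which is why a high-weight LORA merge bakes its quirks into every later use of the model.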

3. VAE:

A CKP already contains its own VAE; an external VAE does not add to it but replaces it.

A change in saturation is only the most visible effect of a VAE, not the whole of what it does; when generating images it also affects composition, detail, and more.
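The "replace, not supplement" point can be seen in how tools load an external VAE: they overwrite the checkpoint's own VAE tensors in place. A sketch under the assumption that the baked-in VAE lives under the `first_stage_model.` key prefix (as in SD 1.x checkpoints); the helper and toy key names are illustrative:

```python
import numpy as np

# Assumed layout: the checkpoint's baked-in VAE tensors share one prefix.
VAE_PREFIX = "first_stage_model."

def swap_vae(ckpt_sd, vae_sd):
    """Return a copy of ckpt_sd with its VAE tensors replaced."""
    out = dict(ckpt_sd)
    for name, tensor in vae_sd.items():
        out[VAE_PREFIX + name] = tensor  # overwrite, never append
    return out

ckpt = {
    "model.diffusion_model.w": np.zeros(2),
    "first_stage_model.decoder.w": np.zeros(2),  # baked-in VAE
}
external_vae = {"decoder.w": np.ones(2)}

patched = swap_vae(ckpt, external_vae)
assert np.allclose(patched["first_stage_model.decoder.w"], 1.0)
assert np.allclose(patched["model.diffusion_model.w"], 0.0)  # UNet untouched
```

Only one VAE ever decodes the latents, so two checkpoints with the same UNet but different VAEs can render the same latent with different colour, detail, and even compositional artefacts.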

4. Other:

A model that takes more disk space is not necessarily better; many models carry piles of useless data, so downloading them just wastes bandwidth and storage.
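As an example of where that useless data comes from: pruning tools shrink a shared checkpoint by dropping tensors that image generation never reads, such as duplicate EMA copies of the weights or leftover training state. A hypothetical sketch; the specific junk prefixes are illustrative assumptions:

```python
import numpy as np

# Two common sources of wasted space in shared checkpoints: duplicate
# EMA copies of the weights and leftover optimizer state from training.
JUNK_PREFIXES = ("model_ema.", "optimizer_states.")

def prune_checkpoint(sd):
    """Drop tensors a generation-only checkpoint does not need."""
    return {name: tensor for name, tensor in sd.items()
            if not name.startswith(JUNK_PREFIXES)}

sd = {
    "model.diffusion_model.w": np.zeros(4),
    "model_ema.w": np.zeros(4),            # duplicate EMA copy
    "optimizer_states.step": np.zeros(1),  # training leftover
}
pruned = prune_checkpoint(sd)
assert set(pruned) == {"model.diffusion_model.w"}
```

An unpruned checkpoint can be several times the size of its pruned twin while generating identical images, which is why file size says little about quality.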

The quality of sample images cannot establish a model's quality either. Beyond the author's own aesthetics, you cannot tell how many extra models and plugins assisted those images, nor how many txt2img and img2img passes they took. Download counts and likes are no better a measure: likes track downloads, and downloads track the author's fame, the cover image's appeal, the character's popularity, the art style's audience, the model type's audience, and so on.

The newest version of a model is not always the best; often the versions are just adjustments of one base version in different directions. Some authors even push meaningless updates to ride the update feed for extra downloads, while in actual use the model's quality just goes round in circles.

For more specialized and detailed knowledge, see the introduction section of 万象熔炉 | Anything V5/Ink (model page: 万象熔炉 | Anything V5/Ink - V3.2++[ink] | Stable Diffusion Checkpoint | Civitai). The links for checking and repairing CLIP offset are in the earlier part of this document.