
AnythingQingMix

Updated: Oct 5, 2024
Tags: style, anime, female, gay, bara, yaoi
Verified: SafeTensor
Type: Checkpoint Merge
Stats: 9,222
Published: Aug 29, 2023
Base Model: SD 1.5
Usage Tips: Clip Skip: 2
Hash (AutoV2): 5178BF4B9B
_Qing_

Using this model for any commercial or illegal purpose is forbidden, as is reposting it without permission. It is shared only as a showcase of results; violators bear the consequences themselves!

(My QQ group: 235392155. For commissioned LoRA training or ckpt merge tuning, add my QQ: 2402799912)

I also have models on the Chinese sites tusi.art and liblibai.com; the creator incentives differ from platform to platform, so please follow and support me there as well! (I will post a portion of those images in this model's sample gallery.)

[Xianyu] https://m.tb.cn/h.5V7ITvv?tk=eY01dB4UcnH

This is my Afdian page, to help cover the electricity bill QAQ. Thank you for your support~

https://afdian.net/a/_Qing_

1. Overview of Model Features

V3:

Update operations:

1. Used the MBW (Merge Block Weighted) extension to merge part of basil mix's input and middle blocks, with the ratios found by exhaustive search

2. Used the MBW extension to merge the output blocks of a few models at very low weights, again found by exhaustive search (a conceptual sketch of a block-weighted merge follows this list)
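
For anyone curious what this looks like in practice, below is a minimal, hypothetical sketch of the idea behind an MBW-style merge: instead of one global merge ratio, each UNet block group (input / middle / output) gets its own interpolation weight. The file names and weight values are placeholders, not the actual V3 recipe.

```python
# Hypothetical MBW-style block-weighted merge of two SD 1.x checkpoints.
# Each UNet block group gets its own interpolation weight instead of one global ratio.
import torch
from safetensors.torch import load_file, save_file

def block_weight(key: str, w_in: float, w_mid: float, w_out: float, w_other: float) -> float:
    """Pick the merge weight for a tensor based on which UNet block group it belongs to."""
    if "model.diffusion_model.input_blocks" in key:
        return w_in
    if "model.diffusion_model.middle_block" in key:
        return w_mid
    if "model.diffusion_model.output_blocks" in key:
        return w_out
    return w_other  # text encoder, VAE, embeddings, etc.

def mbw_merge(path_a: str, path_b: str, out_path: str) -> None:
    a, b = load_file(path_a), load_file(path_b)
    merged = {}
    for key, tensor_a in a.items():
        if key in b and b[key].shape == tensor_a.shape:
            # Placeholder weights: blend B into A's input/middle blocks, barely touch outputs.
            w = block_weight(key, w_in=0.3, w_mid=0.3, w_out=0.05, w_other=0.0)
            merged[key] = ((1.0 - w) * tensor_a.float() + w * b[key].float()).half()
        else:
            merged[key] = tensor_a  # keep A's tensor where B has nothing matching
    save_file(merged, out_path)

# mbw_merge("model_a.safetensors", "basil_mix.safetensors", "merged.safetensors")
```

An exhaustive search, as mentioned above, would just loop over candidate weight sets like these and compare sample renders by eye.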

Effects of the update:

1. Improved tag accuracy

2. Higher saturation and a more realistic feel to the image

3. Better lighting and shadows

4. More accurate limbs

5. Better likeness when using character LoRAs

Notes:

1. Using a VAE such as animevae is recommended to avoid over-saturation (the sample images in the gallery use that VAE and 84000, respectively)

2. Better tag accuracy also means the prompts have to be written more carefully; if V3 is hard to get the hang of, try V2 or V1

V2:

(If you only like mature, muscular men, or want something simpler to use, try V1.)

Update operations:

1. Replaced the VAE inside the model with the original NovelAI VAE (a sketch of how a ckpt's built-in VAE and CLIP can be swapped follows this list)

2. Swapped the CLIP model inside the ckpt, chosen by exhaustive search

3. Merged with the V1 variant that had no LoRA merged in
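
As a rough, hedged illustration of what "replacing the VAE or CLIP inside a ckpt" means: in the SD 1.x checkpoint layout the VAE sits under the first_stage_model. key prefix and the CLIP text encoder under cond_stage_model., so swapping either one amounts to copying those tensors in from a donor file. The file names below are placeholders, not the exact files used for V2.

```python
# Sketch: bake a donor VAE or CLIP text encoder into an SD 1.x checkpoint.
# In the LDM checkpoint layout the VAE lives under "first_stage_model." and the
# text encoder under "cond_stage_model.". File names are placeholders.
from safetensors.torch import load_file, save_file

def bake_component(ckpt_path: str, donor_path: str, prefix: str, out_path: str) -> None:
    ckpt = load_file(ckpt_path)
    donor = load_file(donor_path)
    replaced = 0
    for key, value in donor.items():
        # The donor may already use the prefixed layout, or be a bare VAE/CLIP state dict.
        target = key if key.startswith(prefix) else prefix + key
        if target in ckpt and ckpt[target].shape == value.shape:
            ckpt[target] = value.to(ckpt[target].dtype)
            replaced += 1
    print(f"replaced {replaced} tensors under '{prefix}'")
    save_file(ckpt, out_path)

# Swap the VAE:
# bake_component("model.safetensors", "some_vae.safetensors", "first_stage_model.", "model_vae_swap.safetensors")
# Swap the CLIP text encoder:
# bake_component("model.safetensors", "donor_model.safetensors", "cond_stage_model.", "model_clip_swap.safetensors")
```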

Effects of the update:

1. Improved tag accuracy

2. Improved composition quality

3. Reduced the fixed look caused by the merged LoRAs

4. Reduced the excessive weight that some tags carried

V1:

1. A general-purpose merge that can produce many art styles depending on the tags, with a high quality floor and good tag accuracy; the user gallery also shows off one more of its big strengths ^v^

2. Very strong at rendering the human body; I have yet to see broken limbs, and hands and feet rarely break

3. The merge deliberately avoided touching faces, so faces are not locked in and the model pairs well with character LoRAs

4. No CLIP offset problem

5. Because the UNet is somewhat overfitted, a few tags do not respond well

2. Generation Tips

(See the cover images for the various parameter sets. Different VAEs give different results, and I also like to enable the Face Editor extension.)

1. Because a fair amount of photorealistic model was merged in, you can try the same parameters you would use with a realistic base model, but I still strongly advise against face restoration

2. Adding a few quality tags helps; they do work. If you want results to look more realistic and three-dimensional, use tags related to realism and lighting

3. With the same prompt, try CLIP skip 1 and 2 and compare the actual results (a minimal inference sketch follows this list)

4. Square, portrait, and landscape canvases all perform well, so feel free to adjust the canvas size

5. Try hires fix; face restoration is not recommended

6. Please leave likes, post your results, comment, and rate 5 stars~
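
If you generate with the diffusers library instead of a WebUI, a minimal sketch that follows the tips above (external VAE, comparing CLIP skip 1 and 2, no face restoration) could look like this; the file names and prompt are placeholders, and the clip_skip argument needs a reasonably recent diffusers release.

```python
# Minimal diffusers sketch following the tips above (placeholder file names and prompt).
import torch
from diffusers import StableDiffusionPipeline, AutoencoderKL

# Load the merged checkpoint and attach an external anime VAE (it replaces the built-in one).
pipe = StableDiffusionPipeline.from_single_file(
    "AnythingQingMix_V3.safetensors", torch_dtype=torch.float16
)
pipe.vae = AutoencoderKL.from_single_file("animevae.safetensors", torch_dtype=torch.float16)
pipe.to("cuda")

prompt = "1boy, muscular, detailed lighting, realistic shading, best quality"
negative = "lowres, bad anatomy, bad hands, worst quality"

# Tip 3: compare CLIP skip 1 and 2 on the same seed.
for clip_skip in (1, 2):
    image = pipe(
        prompt,
        negative_prompt=negative,
        num_inference_steps=28,
        guidance_scale=7.0,
        clip_skip=clip_skip,
        generator=torch.Generator("cuda").manual_seed(42),
    ).images[0]
    image.save(f"sample_clipskip{clip_skip}.png")
```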

3. Models Used in the Merge

(I cannot be sure these are exactly the ones; treat this as a reference only.)

VAE: orangemixa3.vae.pt

LORA: (total weight 0.3)

CKPT:

4. Suggestions and Answers for Visitors Using This Model

(As a layperson with no formal training in art or AI, I am still constantly studying, exploring, and comparing notes with more experienced people. I am very particular about the things I care about, so I want to pass on correct AI knowledge in my own little corner. All of this is covered in the document mentioned below; the points here are just standalone answers to questions I have run into.)

1. LORA

  • For anime character LoRAs, the best weight is simply 1. Although adjusting the weight can improve the fit, it also has unwanted side effects. For example, most anime character LoRAs on Civitai are overfitted, and the usual advice is to lower the weight to 0.6/0.8 to compensate, but doing so makes the character lose some of their original appearance (a small usage sketch follows these bullets)

  • Overfitting in plain terms: over-training makes a LoRA rigid, so the character stops following tags or the model even reproduces images from the training material
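
As a concrete, hedged illustration of the weight trade-off above, this diffusers sketch renders the same seed with a character LoRA at weight 1.0 and at 0.8; the file names are placeholders and the adapter API assumes a recent diffusers build with the PEFT backend.

```python
# Sketch: same prompt and seed, character LoRA applied at weight 1.0 vs 0.8.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "AnythingQingMix_V3.safetensors", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("character_lora.safetensors", adapter_name="character")

for weight in (1.0, 0.8):
    pipe.set_adapters(["character"], adapter_weights=[weight])
    image = pipe(
        "1boy, character name, upper body, best quality",
        negative_prompt="lowres, bad anatomy, worst quality",
        generator=torch.Generator("cuda").manual_seed(7),
    ).images[0]
    image.save(f"lora_weight_{weight}.png")
```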

2. CKP

  • Both CLIP offset and UNet overfitting make a model stop following tags

  • CLIP offset causes tag-recognition problems, and quite a few models on Civitai have it without their authors realizing; if you are interested, the sketch after these bullets and the "Other" section below give a way to check and do a simple repair

  • UNet overfitting also makes a model rigid and deaf to tags, sometimes even producing good-looking images with no tags at all; that is overfitting spitting the training images back out

  • Merged ckpts can raise a model's quality floor nicely, but a merged model's tag weights are messy, so each merge has its own "character"; likewise, LoRAs trained on such models rarely work well on other models

  • If you want a base model to accept LoRAs and other models more readily, avoid merging LoRAs into the ckpt at all, or do what I did and, after weighing the trade-offs, merge them in at a low weight just to patch up the weak spots
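
As an illustration of what a "CLIP offset" check can look like (my own assumption, not necessarily the exact method referenced below): in an SD 1.x checkpoint the text encoder's position_ids tensor should simply be the integers 0..76 in order, and merges sometimes corrupt it. The sketch below inspects that tensor and can write a repaired copy.

```python
# Sketch: check (and optionally repair) the CLIP position_ids tensor in an SD 1.x checkpoint.
# A healthy tensor is [[0, 1, ..., 76]]; a shifted or non-integer tensor is the "CLIP offset" issue.
import torch
from safetensors.torch import load_file, save_file

KEY = "cond_stage_model.transformer.text_model.embeddings.position_ids"

def check_and_fix(path: str, out_path: str | None = None) -> None:
    sd = load_file(path)
    if KEY not in sd:
        print("no position_ids tensor found (layout may differ)")
        return
    expected = torch.arange(77).unsqueeze(0)
    if torch.equal(sd[KEY].to(torch.int64), expected):
        print("CLIP position_ids look fine")
        return
    print("CLIP position_ids are corrupted:", sd[KEY].flatten()[:10].tolist(), "...")
    if out_path is not None:
        sd[KEY] = expected
        save_file(sd, out_path)
        print("wrote repaired checkpoint to", out_path)

# check_and_fix("model.safetensors")                                # check only
# check_and_fix("model.safetensors", "model_fixed.safetensors")     # check and repair
```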

3. VAE

  • A ckpt already contains a VAE of its own; an external VAE is not an extra addition, it replaces the built-in one

  • The most obvious effect of a VAE is the change in saturation, but that is not all it does; when generating images it also affects composition, details, and so on

4. Other

  • A model that takes more disk space is not necessarily better; many models carry a pile of useless data, and downloading them just wastes bandwidth and storage

  • Good sample images do not prove a model is good: besides the author's own taste, you cannot know how many other models and extensions assisted, or how many txt2img and img2img passes were run

  • Downloads and likes do not prove it either: likes track downloads, and downloads depend on the author's fame, the cover image's appeal, the popularity of the model's characters, and the audience for its art style and type

  • The newest version of a model is not automatically the best; often the versions are just adjustments in different directions from some base version, and some authors even push junk updates purely to ride the update feed for extra downloads while the model itself only treads water

  • For more professional and detailed knowledge, see the introduction of 万象熔炉 | Anything V5/Ink; the extended links on checking and repairing CLIP offset are in the earlier part of this document

    Model link: 万象熔炉 | Anything V5/Ink - V3.2++[ink] | Stable Diffusion Checkpoint | Civitai
