| Field | Value |
| --- | --- |
| Type | |
| Stats | 718 |
| Reviews | 117 |
| Published | Jan 6, 2024 |
| Base Model | |
| Training | Epochs: 210 |
| Usage Tips | Clip Skip: 2 |
| Hash | AutoV2 083A83ADAE |
Please use sd-webui-additional-networks to load this model!
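If you are working in Python rather than the webui, a minimal diffusers sketch follows, assuming this is a standard safetensors LoRA; the base-model repo id and the LoRA file name below are placeholders, not part of this release:

```python
# Minimal diffusers sketch, assuming this is a standard safetensors LoRA.
# The base-model repo id and the LoRA file name are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stablediffusionapi/anything-v5",  # placeholder Anything V5 checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Attach this model as a LoRA on top of the base checkpoint.
pipe.load_lora_weights(".", weight_name="stabilizer.safetensors")

image = pipe(
    "1girl, masterpiece, best quality",
    clip_skip=2,  # matches the "Clip Skip: 2" usage tip above
).images[0]
image.save("out.png")
```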
Introduction
Q0: What is this model?
A0: 🎈 This is a "stabilizer" trained for Anything V5.
Q1: What is this model for?
A1.1: 🎉 This model can, to some extent, fix collapsed structures (e.g., broken anatomy) in generated illustrations.
A1.2: 🎨 This model can fix the pupils of generated characters.
A1.3: 😎 This model adjusts the lighting of generated illustrations so that they look less AI-generated (i.e., are preferred by humans).
A1.4: 🍡 This model also seems to align generated images more closely with the input prompts.
Q2: How is this model trained?
A2.1: 🥰 This model is trained with Reinforcement Learning from Human Feedback (RLHF), a technique that has been widely used to fine-tune LLMs.
A2.2: 😏 Specifically, it is trained with D3PO; see Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model (arXiv:2311.13231). A sketch of the objective is given below.
A2.3: 🙄 During training, the model is guided by a self-trained reward model that scores images on four metrics: likes, collections, AI-generated probability, and views. The reward model is available at chikoto/ConvNeXtV2-IllustrationScorer; a hedged querying sketch also follows below.
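For intuition, here is a minimal PyTorch sketch of the DPO-style per-step objective that D3PO applies to diffusion denoising. This paraphrases the idea of arXiv:2311.13231 and is not the authors' code; all names are illustrative.

```python
# Hedged sketch of a D3PO-style per-step preference loss (arXiv 2311.13231).
# Not the authors' implementation; variable names are illustrative.
import torch.nn.functional as F

def d3po_step_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO-style loss on one denoising step.

    logp_w / logp_l: log-prob of the preferred / dispreferred latent
    transition under the trainable UNet's Gaussian denoising step.
    ref_logp_w / ref_logp_l: the same transitions under the frozen
    reference UNet.
    """
    # Log-ratio of policy vs. reference for each sample in the pair.
    adv_w = logp_w - ref_logp_w
    adv_l = logp_l - ref_logp_l
    # Maximize the margin between preferred and dispreferred transitions.
    return -F.logsigmoid(beta * (adv_w - adv_l)).mean()
```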
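And a hedged sketch of how a four-metric ConvNeXt V2 scorer of this kind might be queried with timm and huggingface_hub. The model variant, head layout, checkpoint file name, and preprocessing are assumptions for illustration, not confirmed details of chikoto/ConvNeXtV2-IllustrationScorer:

```python
# Hedged sketch of querying a ConvNeXt V2 scorer with four regression heads
# (likes, collections, AI-generated probability, views). The variant, file
# name, and head layout are assumptions, not confirmed details of the repo.
import timm
import torch
from PIL import Image
from huggingface_hub import hf_hub_download

model = timm.create_model("convnextv2_base", num_classes=4)  # assumed variant/head
state = torch.load(
    hf_hub_download("chikoto/ConvNeXtV2-IllustrationScorer", "model.pth"),  # assumed file
    map_location="cpu",
)
model.load_state_dict(state)
model.eval()

cfg = timm.data.resolve_data_config({}, model=model)
transform = timm.data.create_transform(**cfg)

img = transform(Image.open("sample.png").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    likes, collections, ai_prob, views = model(img).squeeze(0).tolist()
```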
Some Nonsense Crap
🧐 Any ideas and suggestions are welcome! 🧐
🤗 You are all welcome to challenge NAI3 with a mortal body :) 🤗