| Field | Value |
| --- | --- |
| Type | |
| Stats | 270 710 |
| Reviews | (48) |
| Published | Jan 4, 2024 |
| Base Model | |
| Training | Steps: 14,280; Epochs: 15 |
| Usage Tips | Clip Skip: 2 |
| Trigger Words | kobayashi_kobayashisanchinomaidragon, glasses, red_hair, blush, ponytail, bangs, closed_mouth, brown_eyes |
| Hash | AutoV2 44B465A6D5 |
- Thanks to Civitai's bewildering and frustrating TOS, some images may be hidden. THE FULL PREVIEW IMAGES CAN BE FOUND ON HUGGINGFACE.
- THIS MODEL HAS TWO FILES. YOU NEED TO USE THEM TOGETHER!!!
- The associated trigger words are provided for reference only; they may need adjustment in some cases.
- The recommended weight for the embedding model is 1, which provides higher fidelity; if greater generalization is required, it can be lowered to 0.5.
- The recommended weight for the LoRA model is 0.85; if there's evidence of contamination, consider lowering it to 0.5.
- The preview images were generated using a few fixed test prompts and several prompts derived from clustering dataset features. Random seeds were used, ruling out cherry-picking. What you see is what you get.
- No specialized training was done for outfits. Check our provided preview post for the prompts corresponding to each outfit.
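As an illustration of the recommended weights above, a prompt for the AUTOMATIC1111 WebUI (an assumption about your frontend; other UIs use different syntax) might look like this, with the embedding at weight 1 and the LoRA at 0.85:

```text
best quality, masterpiece, highres, solo, (kobayashi_kobayashisanchinomaidragon:1.0),
glasses, red_hair, blush, ponytail, bangs, closed_mouth, brown_eyes
<lora:kobayashi_kobayashisanchinomaidragon:0.85>
```

The `<lora:...>` name must match the downloaded `.safetensors` file name.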
How to Use This Model
THIS MODEL HAS TWO FILES. YOU NEED TO USE THEM TOGETHER!!!
Download both kobayashi_kobayashisanchinomaidragon.pt and kobayashi_kobayashisanchinomaidragon.safetensors, then use kobayashi_kobayashisanchinomaidragon.pt as a textual inversion embedding and kobayashi_kobayashisanchinomaidragon.safetensors as a LoRA at the same time.
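For users of the diffusers library, the two files might be combined along the following lines. This is a minimal sketch, not the author's method: the base model ID and local file paths are assumptions, so adapt them to your setup.

```python
# Minimal sketch, assuming a Stable Diffusion 1.x base model and local
# copies of both files in the working directory.
def load_character_pipeline(lora_scale: float = 0.85):
    """Load the .pt textual inversion embedding and the .safetensors LoRA together."""
    # Imports kept inside the function so the sketch has no hard
    # dependency at definition time.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # assumed base model
        torch_dtype=torch.float16,
    )
    # The .pt file is the textual inversion embedding; its token is the trigger word.
    pipe.load_textual_inversion(
        "kobayashi_kobayashisanchinomaidragon.pt",
        token="kobayashi_kobayashisanchinomaidragon",
    )
    # The .safetensors file is the LoRA, applied at the same time.
    pipe.load_lora_weights(
        ".", weight_name="kobayashi_kobayashisanchinomaidragon.safetensors"
    )
    pipe.fuse_lora(lora_scale=lora_scale)  # recommended LoRA weight: 0.85
    return pipe
```

In the AUTOMATIC1111 WebUI, the equivalent is simply placing the `.pt` file in the `embeddings` folder and the `.safetensors` file in the `models/Lora` folder.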
The trigger word is kobayashi_kobayashisanchinomaidragon, and the recommended tags are: best quality, masterpiece, highres, solo, {kobayashi_kobayashisanchinomaidragon:1.15}, glasses, red_hair, blush, ponytail, bangs, closed_mouth, brown_eyes.
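The `{tag:weight}` emphasis syntax above is used by some frontends, while the AUTOMATIC1111 WebUI expects `(tag:weight)` instead. As a small illustrative sketch (a simple regex conversion, assuming that frontend), the recommended tags could be converted like this:

```python
import re

def braces_to_parens(prompt: str) -> str:
    """Convert {tag:1.15}-style emphasis to (tag:1.15)-style emphasis."""
    return re.sub(r"\{([^{}:]+):([\d.]+)\}", r"(\1:\2)", prompt)

recommended = ("best quality, masterpiece, highres, solo, "
               "{kobayashi_kobayashisanchinomaidragon:1.15}, glasses, red_hair, "
               "blush, ponytail, bangs, closed_mouth, brown_eyes")
converted = braces_to_parens(recommended)
```

Tags without an explicit weight are left unchanged by the conversion.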
How This Model Is Trained
This model was trained with HCP-Diffusion. The auto-training framework is maintained by the DeepGHS Team.
Why Some Preview Images Do Not Look Like Kobayashi Kobayashisanchinomaidragon
All the prompt texts used on the preview images (which can be viewed by clicking on the images) are automatically generated using clustering algorithms based on feature information extracted from the training dataset. The seed used during image generation is also randomly generated, and the images have not undergone any selection or modification. As a result, there is a possibility of the mentioned issues occurring.
In practice, based on our internal testing, most models that experience such issues perform better in actual usage than the preview images suggest. The only thing you may need to do is adjust the tags you are using.
I Feel This Model May Be Overfitting or Underfitting, What Should I Do
Our model has been published in the HuggingFace repository CyberHarem/kobayashikobayashisanchinomaidragon, where checkpoints for all training steps are saved. We have also published the training dataset in the HuggingFace dataset CyberHarem/kobayashikobayashisanchinomaidragon, which may be helpful to you.
Why Not Just Use the Better-Selected Images
Our model's entire pipeline, from data crawling and training to generating preview images and publishing, is 100% automated with no human intervention. It is an interesting experiment conducted by our team, and for this purpose we have developed a complete set of software infrastructure, including data filtering, automatic training, and automated publishing. Therefore, if possible, we would appreciate feedback or suggestions, as they are highly valuable to us.
Why Can't the Desired Character Outfits Be Accurately Generated
Our current training data is sourced from various image websites, and for a fully automated pipeline, it's challenging to accurately predict which official images a character possesses. Consequently, outfit generation relies on clustering based on labels from the training dataset in an attempt to achieve the best possible recreation. We will continue to address this issue and attempt optimization, but it remains a challenge that cannot be completely resolved. The accuracy of outfit recreation is also unlikely to match the level achieved by manually trained models.
In fact, this model's greatest strengths lie in recreating the inherent characteristics of the characters themselves and in its relatively strong generalization capabilities, owing to its larger dataset. As such, this model is well-suited for tasks such as changing outfits, posing characters, and, of course, generating NSFW images of characters! 😉
For the following groups, it is not recommended to use this model and we express regret:
- Individuals who cannot tolerate any deviations from the original character design, even in the slightest detail.
- Individuals whose application scenarios demand high accuracy in recreating character outfits.
- Individuals who cannot accept the potential randomness in AI-generated images based on the Stable Diffusion algorithm.
- Individuals who are not comfortable with the fully automated process of training character models using LoRA, or those who believe that training character models must be done purely through manual operations to avoid disrespecting the characters.
- Individuals who find the generated image content offensive to their values.