Incomplete anime pony

Updated: Mar 13, 2024
Tags: base model, anime, pony
Type: Checkpoint Trained
Published: Mar 11, 2024
Base Model: Pony
Format: SafeTensor
Hash (AutoV2): C5DAA39E36
Creator: Chenkin

Alpha version update:

- Further fine-tuned the furry style of the Pony base model toward a Japanese anime illustration style.

- The current version of the model can be used directly for image generation.

The base model remains chenkin_20w.safetensors, trained on a 200,000-image dataset.

Several manually selected, ultra-high-quality illustrations were used for targeted fine-tuning (provided by Euge).

Each illustration is of one-in-a-thousand quality and carries the "amazing" quality label.

The fine-tuned model was merged with chenkin_20w.safetensors at a weight of 0.7.
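The 0.7-weight merge can be sketched as a simple linear interpolation of the two checkpoints' parameters. This is an illustrative sketch, not the author's actual script: which checkpoint receives the 0.7 weight is an assumption, and in practice the state dicts would be loaded and saved with safetensors.torch.load_file / save_file rather than built by hand.

```python
def merge_state_dicts(base, finetuned, alpha=0.7):
    """Interpolate two checkpoints: alpha * base + (1 - alpha) * finetuned.

    Works on any mapping of parameter name -> tensor-like value.
    Assumes both checkpoints share exactly the same parameter names.
    """
    assert base.keys() == finetuned.keys(), "checkpoints must share the same keys"
    return {k: alpha * base[k] + (1 - alpha) * finetuned[k] for k in base}

# Tiny demo with plain floats standing in for tensors:
base = {"w": 1.0, "b": 0.0}
tuned = {"w": 0.0, "b": 1.0}
merged = merge_state_dicts(base, tuned, alpha=0.7)
print(merged["w"])  # 0.7
```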

The current quality prompt tags are therefore:

amazing,best,hight,score_9,

or simply:

amazing,best,hight,

Special thanks for compute sponsorship / GPU Sponsor: Neta (nieta.art)


This model may not be suitable for generating images, and it is not recommended for beginners to download.

It is intended as a reference (or a cautionary lesson) for colleagues training on Pony Diffusion V6 XL.

Trained on top of Pony Diffusion V6 XL with 200k anime images, on a single A40 (48 GB) for over 7 days.

The model was trained to produce Japanese-anime-style images, but it did not achieve the expected results.

1. chenkin_pony.safetensors

Pretrained on 13k random anime images.

2. chenkin_20w.safetensors (Current model)

Trained on top of chenkin_pony.safetensors.

Used 190k curated images from yande, with no quality labels (provided by Miss Erity).

Used 10k manually selected high-quality illustrations, tagged with the "hight" quality label (provided by Euge).

Used 100 manually selected ultra-high-quality illustrations, tagged with the "amazing" quality label (provided by Euge).

The resulting quality tags are therefore:

amazing,hight,score_9
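Given the keep_tokens_separator = "|||" and shuffle_caption = true settings in the training config below, the quality labels were presumably pinned at the front of each caption while the remaining tags get shuffled during training. A hypothetical caption for an "amazing"-tier image might look like this (the tags after the separator are placeholders, not from the actual dataset):

```
amazing, hight ||| 1girl, solo, long hair, cherry blossoms, looking at viewer
```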

Trained using the bmaltais/kohya_ss project (github.com).

This project consumed a significant amount of GPU compute. Special thanks to the GPU sponsor, Neta, for providing the resources.

The training parameters are as follows (training config):

[sdxl_arguments]
cache_text_encoder_outputs = false
no_half_vae = false
min_timestep = 0
max_timestep = 1000

[model_arguments]
pretrained_model_name_or_path = "/root/autodl-tmp/stable-diffusion-webui/models/Stable-diffusion/chenkin_pony/chenkin_pony.safetensors"

[dataset_arguments]
shuffle_caption = true
debug_dataset = false
train_data_dir = "/root/autodl-tmp/20w"
dataset_repeats = 1
keep_tokens_separator = "|||"
resolution = "1024, 1024"
caption_dropout_rate = 0
caption_tag_dropout_rate = 0
caption_dropout_every_n_epochs = 0
token_warmup_min = 1
token_warmup_step = 0
enable_bucket = true
min_bucket_reso = 640
max_bucket_reso = 2048
bucket_reso_steps = 64


[training_arguments]
output_dir = "/root/autodl-tmp/stable-diffusion-webui/models/Stable-diffusion/chenkin_20w"
output_name = "chenkin_20w"
save_precision = "fp16"
train_batch_size = 6
vae_batch_size = 4
max_train_epochs = 1
save_every_n_steps = 2000
max_token_length = 225
mem_eff_attn = false
xformers = true
sdpa = false

max_data_loader_n_workers = 8
persistent_data_loader_workers = true
gradient_checkpointing = true
gradient_accumulation_steps = 1
mixed_precision = "fp16"

[sample_prompt_arguments]
sample_every_n_steps = 200
sample_sampler = "euler_a"
sample_prompts = "/root/example.txt"

[saving_arguments]
save_model_as = "safetensors"

[optimizer_arguments]
optimizer_type = "AdaFactor"
learning_rate = 7.5e-7
train_text_encoder = false
learning_rate_te1 = 0
learning_rate_te2 = 0
optimizer_args = [ "scale_parameter=False", "relative_step=False", "warmup_init=False",]
lr_scheduler = "constant_with_warmup"
lr_warmup_steps = 100
max_grad_norm = 0
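As a rough sanity check on the settings above: with about 200k images, dataset_repeats = 1, train_batch_size = 6, gradient_accumulation_steps = 1, and max_train_epochs = 1, the run amounts to roughly 33k optimizer steps, so save_every_n_steps = 2000 yields on the order of 16 intermediate checkpoints. Bucketing changes the exact count slightly; this is only an estimate.

```python
import math

images = 200_000   # ~190k yande + 10k "hight" (plus 100 "amazing")
batch_size = 6     # train_batch_size
grad_accum = 1     # gradient_accumulation_steps
epochs = 1         # max_train_epochs
save_every = 2000  # save_every_n_steps

steps = math.ceil(images / (batch_size * grad_accum)) * epochs
checkpoints = steps // save_every
print(steps, checkpoints)  # 33334 16
```

The config itself would typically be saved as a .toml file and passed to the kohya trainer.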