# inuko v3
# Prompts
inuyama_aoi, medium_hair, breasts, bangs, brown_hair, hair_ornament, side_ponytail, swept_bangs, thick_eyebrows, hair_bobbles, skin_fang, green_eyes, fang, sailor_collar, sweater, school uniform, yellow sweater, serafuku, pleated_skirt, checkered clothes, white_neckerchief
# sd-scripts command
Command generated with RedRayz/Kohya_lora_param_gui, a GUI for kohya-ss sd-scripts (github.com).
LoRA-C3Lier (conv2d)
Optimizer: PagedAdamW8bit
Dim 64, Alpha 32
CUT: IN07-08, OUT10-11 (these blocks are disabled via the down/up lr-weight arguments below)
accelerate launch --num_cpu_threads_per_process 4 train_network.py --pretrained_model_name_or_path "C:\ProgramTools\__GenerativeAI\StableDiffusionModels\nai.ckpt" --train_data_dir "H:\Resources\Resources_AI\forLeaningImages\Yurucamp\inuyama_aoi\v2" --output_dir "H:\Resources\Resources_AI\__Models\__Lola\__MyLoRAs" --network_module "networks.lora" --network_args "conv_dim=64" "conv_alpha=32" "down_lr_weight=1,1,1,1,1,1,1,0,0,1,1,1" "mid_lr_weight=1" "up_lr_weight=1,1,1,1,1,1,1,1,1,1,0,0" --xformers --persistent_data_loader_workers --cache_latents --max_token_length 225 --max_data_loader_n_workers 1 --enable_bucket --save_model_as "safetensors" --lr_scheduler_num_cycles 4 --mixed_precision "fp16" --learning_rate=1e-04 --text_encoder_lr=5e-05 --resolution 768 --train_batch_size 2 --max_train_epochs 12 --network_dim 64 --network_alpha 32 --shuffle_caption --keep_tokens 1 --save_every_n_epochs 1 --optimizer_type "PagedAdamW8bit" --lr_warmup_steps 200 --output_name "lora_yurucamp_inuyama_aoi_v3_pagedAdamW8bit_d64a32_NP5" --clip_skip 2 --save_precision "fp16" --lr_scheduler "cosine" --min_bucket_reso 320 --max_bucket_reso 1536 --caption_extension ".txt" --seed 42 --logging_dir "E:\Creative\Program\Github\__AI\Art\TraningLog\sd_scripts\log" --training_comment "lora_yurucamp_inuyama_aoi_v3_pagedAdamW8bit_d64a32_NP5" --sample_every_n_steps=537 --sample_prompts="H:\Resources\Resources_AI\forLeaningImages\__SamplePrompts\sample_prompts_basic1girl_inuko.txt" --sample_sampler=euler --log_prefix=inuko_v3
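For reference, the down_lr_weight / mid_lr_weight / up_lr_weight strings in the command encode the CUT above: each value corresponds to one U-Net block (IN00-IN11 / MID / OUT00-OUT11), and a 0 disables training for that block. Below is a minimal Python sketch of how those strings can be built from a cut list; lr_weight_string is only an illustrative helper, not part of sd-scripts.

```python
# Sketch: build kohya-ss layer-wise LR weight strings from a list of blocks to cut.
# Assumes down_lr_weight/up_lr_weight each take 12 comma-separated values indexed
# IN00-IN11 / OUT00-OUT11, matching the command above.

def lr_weight_string(prefix, cut_blocks, n=12):
    """Return e.g. '1,1,1,1,1,1,1,0,0,1,1,1' with a 0 at every cut block index."""
    return ",".join("0" if f"{prefix}{i:02d}" in cut_blocks else "1" for i in range(n))

cut = {"IN07", "IN08", "OUT10", "OUT11"}   # CUT: IN07-08, OUT10-11

down = lr_weight_string("IN", cut)   # -> 1,1,1,1,1,1,1,0,0,1,1,1
up = lr_weight_string("OUT", cut)    # -> 1,1,1,1,1,1,1,1,1,1,0,0
mid = "1"                            # MID block is kept

print(f'--network_args "down_lr_weight={down}" "mid_lr_weight={mid}" "up_lr_weight={up}"')
```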
# Extract only the necessary layers
Tool: SuperMerger
Enable only IN02, IN04, and OUT03-06.
Add the following preset to lbwpreset.txt (the 17 weights should correspond to BASE, IN01-02, IN04-05, IN07-08, M00, OUT03-11):
CHARA:0,0,1,1,0,0,0,0,1,1,1,1,0,0,0,0,0
Enter "LoRA_name:1:CHARA" in the LoRA text box.
Click Merge LoRA.
Done. An equivalent standalone script is sketched below.
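If you want to do the same extraction outside the WebUI, the operation boils down to keeping only the LoRA tensors that belong to the enabled blocks. Below is a rough, untested Python sketch using safetensors; the input/output paths are hypothetical and the key substrings are assumptions based on diffusers-style kohya LoRA naming, so check them against the actual keys in your file. Note that BASE=0 in the preset also drops the text-encoder LoRA, which this sketch mirrors by dropping all lora_te keys.

```python
# Rough sketch: keep only the LoRA tensors for selected U-Net blocks and drop the rest,
# roughly what the SuperMerger LBW merge above does with the CHARA preset.
# Untested; key substrings are assumptions and MUST be checked against your .safetensors.
from safetensors.torch import load_file, save_file

SRC = "lora_yurucamp_inuyama_aoi_v3.safetensors"      # hypothetical input path
DST = "lora_yurucamp_inuyama_aoi_v3_cut.safetensors"  # hypothetical output path

# Assumed mapping of the enabled LBW blocks (IN02, IN04, OUT03-06) to key substrings.
# C3Lier conv modules (resnets/downsamplers) in those blocks use different key names
# and would need additional substrings here.
KEEP_SUBSTRINGS = [
    "down_blocks_0_attentions_1",  # IN02 (verify)
    "down_blocks_1_attentions_0",  # IN04 (verify)
    "up_blocks_1_attentions_0",    # OUT03 (verify)
    "up_blocks_1_attentions_1",    # OUT04 (verify)
    "up_blocks_1_attentions_2",    # OUT05 (verify)
    "up_blocks_2_attentions_0",    # OUT06 (verify)
]

state = load_file(SRC)
kept = {k: v for k, v in state.items() if any(s in k for s in KEEP_SUBSTRINGS)}
print(f"kept {len(kept)} of {len(state)} tensors")
save_file(kept, DST)
```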
# Additional information
I think you could get the same result by training only IN02, IN04, and OUT03-06 in the first place, instead of cutting blocks after training.
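If you try that, the same layer-wise lr-weight arguments from the training command should be able to express it (untested; following the same index order as above): down_lr_weight=0,0,1,0,1,0,0,0,0,0,0,0 mid_lr_weight=0 up_lr_weight=0,0,0,1,1,1,1,0,0,0,0,0.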