
【Goodbye, All Quality Words】bdsqlsz LoRA Training Advanced Tutorial (3): Best Finetune with DB

https://civitai.com/models/121215/stable-diffusion-xl-anime

I finetuned this checkpoint in just 1 hour, without any images.

It turns out better than the old way of spending far too much GPU time.

That is what I want to show in this guide.

LECO: use the checkpoint itself to generate the dataset for finetuning~

Erasing Concepts from Diffusion Models

https://github.com/rohitgandikota/erasing/

This is the original repo for Erasing Concepts.

How does it work?

Imagine you generate 2 images.

Image 1 is your checkpoint's target image (its prompt is like your trigger words).

Image 2 is your dataset image.

Just let prompt 2's effect replace prompt 1's effect,

like "1girl, masterpiece, best quality" replacing "1girl",

or "1boy, masterpiece, best quality" replacing "1boy".

OK, so we just bake "masterpiece, best quality" into the checkpoint as its default style, much like the SD-webui style-selector does.

We don't need the quality words anymore; just turn this into a LoRA and then merge it into the checkpoint.

Wait, could that pollute the checkpoint?

No!!!

The whole dataset is generated from the checkpoint itself, and between the target and the dataset only the additional part is offset.

What's different from TI (Textual Inversion)?

LECO actually uses the UNet for finetuning.

Why is it so much faster?

Because it calculates the difference between embeddings and processes the tensors directly.

There is no need to output any real images, and even the VAE is not used.
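To make the "no real images, no VAE" point concrete, here is a rough sketch of what one training step compares (plain PyTorch with stand-in tensors; the real loop lives in plat's repo):

import torch
import torch.nn.functional as F

# Stand-ins for the UNet's noise predictions on the SAME noisy latent,
# conditioned on different prompt embeddings. The shape is an SD latent
# (batch, 4, H/8, W/8) -- nothing is ever decoded into pixels.
latent_shape = (1, 4, 64, 64)
pred_target = torch.randn(latent_shape)   # trainable (LoRA-patched) UNet, target prompt
pred_dataset = torch.randn(latent_shape)  # frozen UNet, composed "dataset" prediction

# The loss is a simple MSE between two noise predictions in latent space,
# so no image is generated and the VAE never runs.
loss = F.mse_loss(pred_target, pred_dataset)
print(loss.item())

In the real training loop, pred_target comes from the UNet with the LoRA applied and pred_dataset from the frozen UNet, but the comparison is exactly this kind of tensor-to-tensor difference.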

Install

1. Thanks to plat for implementing it.

You can use it directly on Colab via the link below ↓

p1atdev/LECO: Low-rank adaptation for Erasing COncepts from diffusion models. (github.com)

2. If you use Windows, you can download my scripts (modified from plat's repo) to install it automatically.

LECO LoRA train(8GB) SDXL LORA train(24GB) - v2.0 | Stable Diffusion Other | Civitai

You just need Python > 3.10.6 and < 3.11.
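If you are not sure which Python your shell picks up, a quick check (standard library only; the version bounds are the ones stated above) is:

import sys

# LECO's scripts expect Python newer than 3.10.6 but older than 3.11.
if not ((3, 10, 6) < sys.version_info[:3] < (3, 11, 0)):
    raise SystemExit("Unsupported Python %s; use a 3.10.x newer than 3.10.6."
                     % sys.version.split()[0])
print("Python version OK:", sys.version.split()[0])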

Train

LECO uses 2 config files: one for the training settings, the other for the prompt settings.

1. Open examples/neg4all_config.yaml

prompts_file: "./examples/neg4all_prompts.yaml"

pretrained_model:
  name_or_path: "D:\\sd-webui-aki-v4.1\\models\\Stable-diffusion\\sd_xl_base_1.0_fixvae_fp16_V2.safetensors" # you can also use .ckpt or .safetensors models
  v2: false # true if model is v2.x
  v_pred: false # true if model uses v-prediction

network:
  type: "c3lier" # or "lierla"
  rank: 32
  alpha: 1.0
  training_method: "full" # full, selfattn, xattn, noxattn, or innoxattn

train:
  precision: "bfloat16"
  noise_scheduler: "ddim" # or "ddpm", "lms", "euler_a"
  iterations: 600
  lr: 1e-4
  optimizer: "adam8bit"
  lr_scheduler: "cosine"
  max_denoising_steps: 50

save:
  name: "neg4all_bdsqlsz_xl_8.0"
  path: "./output"
  per_steps: 200
  precision: "bfloat16"

logging:
  use_wandb: true
  verbose: true

other:
  use_xformers: true

What you need to change:

prompts_file: choose your prompt settings file.

name_or_path: choose your checkpoint; I recommend SDXL-base with the fixed VAE (V2).

network type: "c3lier" means a LyCORIS LoCon, "lierla" means a common LoRA.

save name: change it to whatever you want.

Everything else can stay as it is.
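If you want to sanity-check the config before launching a run, here is a minimal sketch (assuming PyYAML is installed and the keys match the example above; summarize_config is just an illustrative helper, not part of LECO):

import yaml  # PyYAML

def summarize_config(path: str) -> None:
    # Load the training config and print the few fields this guide says to touch.
    with open(path, encoding="utf-8") as f:
        cfg = yaml.safe_load(f)
    print("prompts file :", cfg["prompts_file"])
    print("checkpoint   :", cfg["pretrained_model"]["name_or_path"])
    print("network type :", cfg["network"]["type"])  # "c3lier" or "lierla"
    print("save name    :", cfg["save"]["name"])

summarize_config("./examples/neg4all_config.yaml")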

2. Open examples/neg4all_prompts.yaml

A prompt setting is formatted like this ↓

- target: "van gogh" # what word for erasing the positive concept from
  positive: "van gogh" # concept to erase
  unconditional: "" # word to take the difference from the positive concept
  neutral: "" # starting point for conditioning the target
  action: "erase" # erase or enhance
  guidance_scale: 1.0
  resolution: 512
  dynamic_resolution: false
  batch_size: 2

action: choose erase or enhance.

resolution: the base training resolution; SD1.5 uses 512, SDXL uses 1024.

dynamic_resolution: random resolution buckets for training; the max res is the base resolution, the min res is base / 2.

batch_size: use 1 for SDXL on a 4090 (24 GB), and 4 for SD1.5 on a 4090 (24 GB).

OK, so what are target, positive, unconditional, neutral, and guidance_scale?

target prompt = your LoRA trigger words; it is the prompt of the target images.

Most of the time, I choose to use "girl" or "boy."

positive, unconditional, neutral, and guidance_scale work just like the prompt and negative prompt in SD-WEBUI.

dataset prompt:

neutral prompt + (positive prompt − unconditional prompt) × guidance_scale (CFG)

For example, if I want to generate an image like this one, I use prompt settings like these:

- target: "1girl" # what word for erasing the positive concept from
  positive: "masterpiece, best quality, ultra high res, 8k, clearly fine detailed" # concept to erase
  unconditional: "worst quality, low quality, ugly" # word to take the difference from the positive concept
  neutral: "1girl" # starting point for conditioning the target
  action: "enhance" # erase or enhance
  guidance_scale: 12

It is easy to understand:

positive = the positive prompt minus the neutral part (in this example, "1girl")

unconditional = the negative prompt

neutral = "1girl"

guidance_scale = CFG

The action (erase or enhance) decides the sign in front of guidance_scale.

If erase:

dataset = neutral prompt − guidance_scale (CFG) × (positive prompt − unconditional prompt)

If enhance:

dataset = neutral prompt + guidance_scale (CFG) × (positive prompt − unconditional prompt)
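As a sketch of how those two formulas turn into the training target (my own illustration in plain PyTorch, not the exact code from plat's repo), with each *_pred standing for the frozen UNet's noise prediction for that prompt:

import torch

def compose_dataset_pred(neutral_pred: torch.Tensor,
                         positive_pred: torch.Tensor,
                         unconditional_pred: torch.Tensor,
                         guidance_scale: float,
                         action: str) -> torch.Tensor:
    # Build the "dataset" target from the frozen model's noise predictions.
    # erase   : neutral - guidance_scale * (positive - unconditional)
    # enhance : neutral + guidance_scale * (positive - unconditional)
    sign = -1.0 if action == "erase" else 1.0
    return neutral_pred + sign * guidance_scale * (positive_pred - unconditional_pred)

# Example with the enhance settings from the YAML above (guidance_scale = 12).
shape = (1, 4, 128, 128)                 # SDXL latents at 1024px
neutral_pred = torch.randn(shape)        # "1girl"
positive_pred = torch.randn(shape)       # "masterpiece, best quality, ..."
unconditional_pred = torch.randn(shape)  # "worst quality, low quality, ugly"

target = compose_dataset_pred(neutral_pred, positive_pred, unconditional_pred,
                              guidance_scale=12.0, action="enhance")
print(target.shape)

The LoRA-patched UNet's prediction for the target prompt ("1girl") is then pulled toward this composed target, which is what the MSE loss shown earlier does.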

After editing the prompts you want to train on, save the file.

3. Edit train_leco.ps1

# LoRA train script by @bdsqlsz

# Train mode (lora, sdxl_lora) | 训练模式
$train_mode = "sdxl_lora"

# Train data config | 设置训练配置路径
$config_file = "./examples/neg4all_config.yaml" # config path | 配置路径

# ============= DO NOT MODIFY CONTENTS BELOW | 请勿修改下方内容 =====================
# Activate python venv
.\venv\Scripts\activate

$Env:HF_HOME = "huggingface"
$Env:XFORMERS_FORCE_DISABLE_TRITON = "1"
$ext_args = [System.Collections.ArrayList]::new()
$laungh_script = "train_lora"

if ($train_mode -ilike "sdxl*"){
  $laungh_script = $laungh_script + "_xl"
}

# run train
python "./$laungh_script.py" `
  --config_file=$config_file

Write-Output "Train finished"
Read-Host | Out-Null ;

You just need to change 2 things:

train_mode: choose lora (for SD1.5/SD2.x) or sdxl_lora

config_file: set it to your config path

Then save it.

4. Right-click train_leco.ps1

and run it with PowerShell.

OK, training starts~

Wait for almost 1 hour (SD1.5 takes just 20 minutes),

and your LECO LoRA is ready~

5. You can use a merge script, such as supermerge, to merge it into your checkpoint.
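If you are curious what merging actually does, the underlying math is just adding the low-rank product onto each matching base weight. A rough sketch for a plain linear layer (generic LoRA merge math, not the code of any particular merge script):

import torch

def merge_lora_weight(base_weight: torch.Tensor,
                      lora_down: torch.Tensor,
                      lora_up: torch.Tensor,
                      alpha: float,
                      rank: int,
                      multiplier: float = 1.0) -> torch.Tensor:
    # Fold one LoRA pair into its base weight:
    # W' = W + multiplier * (alpha / rank) * (up @ down)
    scale = multiplier * alpha / rank
    return base_weight + scale * (lora_up @ lora_down)

# Toy example: a 320x320 attention projection with the rank-32 LoRA from the config above.
W = torch.randn(320, 320)
down = torch.randn(32, 320)  # lora_down: (rank, in_features)
up = torch.randn(320, 32)    # lora_up:   (out_features, rank)
W_merged = merge_lora_weight(W, down, up, alpha=1.0, rank=32)
print(W_merged.shape)

Conv layers in a c3lier/LoCon network need the same idea applied to reshaped convolution weights, which is why using an existing merge script is the easy route.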

If you want to know more details, just leave a comment here.

Support 青龍聖者@bdsqlsz on Ko-fi! ❤️ ko-fi.com/bdsqlsz
