Creating a Lora by ranking images [Leco score v2]

Leco score v1: https://civitai.com/articles/4216

Leco score v1 sample: https://civitai.com/models/317942

Leco score v2: https://civitai.com/articles/5416

Leco score v2 sample: https://civitai.com/models/471936

Leco score v2 training guide: this article

Step 1: Rank some images

  • download the training data from the LecoScoreV2 example Lora and unzip it somewhere

  • it includes a very small image-ranking site/app with 5k highly rated images from civitai

  • go to the site folder 'cd rate_images'

  • launch the flask app 'python rate_image_website.py' ('pip install flask' if you don't have flask)

  • browse to 'localhost:5000' and rate some images; every time you click, a text file should appear with the same name as the image

  • note: try to keep the ratings consistent (for example, if you want more colorful images, always give colorful images one extra star)

  • note: do not rate bad images (signatures, XYZ grid prompts) so they are excluded from the dataset

  • note: the images have been ordered to be maximally distant from one another, so ranking even just the first images should give the score a decent amount of information

  • note: all images are on one page, this is just a few lines of code
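The rating mechanism above boils down to writing a sidecar text file per image. A minimal stdlib-only sketch of what each click does (the helper name is hypothetical; the real 'rate_image_website.py' may differ in details):

```python
from pathlib import Path

def save_rating(image_path: str, stars: int) -> Path:
    """Write the star rating next to the image: same basename, .txt extension.

    Hypothetical helper illustrating the sidecar-file convention used by
    the rating app; unrated images simply never get a .txt file.
    """
    rating_file = Path(image_path).with_suffix(".txt")
    rating_file.write_text(str(stars))
    return rating_file
```

So rating 'cat_001.png' with 4 stars produces 'cat_001.txt' containing '4' in the same folder.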

Step 2: Put the result into a tensor

  • 'python ratings_to_tensor.py'; this will create 'scored_latents.safetensors'

  • it just runs the images through the VAE and stores the latents and scores in a .safetensors file
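The gathering half of that step can be sketched with the stdlib alone (hypothetical helper; the real 'ratings_to_tensor.py' additionally encodes each image with the VAE and writes the latents and scores out via safetensors):

```python
from pathlib import Path

def collect_ratings(folder: str, exts=(".png", ".jpg", ".jpeg", ".webp")):
    """Pair each rated image with the score from its sidecar .txt file.

    Sketch of the collection step only. Images without a rating file are
    skipped, which is how bad images stay out of the dataset.
    """
    pairs = []
    for txt in sorted(Path(folder).glob("*.txt")):
        for ext in exts:
            img = txt.with_suffix(ext)
            if img.exists():
                pairs.append((img, float(txt.read_text().strip())))
                break
    return pairs
```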

Step 3: Finetune resnet

  • this will create weights for an image analyser that approximately replicates your scoring

  • copy scored_latents.safetensors next to finetune_resnet.py

  • I left a bug: in finetune_resnet.py, replace 'resnet_pretrained = safe_open("resnet_pretrained.ckpt")' with 'resnet_pretrained = safe_open("resnet_pretrained.safetensors", framework="pt")'

  • 'python finetune_resnet.py'; you need timm (for the resnet implementation), pytorch_lightning (because I'm too lazy to write backward/step) and scikit-learn (for the train/val split function)

  • this should give you a set of weights in checkpoints_finetune called 'finetune-best-$valscore$.ckpt' (where $valscore$ is the validation score)
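At its core this fine-tune is just regression: predict the score from the latent and minimise the squared error. The real script does this with a timm ResNet trained under pytorch_lightning; the objective can be sketched with a toy linear model in numpy (all data here is synthetic, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-ins: 64 flattened "latents" of dim 16 and synthetic 1-5 star scores.
latents = rng.normal(size=(64, 16))
true_w = rng.normal(size=16)
scores = np.clip(latents @ true_w + 3.0, 1.0, 5.0)

# Plain gradient descent on mean squared error, in place of Lightning.
w = np.zeros(16)
b = 0.0
for _ in range(500):
    err = latents @ w + b - scores
    w -= 0.01 * (latents.T @ err) / len(scores)
    b -= 0.01 * err.mean()

mse = float(((latents @ w + b - scores) ** 2).mean())
```

After training, `mse` should be well below the error of always predicting the mean score, which is the same sanity check you would apply to the ResNet's validation loss.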

Step 4: Train a network with Leco

  • make Leco work on your machine

  • 'pip install timm' on your Leco environment to get the reference resnet implementation

  • copy the provided yaml files to the examples directory

  • edit them (you can change the lora dimension, the output directory and the base SD model)

  • copy the finetuned resnet 'finetune-best-$valscore$.ckpt' to the leco base directory and rename it 'finetuned_resnet.ckpt'

  • copy the provided 'train_lora_score.py' to the leco home directory

  • launch training with 'python train_lora_score.py --config_file examples/parti_aesth.yaml'

  • note: after training has run for ~300 steps, you can start testing the Lora with a weight of ~10

  • note: the reported loss is actually the score multiplied by -1000, so it should start around -2500 (if your ratings averaged 2.5) and get progressively lower (for me, -2700 after 3000 steps, i.e. an average score of 2.7)
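The loss/score relationship in the note above, as a tiny sanity check (stdlib only):

```python
def loss_to_score(reported_loss: float) -> float:
    """Invert the reported training loss back to the average score.

    The trainer reports score * -1000, so a loss of -2500 corresponds to
    an average score of 2.5 stars, and -2700 to 2.7.
    """
    return reported_loss / -1000.0
```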
