Training a Lora for Playground V2.5 - Simple GUIDE
In this short text, I will provide a step-by-step guide on how I trained a LoRA for Playground V2.5. It's important to note that, at the moment, Playground V2.5 LoRA training is not implemented in the Kohya repository, so you need to train directly with the script from the diffusers repository.

Training with Kohya is not possible because of Playground V2.5's EDM training objective and its subtle architecture changes.

The script used is the advanced one available here:

https://raw.githubusercontent.com/huggingface/diffusers/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py

I'm also providing a script that converts datasets in Kohya format (folders of image + caption `.txt` pairs) into a dataset that can be uploaded to the Hugging Face Hub:

https://gist.githubusercontent.com/artificialguybr/a1c58ad578d0446d493c9793093196e1/raw/b8cede5d9ad5f9911de70ac187c6b078eea693c8/sd-dataset-to-hf-dataset.py
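For reference, the core of that conversion is pairing each image with the `.txt` caption file of the same name. Here is a minimal illustrative sketch of that pairing step (the gist above is the authoritative script; `collect_pairs` is a name I'm using only for illustration):

```python
# Illustrative sketch (not the gist itself): pair each image in a Kohya-style
# folder with the .txt caption file that shares its name.
from pathlib import Path

IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".webp"}

def collect_pairs(folder):
    """Return [{'image': path, 'caption': text}, ...] ready for upload."""
    pairs = []
    for img in sorted(Path(folder).iterdir()):
        if img.suffix.lower() not in IMAGE_EXTS:
            continue  # skip the .txt files themselves and anything else
        txt = img.with_suffix(".txt")
        caption = txt.read_text(encoding="utf-8").strip() if txt.exists() else ""
        pairs.append({"image": str(img), "caption": caption})
    return pairs

# From here, datasets.Dataset.from_dict(...) plus push_to_hub() uploads the
# pairs as a Hugging Face dataset (see the gist for the full version).
```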

Installation.

To fine-tune it, install the required packages with pip:

```bash
pip install huggingface_hub datasets pillow xformers bitsandbytes transformers accelerate wandb dadaptation prodigyopt torch -q
pip install peft -q
pip install git+https://github.com/huggingface/diffusers.git -q
```

Then run `accelerate config default` to set up Accelerate with its default configuration.

Running the Script

You will run the script like this:

```bash
#!/usr/bin/env bash
accelerate launch train_dreambooth_lora_sdxl_advanced.py \
  --pretrained_model_name_or_path="playgroundai/playground-v2.5-1024px-aesthetic" \
  --dataset_name="$dataset_name" \
  --instance_prompt="$instance_prompt" \
  --validation_prompt="$validation_prompt" \
  --output_dir="$output_dir" \
  --caption_column="$caption_column" \
  --do_edm_style_training \
  --mixed_precision="bf16" \
  --resolution=1024 \
  --train_batch_size=3 \
  --repeats=1 \
  --report_to="wandb" \
  --gradient_accumulation_steps=1 \
  --gradient_checkpointing \
  --learning_rate=1e-5 \
  --optimizer="AdamW" \
  --lr_scheduler="constant" \
  --rank="$rank" \
  --max_train_steps=2000 \
  --checkpointing_steps=2000 \
  --seed="0" \
  --push_to_hub
```
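The `$dataset_name`, `$instance_prompt`, etc. are shell placeholders you set before launching. As a hypothetical illustration, the same invocation can also be assembled as an argument list in Python, e.g. to launch it with `subprocess.run` from a Colab cell (`build_training_command` is a helper name I'm inventing here):

```python
# Hypothetical helper that assembles the accelerate invocation above as an
# argument list. Flag names and values mirror the bash command exactly;
# fill in your own values for the placeholder parameters.
def build_training_command(dataset_name, instance_prompt, validation_prompt,
                           output_dir, caption_column, rank):
    return [
        "accelerate", "launch", "train_dreambooth_lora_sdxl_advanced.py",
        "--pretrained_model_name_or_path=playgroundai/playground-v2.5-1024px-aesthetic",
        f"--dataset_name={dataset_name}",
        f"--instance_prompt={instance_prompt}",
        f"--validation_prompt={validation_prompt}",
        f"--output_dir={output_dir}",
        f"--caption_column={caption_column}",
        "--do_edm_style_training",
        "--mixed_precision=bf16",
        "--resolution=1024",
        "--train_batch_size=3",
        "--repeats=1",
        "--report_to=wandb",
        "--gradient_accumulation_steps=1",
        "--gradient_checkpointing",
        "--learning_rate=1e-5",
        "--optimizer=AdamW",
        "--lr_scheduler=constant",
        f"--rank={rank}",
        "--max_train_steps=2000",
        "--checkpointing_steps=2000",
        "--seed=0",
        "--push_to_hub",
    ]

# import subprocess
# subprocess.run(build_training_command("user/my-dataset", "photo of sks person",
#     "photo of sks person at the beach", "output", "caption", 8), check=True)
```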

Script Parameters.

- `dataset_name`: the Hugging Face Hub path to your dataset.

- `instance_prompt`: when custom captions are enabled, this prompt is still used for any images with missing captions, as well as in the model card README. If custom captions are not used, this prompt becomes the caption for all training images.

- `validation_prompt`: used to generate images throughout training, letting you watch the model's learning curve. You can also change `num_validation_images` (4 by default) and `validation_epochs` (50 by default) to control how many images are generated with the validation prompt and how many epochs pass between each DreamBooth validation.

- `caption_column`: the name of the caption column in the HF dataset.

Recommended Settings.

The best results I obtained were with a learning rate of 1e-5 (AdamW) or with the Prodigy optimizer. Be careful with overfitting.
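If you switch to Prodigy, note that it is an adaptive optimizer: the diffusers examples use a learning rate around 1.0 with it rather than a small fixed rate. As a hedged sketch (a starting point under those assumptions, not a verified recipe, and `apply_prodigy` is a helper name I'm inventing), the flag swap looks like:

```python
# Assumed overrides for a Prodigy run: swap the optimizer flag and raise the
# learning rate to 1.0, as the diffusers examples do for adaptive optimizers.
def apply_prodigy(args):
    """Replace the optimizer/learning-rate flags in an argument list."""
    overrides = {"--optimizer": "prodigy", "--learning_rate": "1.0"}
    out = []
    for a in args:
        key = a.split("=", 1)[0]
        out.append(f"{key}={overrides[key]}" if key in overrides else a)
    return out
```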

Here are links where you will find a ComfyUI workflow for inference and the Colab notebook for training:

Download here:

- https://huggingface.co/artificialguybr/Playground-V2.5-Lora-Colab-Inference-Comfyui

or

- https://github.com/artificialguybr/Playground-V2.5-LoraCreator-Inference/tree/main

ATTENTION: I did not make the workflows. They are community workflows available on ComfyUI Workflows.
