ZyloO's LoRA Training & Preset

👋 Introduction

This guide provides an overview of my training process for LoRAs using Kohya_ss.

Training a LoRA model involves several steps, from preparing your dataset to fine-tuning the model for optimal performance. This guide offers a brief walkthrough of my process, but it will not delve deeply into every setting, since I am sharing my preset. Some values, such as the number of epochs and the batch size, will need to be adjusted based on your hardware and data.

📦 Preparations

Required Installations

First, you need to have Kohya_ss installed. Please refer to the Kohya_ss GitHub page for installation instructions.

Directory Structure

Ensure you have the following folder structure:

Root
  loraName
    img
      repeats_subject (explained later)
    log
    model
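
For example, a dataset of women trained with 40 repeats (how to pick the repeats value is explained in the Steps Calculation section below) would fill in like this, with loraName replaced by whatever you call your LoRA:

Root
  loraName
    img
      40_woman
    log
    model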

Essential Requirements

  1. Collection of Images: Gather high-quality images of your character/style. Quality is more important than quantity; aim for 15-50 images.

  2. Captions: Generate captions using Kohya_ss Utilities -> Captioning. WD14 Captioning is commonly used, but BLIP can be better in some situations. Add a unique prefix (token) and use the default settings.
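
As a purely hypothetical example: WD14 Captioning typically writes one .txt file per image (image01.txt next to image01.png) containing comma-separated tags. With a made-up prefix token such as "myLoraToken", a caption could look like:

      myLoraToken, 1girl, solo, long hair, smiling, outdoors

The token and tags above are invented for illustration; the important part is that the same unique prefix appears at the start of every caption so it can later serve as the trigger word for the LoRA.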

📝 Steps Calculation

The number of training steps depends on the number of images, the number of epochs you train for, and the batch size. Generally, aim for around 3000-4000 total steps, with a minimum of 2500. Here's how to calculate it for a dataset of 23 images (a small script sketch follows the list below):

  1. Steps Calculation:

    • Formula: number of images * repeats ≈ 900 (this product is your steps per epoch at batch size 1)

    • Example: For 23 images, aim for around 900 steps:

      23 images * 40 = 920
    • The result (e.g., 40) goes into the folder name as repeats_subject = 40_subject.

  2. Subject Naming:

    • Based on what you are training, change the subject in the folder name. For example, if the dataset is for women:

       40_woman
  3. Total Steps and Epoch Calculation:

    • Aim for total steps between ~3000-4000, with a minimum of 2500:

      (number of images * repeats) * epochs / batch size ≈ 3000-4000
    • For example, using the previous values:

      920 * 7 / 2 = 3220
    • A batch size of 2 is often recommended for efficiency, but the best value depends on your hardware; on my setup, a batch size of 1 actually gives better iterations per second (it/s).
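
This is not the batch script from the Attachments, just a rough Python sketch of the same arithmetic; the image count, targets, and batch size below are the example values from this section and should be changed to match your dataset:

  # Rough repeats/epoch calculator matching the arithmetic above.
  num_images = 23            # images in your dataset
  target_per_epoch = 900     # aim for images * repeats of roughly 900
  target_total = 3500        # aim for roughly 3000-4000 total steps
  batch_size = 1             # batch size you plan to train with

  repeats = round(target_per_epoch / num_images)                        # e.g. 40
  steps_per_epoch = num_images * repeats                                # e.g. 920
  epochs = max(1, round(target_total * batch_size / steps_per_epoch))   # e.g. 4
  total_steps = steps_per_epoch * epochs / batch_size                   # e.g. 3680

  print(f"folder name : {repeats}_subject")
  print(f"epochs      : {epochs}")
  print(f"total steps : {total_steps:.0f}")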

🎛 Configuring Kohya

Once your folder structure is set up, and you have your images and captions ready, it’s time to start training.

LoRA Tab Configuration

  1. Load Preset: Select the "LoRA" global tab in Kohya_ss, and load the preset shared in this guide by selecting "Configuration file" -> "Open" and choosing the provided .json file. This will prefill most of the necessary settings.

  2. Source Model Tab:

    • Pretrained model name or path: Select the base model you want to use for creating the LoRA. Use the button on the right to choose the .safetensors file, or manually enter the full path.

  3. Folders Tab:

    • Image Folder: Enter the path to the img folder (not the repeats_subject folder inside it). You can either use the button on the right to select it or manually enter the path.

    • Output Folder: Enter the path to the model folder where the trained model will be saved.

    • Logging Folder: (Optional) If you want to keep track of the training process, specify the path to the log folder.

    • Model output Name: Enter the name that your model will have once training is complete.

  4. Parameters Tab:

    • Train batch size: This is set to 1 in the preset, which I find to be the fastest in my case. If you wish to change it, you'll need to adjust the epochs and repeats accordingly. You can recalculate these values using the provided script in the Attachments, using my management software, or with the quick worked example after this list.

    • Epoch: This is preset to 4 but can be adjusted based on your needs. Be sure to recalculate the overall steps if you make changes.
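
For example, using the 23-image / 40-repeat dataset from the Steps Calculation section (920 steps per epoch) together with the preset defaults:

      920 * 4 epochs / batch size 1 = 3680 total steps
      920 * 8 epochs / batch size 2 = 3680 total steps

Both land in the ~3000-4000 range; if you double the batch size, roughly double the epochs to keep the total steps the same.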

Once everything is configured, you're ready to click "Start Training" :)

🗂 LoRA Manager

To simplify the management process, you can use my application, which is available on GitHub. Although not perfect, it can be useful. Features include:

  • Tree view for dataset folders

  • Folder status indicators (Pending, Done, Retrain)

  • Detailed dataset information (image count, captions, model presence)

  • Create, delete, and open dataset folders

  • Copy folder paths

  • Launch Kohya_ss and BooruDatasetTagManager

  • Calculate and display repetitions and epochs

  • Change the root directory for all datasets

📜 Training Preset

I uploaded my preset for training with Kohya_ss, which may be updated from time to time. All changes and updates will be documented in the Changelog section.

Hardware Reference

For reference, my training hardware includes:

  • GPU: RTX 4090

  • CPU: i7-13700K

  • RAM: 64 GB 6400 MHz

Training a character for 3640 steps takes approximately 34 minutes, with an average speed of 1.75 iterations per second (it/s).

🖼 Outputs

These characters were trained using this method and preset (Pony Realism v2.1).

🌞 Requests

Support

  • If you'd like to support me, feel free to do so on Ko-Fi.

📑 Changelog

  • Renamed the json preset from "PonyRealismTrainingPreset" to "ZyloOsTrainingPreset"

  • I shifted the focus of the article away from Pony Realism training to emphasize that the preset and guide can be applied to most models, since this is the preset I use for training on any model.

  • Added some more basic explanation of the values to enter on the "Source Model" and "Folders" tabs.

  • Updated the preset to change the batch size from 2 to 1, and added some explanation of the "Folders" tab fields.

  • Since this attracted more attention than I expected, I decided to structure it better, update it, and provide more general steps. I also uploaded the LoRA manager software that I use.

  • Added a batch script to the attachments that automates the calculation based on your number of images.

  • Changed the recommended total steps from an average of 2500 to a range of 3000-4000.

  • Added both example image datasets to the attachments.
