
My Personal LoRA Workflow


Sidenote 2024:
I post my current workflow on a KanBoard tracker and also post the epoch images in my private gallery. Feel free to check it out!

I've often been asked, "How can you create LoRAs so quickly?"

Well, the answer is relatively simple. Here's why:

I've established my own personal workflow, which you can see in the title image. I use a KanBoard to organize my tasks and have a dedicated "GPU Cluster" that operates around the clock. Most of the processes are automated. This means I primarily focus on preparing the initial dataset. Tasks such as collecting, cleaning, upscaling/denoising, tagging, sorting, and more are handled by a separate AI. The same goes for configuring the training.
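
To give a rough idea of what this automation looks like, here is a heavily simplified Python sketch of such a stage chain. It is not my actual code; every function, name, and path is just a placeholder for the real tools.

```python
# Heavily simplified sketch of an automated dataset pipeline (placeholder code,
# not the real scripts): each dataset folder passes through a fixed chain of
# stages, and each stage is a function that takes the working folder and
# returns it for the next stage.
from pathlib import Path

def collect(folder: Path) -> Path:
    # gather raw images into the working folder
    return folder

def clean(folder: Path) -> Path:
    # drop duplicates, broken files, and off-topic images
    return folder

def upscale_denoise(folder: Path) -> Path:
    # hand the folder to an external upscaler/denoiser
    return folder

def tag(folder: Path) -> Path:
    # run a tagger and write one caption .txt file per image
    return folder

def sort_by_concept(folder: Path) -> Path:
    # move images into per-concept subfolders based on their tags
    return folder

PIPELINE = [collect, clean, upscale_denoise, tag, sort_by_concept]

def run_pipeline(dataset_dir: str) -> Path:
    folder = Path(dataset_dir)
    for stage in PIPELINE:
        folder = stage(folder)
    return folder

if __name__ == "__main__":
    run_pipeline("datasets/my_new_lora")  # placeholder path
```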

The inspiration for this setup came from this guide:

Guide: Make Your Own LoRAs Easy and Free

I adapted the guide to fit my local environment. Now, all I have to do is prepare the dataset and review the results at the end. Of course, it's a learning process. Not every LoRA will resonate with everyone, so I continuously refine and improve them based on feedback.

To be completely honest, I'm still a novice when it comes to creating prompts. So, I use a little help from this script:

Tool by Peaksel: Prompting Helper Script - StylePile on Steroids

My Tools:

In my workflow, I employ a variety of specialized tools, including:

  • Kohya_ss: For tagging and training.

  • Custom Scripts: Utilizing a mix of PowerShell and Python, these scripts handle tasks like cleaning, tagging, renaming, and sorting (a simplified sketch of the renaming step follows this list).

  • Topaz AI: Enhances the source quality of the dataset through denoising and upscaling.

  • Lama Cleaner: Essential for removing unwanted image details, such as other individuals, text, and game character HUDs.

  • SD.Next: Paired with "Tool by Peaksel", this is my go-to for image creation.
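
As an example of what these custom scripts do, here is a minimal sketch of the renaming step in Python. The folder layout and naming scheme are assumptions, and this is not my production script; it just renames each image sequentially and keeps its Kohya-style caption .txt file in sync.

```python
# Minimal sketch of the renaming step (assumed layout, not the production script):
# give every image a clean sequential name and rename its matching caption
# .txt file along with it. Assumes the target names are not already taken.
from pathlib import Path

IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".webp"}

def rename_dataset(folder: str, prefix: str = "img") -> None:
    root = Path(folder)
    images = sorted(p for p in root.iterdir() if p.suffix.lower() in IMAGE_EXTS)
    for i, image in enumerate(images, start=1):
        new_stem = f"{prefix}_{i:04d}"
        caption = image.with_suffix(".txt")
        image.rename(root / f"{new_stem}{image.suffix.lower()}")
        if caption.exists():
            caption.rename(root / f"{new_stem}.txt")

if __name__ == "__main__":
    rename_dataset("datasets/my_new_lora/10_character")  # placeholder path
```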

At the heart of the operation is a privateGPT AI, a self-trained model, which manages the intermediate steps and determines the necessary training procedures, methods, and more.

The entire system is hosted in my compact yet impressive server cabinet, spread across two servers. Server 1, named "GPU Cluster", boasts 2x Xeon 50-core CPUs (3.5 GHz per core), 5 Nvidia Quadros (128 GB VRAM in total), and 128 GB RAM. Server 2, dubbed "AI Core", is equipped with an AMD Ryzen 5 4500 (6 cores, 3.2 GHz), an RTX 4090 with 16 GB VRAM, and 64 GB RAM. For storage, I run a cluster of 4 Asustor NAS units connected via fiber, offering a total capacity of around 1 petabyte.

I believe the roles of each server are self-explanatory. The 'AI Core' serves as the nerve center and brain of the operation, while the 'GPU Cluster' provides the muscle power.

To keep track of everything seamlessly, I use KanBoard for automatic status tracking.
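
The "automatic" part simply means the scripts report their status back to the board. As a rough illustration, assuming KanBoard's JSON-RPC API (the endpoint, method, and parameter names should be checked against your KanBoard version's documentation), a stage could move its task card like this:

```python
# Rough illustration of pushing a status update to KanBoard via its JSON-RPC
# API. URL, credentials, and IDs are placeholders, and the exact method and
# parameters should be verified against the KanBoard API documentation.
import requests

KANBOARD_URL = "https://kanboard.example.local/jsonrpc.php"  # placeholder
API_TOKEN = "your-api-token"  # placeholder

def move_task(task_id: int, project_id: int, column_id: int, position: int = 1) -> bool:
    payload = {
        "jsonrpc": "2.0",
        "method": "moveTaskPosition",
        "id": 1,
        "params": {
            "project_id": project_id,
            "task_id": task_id,
            "column_id": column_id,
            "position": position,
        },
    }
    # KanBoard authenticates API calls with the user "jsonrpc" plus the API token
    response = requests.post(KANBOARD_URL, json=payload, auth=("jsonrpc", API_TOKEN))
    response.raise_for_status()
    return bool(response.json().get("result"))

if __name__ == "__main__":
    # e.g. move the card for the current dataset into the "Training" column
    move_task(task_id=42, project_id=1, column_id=3)
```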

And yes, I can already hear the echoes. I'm that IT nerd with glasses and long hair who spends more time in the basement than in the fresh air. xD Still, anyone can build something like this with enough motivation and patience.

Why am I sharing this?

Firstly, because I'm frequently asked about my process.

Secondly, I believe in continuous improvement. Constructive feedback is invaluable to me. Once I've ironed out all the kinks, I might even release my training framework for others to use.

So, I kindly ask you to review my work and provide critical yet constructive feedback, especially a star rating. It costs you nothing more than a click, but for creators like me, it's invaluable. This isn't just about my LoRAs. Every creator thrives on quality feedback and ratings to produce top-notch work.

Cheers to our amazing community! Let's continue to grow and support each other.

Got Questions?
Contact me on the Civitai Discord or join my MatterMost.

Update #1:

Introducing one of my valuable tools for manual correction work, the Dataset Helper!

You can find it on GitHub: Dataset Helper

Check the GitHub repo for updates and newer files.

This versatile tool is designed to simplify various tasks, making your workflow more efficient:

  1. Project Folder Creation: Create a basic folder structure effortlessly.

  2. Image Conversion: Seamlessly convert a wide range of image formats to PNG without any loss in quality.

  3. Tag Removal: After your tagging process with Kohya_ss, use this tool to easily remove any unwanted tags (a stripped-down sketch of this step follows the list).
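
To show what the tag-removal step boils down to (the actual tool wraps this in a UI), here is a stripped-down Python sketch: it walks over the Kohya-style caption .txt files in a folder and drops every tag that appears on a blacklist. The folder path and tag names are just examples.

```python
# Stripped-down sketch of the tag-removal idea: Kohya_ss writes one
# comma-separated caption .txt per image; this walks the folder and removes
# every tag that appears on a blacklist. Paths and tags are examples only.
from pathlib import Path

def remove_tags(folder: str, unwanted: set[str]) -> None:
    for caption_file in Path(folder).glob("*.txt"):
        tags = [t.strip() for t in caption_file.read_text(encoding="utf-8").split(",")]
        kept = [t for t in tags if t and t not in unwanted]
        caption_file.write_text(", ".join(kept), encoding="utf-8")

if __name__ == "__main__":
    remove_tags("datasets/my_new_lora/10_character", {"watermark", "text", "signature"})
```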

The Dataset Helper is open source and completely free under the GNU GPL license, so it's accessible to everyone. Feel free to explore, contribute, and make the most of this handy utility!

This is just one part of my toolkit designed to make your tasks easier. I'm constantly working on new features and improvements to give you an even better experience, and your feedback and support are greatly appreciated.

Keep in Mind:
This is just a simple tool. No mysterious background activities, no automatic updates, and definitely no hidden coffee-making features (although that would be pretty cool!). It's designed to get the job done without any unnecessary bells and whistles. So, don't expect high-res graphics or special effects when removing tags—it's all about efficient functionality! 😄

Update #2:

I'm currently moving from my well-known Matrix model over to NAI, so I need to change a few things in the code (calculating the required steps/epochs, the training approach, etc.).

I will release a few models for testing.

Update #3:

The transition to the new Model NAI is complete, and the code is running smoothly. However, I have noticed a slight change in hardware requirements with the new model. The GPUs are getting warmer now, necessitating a reevaluation and enhancement of the cooling system for my GPU cluster. This is a minor issue, but addressing it is important.

The positive feedback I have received from all of you is a significant motivation to continue moving forward. Thank you all for your support!

Update #4 - Last Arc

I'm excited to share that I've successfully streamlined my workflow! You can now easily track my progress on this page: Workflow Tracker. Most of the process has been automated, leaving me with just the Initial Dataset Preparations and the final review to handle.

I'm truly thrilled about this development, and I owe a big thanks to those who provided valuable feedback, which helped me refine the scripts and training process.

On another note, I'm currently planning to resurrect some old anime gems. If you have any recommendations, please don't hesitate to share them with me!

Free Addition

Introducing the Steps Calculator with Recommendations!

Are you delving into deep learning and often find yourself pondering how many training steps your dataset needs? Look no further! The Steps Calculator is here to the rescue. This intuitive tool not only calculates the number of training steps based on the number of images, repeats, epochs, and batch size, but also provides personalized recommendations on repeats and epochs depending on your dataset size.
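
For reference, the core calculation is simply images × repeats × epochs divided by the batch size. Here is a minimal Python sketch of the idea; the recommendation thresholds are illustrative placeholders, not the exact values the tool uses.

```python
# Minimal sketch of the calculation behind the Steps Calculator. The
# recommendation thresholds below are illustrative placeholders, not the
# exact values shipped with the tool.
import math

def training_steps(num_images: int, repeats: int, epochs: int, batch_size: int) -> int:
    steps_per_epoch = math.ceil((num_images * repeats) / batch_size)
    return steps_per_epoch * epochs

def recommend(num_images: int) -> tuple[int, int]:
    """Return a (repeats, epochs) suggestion based on dataset size."""
    if num_images < 50:
        return 10, 10  # small dataset: more repeats
    if num_images < 200:
        return 5, 10
    return 2, 8  # large dataset: fewer repeats

if __name__ == "__main__":
    repeats, epochs = recommend(120)
    print(training_steps(120, repeats, epochs, batch_size=2))
```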

Open-source and freely available, the Steps Calculator is designed to simplify and enhance your machine learning journey. Created with care for the community, it's a valuable addition to any data scientist's toolkit.

Explore the tool now:


