Hello everyone,
today is December 31st, and I’d like to share a short review of the year 2025.
From User to Creator
I actually joined here in fall 2024, but at first only as a user. I downloaded and used many models — the sheer range of choices was overwhelming.
Pony, NoobAI and Illustrious
At first, I started with the Pony base model, since it was recommended for beginners. The quality was okay, but I often felt that the characters didn’t look quite right — they were usually a bit different from the original.
NoobAI was great, but when I tried it, it was still pretty new, and many LoRAs weren't compatible with it.
Finally, I switched to Illustrious — and that was a great decision. Illustrious is very stable, looks great, and usually generates characters very close to the original (depending on the LoRA, of course). And since it became very popular, many creators started releasing their models using it as the base checkpoint.
Fast, faster, fastest
I spent a lot of time creating images of my favorite characters, but in the beginning I was simply using the vanilla Automatic1111 — and that was a real pain 😅🤣
Generations took soooo loooong, and enhancing the images (ADetailer, Ultimate Upscale) took even more time.
So I searched the internet and found a solution: WebUI ReForge — kind of an enhanced version of the vanilla Automatic1111. It worked much faster, and all my extensions were compatible 🤩
But shortly after that, I read about WebUI Forge — an even better enhancement of the vanilla WebUI. And hell yeah, it was great — BUT it was built on Gradio 4, so many extensions were no longer compatible.
Fortunately, the internet moves fast: many developers updated their extensions for Gradio 4, and the creator of Forge even integrated a few important extensions directly into Forge itself.
After a few months, I found out that Forge was no longer being updated, since the creator took a break from developing and maintaining it. That’s when I discovered my current setup: Forge Classic, which uses new attention-processing methods like SageAttention, FlashAttention, etc.
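To give you an idea of why that matters: SageAttention is basically a faster drop-in replacement for PyTorch's standard attention kernel. Here's a minimal sketch of the idea, assuming the sageattention package is installed (this is just an illustration, not Forge Classic's actual internal code):

```python
import torch
import torch.nn.functional as F
from sageattention import sageattn  # pip install sageattention

# Dummy attention inputs in the usual (batch, heads, seq_len, head_dim) layout
q = torch.randn(1, 8, 1024, 64, dtype=torch.float16, device="cuda")
k = torch.randn(1, 8, 1024, 64, dtype=torch.float16, device="cuda")
v = torch.randn(1, 8, 1024, 64, dtype=torch.float16, device="cuda")

baseline = F.scaled_dot_product_attention(q, k, v)  # stock PyTorch attention
faster = sageattn(q, k, v)                          # quantized SageAttention kernel
```

Since attention runs thousands of times per generation, swapping in a faster kernel speeds up the whole pipeline without changing the results much.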
Good, better, the best
My first generations looked absolutely hideous 🤣🤣🤣.
I was like: “Who needs quality tags? What are Danbooru tags? I’ll just tell the model what I want and it will magically generate it. No need to read the model description whatsoever.”
Yeah… and the results were just as bad as you’d expect 😅
So I finally came to my senses and actually read the freaking model description:
“score_9, score_8_up, score_7_up, score_6_up, score_5_up, score_4_up… just describe what you want, tag1, tag2…”
“score_9 — what the heck is that!?”
That was my first reaction to quality tags 😅
Then I learned about Danbooru tags — and after applying what I’d learned, the images started to look much better. But I still wasn’t satisfied.
Next, I read about Hires Fix and the ADetailer extension. I installed and configured them — and the results improved again!
But there was still room for improvement… so I moved on to img2img with Ultimate Upscale and 4x upscaling.
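In case you're curious what that stage of my workflow looked like: here's a rough sketch of txt2img with Hires Fix via the WebUI's built-in API (start the UI with --api; all values are just example settings, not a magic recipe):

```python
import base64
import requests

payload = {
    "prompt": "masterpiece, best quality, 1girl, ...",  # quality tags + Danbooru tags
    "negative_prompt": "lowres, bad anatomy, blurry",
    "steps": 28,
    "cfg_scale": 7,
    "width": 832,
    "height": 1216,
    "enable_hr": True,            # Hires Fix on
    "hr_scale": 1.5,              # upscale factor
    "hr_upscaler": "Latent",
    "denoising_strength": 0.4,
}

r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload, timeout=600)
png_bytes = base64.b64decode(r.json()["images"][0])  # results come back base64-encoded
with open("output.png", "wb") as f:
    f.write(png_bytes)
```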

The Blackwell Disaster
It was early 2025, and I thought: I want it even faster.
So I started looking for a new graphics card. RTX 5090? 5080? — yeah… absolutely not in my price range 😅
So I decided to buy an RTX 5070 Ti. Installed it, updated the drivers, started Forge…
Error.
PyTorch not compatible.
I asked Dr. Google:
Turns out I needed a PyTorch nightly build (2.7+, compiled against CUDA 12.8) to get Blackwell support. Installed it.
Error again.
xFormers not compatible.
So I turned it off (recent PyTorch builds ship efficient attention kernels natively, so you probably don't even need xFormers on modern GPUs anyway).
Finally… we were ready to go!
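For anyone hitting the same wall: the whole fix boiled down to a CUDA 12.8 nightly wheel plus dropping xFormers. A sketch of the sanity check I ran afterwards (check pytorch.org for the current install command, since wheel names change over time):

```python
# The fix, roughly (run these in a shell first):
#   pip uninstall xformers
#   pip install --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/cu128
import torch

print(torch.__version__)                    # should show a nightly, e.g. 2.7.0.devYYYYMMDD+cu128
print(torch.cuda.get_device_name(0))        # "NVIDIA GeForce RTX 5070 Ti"
print(torch.cuda.get_device_capability(0))  # Blackwell reports (12, 0)
```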
The Missing Yuki
Like I said before, the range of choices on Civitai was overwhelming.
Still, I was often a bit disappointed when a character didn't have a model. Some older characters never got one at all, simply because nobody thought of them (for example, Yuki and the other characters from "Vampire Knight"). So I thought:
Hey — if others can do it, then you should be able to do it too!
The First Step
After getting a new graphics card and improving my settings, I finally decided to start creating models myself.
The first step was setting up kohya_ss and figuring out the right settings.
Repeats, Batch Size, and Epochs
Pretty quickly I realized I had to work with Dr. Google again, because there were so many things I didn’t know:
What’s the difference between steps, batch size, and epochs — and how do you configure them 🤔?
What values should I use for the two learning rates (UNet and text encoder)?
Which optimizer should I pick?
Should I use optimizer parameters… or not?
So I started reading about all of it and created a simple test model using the AdamW optimizer without any parameters — and it actually turned out pretty decent.
Then I made a few beginner mistakes, like:
training on top of fine-tuned checkpoints instead of the actual base model
setting Network Dimension and Alpha way too high
After some experimenting, Adafactor with cosine turned out to be a good combination for me.
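For reference, here's roughly how that maps onto kohya-ss sd-scripts arguments (a sketch with placeholder values for illustration, not my exact config):

```python
# Sketch: arguments you'd pass to kohya-ss sd-scripts' train_network.py
# (values are placeholders, not a guaranteed recipe)
train_args = [
    "--optimizer_type", "Adafactor",
    # Adafactor needs a fixed LR when you drive it with an external scheduler:
    "--optimizer_args", "relative_step=False", "scale_parameter=False", "warmup_init=False",
    "--lr_scheduler", "cosine",
    "--learning_rate", "1e-4",
    "--network_dim", "16",       # keep Dim/Alpha modest;
    "--network_alpha", "8",      # my early mistake was setting them way too high
    "--train_batch_size", "2",
]
```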
I also played around with epoch values and eventually found my sweet spot at around 750–1000 steps
(calculated as: images × repeats × epochs ÷ batch size).
For Illustrious I always use:
3 repeats
batch size 2
So with x = number of images and y = number of epochs, the formula simplifies to:
x × 3 × y ÷ 2
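A quick worked example to make the arithmetic concrete:

```python
# steps = images × repeats × epochs ÷ batch size
images, repeats, epochs, batch_size = 50, 3, 10, 2
steps = images * repeats * epochs // batch_size
print(steps)  # 750 -> the lower end of my 750–1000 sweet spot
```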
The Way of Learning
Unfortunately, I had to learn that good training settings alone aren’t enough —
training images and captioning are just as important.
The Blurry Yuki
One of my first models was Yuki Cross.
And while the results weren’t bad, something bothered me:
The images were kind of blurry — the quality just wasn’t great.
No surprise — many of the anime series I made LoRAs for are quite old, and the available video quality was only 480–720p.
So I started using an upscaler, and since then the models have turned out noticeably better.
I was satisfied — for the moment.
The Right Character Tag
At some point another user reached out to me and asked why I wasn’t using the existing Danbooru character tags.
Turns out that using them actually helps the model reproduce the character more accurately.
Chizuru and ControlNet
There were still some characters nobody had created models for, simply because they don’t have an anime adaptation.
And there was one character I wanted a model of more than any other:
Chizuru from the manga Juujika no Rokunin.
So I started looking for ways to colorize manga panels and eventually found a great solution: ControlNet.
I even made a full tutorial for it — if you want to know how to use it, feel free to check it out.
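If you just want the gist without the WebUI, here's a minimal diffusers sketch of the same idea (my tutorial uses the WebUI's ControlNet extension instead; the SD 1.5 lineart models below are just common public checkpoints, not necessarily what I use):

```python
import torch
from controlnet_aux import LineartDetector
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Extract clean line art from the black-and-white manga panel first
panel = load_image("manga_panel.png")
lineart = LineartDetector.from_pretrained("lllyasviel/Annotators")(panel)

# The lineart ControlNet keeps the drawing's structure while the model adds color
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_lineart", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "1girl, colored, vibrant colors, detailed",
    image=lineart,
    num_inference_steps=28,
).images[0]
image.save("colorized_panel.png")
```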
Captioning: More Important Than You Think
Around that time, I also discovered another very important factor:
proper captioning.
If you simply auto-caption your images without reviewing the generated tags, the model won’t turn out as good as it could.
Sometimes WD14 captioning is inaccurate and adds tags that aren’t actually present in the image.
However, those tags are still used during training — which can cause the model to generate things it shouldn’t.
So I began reviewing my captions, created a large blacklist of disallowed tags — and once again, the models improved.
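To give you an idea, the review step can be as simple as a small script that strips blacklisted tags from the caption files (a sketch; the folder name and blacklist entries are just examples):

```python
from pathlib import Path

# Example blacklist: tags WD14 sometimes hallucinates or that I never want trained
BLACKLIST = {"blurry", "watermark", "signature", "jpeg artifacts"}

# kohya-style dataset folder ("3" = repeats); one comma-separated .txt caption per image
for caption_file in Path("dataset/3_yuki_cross").glob("*.txt"):
    tags = [t.strip() for t in caption_file.read_text(encoding="utf-8").split(",")]
    kept = [t for t in tags if t and t not in BLACKLIST]
    caption_file.write_text(", ".join(kept), encoding="utf-8")
```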
Noise and Influence
But on my second manga model — Momoki (also from Juujika no Rokunin) — something strange happened.
The model kept generating weird blurry edges.
Eventually I realized why:
Nearly every training image contained those exact blurry edges.
And here’s the lesson:
If something keeps appearing repeatedly in your training images, the model will learn it — even if it’s on your caption blacklist — because its visual influence is too strong.
This new insight also helped me identify problems in models I had already released.
Take Renard, for example:
Renard has bandaged eyes in every single training image —
so you simply cannot generate the character without them.
At that point I started using AI to analyze and improve my workflow and realized that, especially with characters like this, I need to:
diversify the character’s appearance
reduce visual noise (such as background characters)
to achieve a more consistent and flexible model.
Cleaning Up My Own Mess
So I decided to create Version 2.0 models of certain characters, with fewer training images and more controlled appearances, using images generated with Version 1.0 as training data to build a more consistent model.
That’s where I made my last important discovery:
Too many extensions don’t help — they can even ruin a character when you’re creating a LoRA.
I also wrote a full article about this — feel free to check it out if you’re interested.
What’s Next?
Write what you want
Sounds simple, right?
There are many reasons you might want to reach out to me:
Model requests (of course)
Update requests (for models I already released, but which didn’t benefit from my newer knowledge)
Ideas and suggestions (for improvements, workflows, etc.)
Feedback (if you don’t like something I post)
Bug reports (for example: “13 — Scissors Seven”)
Request Board — Yes or No?
I have an idea in mind:
I’m thinking about writing an article where you can post your model requests in the comments.
What do you think about it?
Would you use it?
Or do you prefer sending requests via PM or comments like before?
Please let me know 🙂
