Note: My previous article was removed from the r/StableDiffusion community on Reddit because it was rewritten by ChatGPT, so I decided to write this one in my own words. I just want to mention that English is not my native language, so if there are any mistakes, I apologize in advance. I will try my best to explain what I have learned so far in this article.
After my last experiment, which you can find here, I decided to train lower-resolution models. Below are the settings I used to train two more models; I wanted to test whether we can get the same high-quality, detailed images when training at a lower resolution:
Model 1:
· Model Resolution: 512x512
· Number of images used: 4
· Number of tiles: 649
· Batch Size: 8
· Number of epochs: 80 (but stopped the training at epoch 57)
Speed was pretty good on my undervolted and underclocked RTX 3090: 14.76 s/it at batch size 8, which works out to about 1.84 s/it at batch size one. (Please see the attached resource zip file for more sample images and the config files.)
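The per-image figure above is just the batch time divided by the batch size (assuming roughly linear scaling with batch size), which can be sketched as:

```python
# Convert seconds-per-iteration at a given batch size into an
# effective per-image time, assuming time scales linearly with batch.
def per_image_time(sec_per_it: float, batch_size: int) -> float:
    return sec_per_it / batch_size

# 14.76 s/it at batch size 8 -> 1.845 s per image (~1.84 as quoted).
print(per_image_time(14.76, 8))
```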
The model was heavily overtrained by epoch 57; most of the generated images have plastic skin, and resemblance is hit and miss. I think that is due to training on just 4 images, and I also need better prompting. I have attached all the images in the resource zip file. Overall, though, I am impressed with the tiled approach: even when you train at low resolution, the model still has the ability to learn all the fine details.
Model 2:
· Model Resolution: 384x384 (I initially tried 256x256, but there was not much of a speed boost or much difference in VRAM usage)
· Number of images used: 53
· Number of tiles: 5400
· Batch Size: 16
· Number of epochs: 80 (I stopped the training at epoch 8 to test the model and included the generated images in the zip file; I will upload more images once I train this model to epoch 40)
Generated images with this model at epoch 8 look promising.
In both experiments, I learned that we can train on very high-resolution images with extreme detail and resemblance without requiring a large amount of VRAM. The only downside of this approach is that training takes a long time.
I still need to find the optimal number of epochs before moving on to a very large dataset, but so far, the results look promising.
Thanks for reading this. I am really interested in your thoughts; if you have any advice or ideas on how I can improve this approach, please comment below. Your feedback helps me learn more, so thanks in advance.
Links:
For tile generation: Tiling Script
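For readers wondering how 4 images can produce 649 tiles (or 53 images 5400 tiles), the trick is overlapping crops: a stride smaller than the tile size yields many tiles per photo. I have not inspected the linked script, so this is only a minimal sketch of the idea, with assumed tile and stride values:

```python
# Hypothetical sketch of an overlapping tile layout; the actual
# Tiling Script linked above may compute its crops differently.
def tile_boxes(width, height, tile=512, stride=256):
    """Return (left, top, right, bottom) crop boxes for an image.

    A stride smaller than the tile size makes neighbouring tiles
    overlap, which is how a handful of high-resolution photos can
    yield hundreds of training crops.
    """
    boxes = []
    for top in range(0, height - tile + 1, stride):
        for left in range(0, width - tile + 1, stride):
            boxes.append((left, top, left + tile, top + tile))
    return boxes

# Example: a single 4096x3072 photo yields 15 x 11 = 165 tiles of 512px.
print(len(tile_boxes(4096, 3072)))
```

Each box can then be passed to an image library's crop function to write the tile to disk.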