Published | Feb 28, 2024 |
Training | Steps: 9,800 |
Trigger Words | Timothee_Chalamet_512v1-9800 |
Hash | AutoV2 E6C86AAD02 |
Pre-Training
I gathered 21 HD images of Timothee Chalamet. I used only Birme to crop the HD photos instead of aligning the faces with Faceswap. None of the images were full body, but some were zoomed out. The images are 512x512 rather than 1024x1024 because my hardware can't handle training a 1024x1024 model. I used BLIP captioning to generate the filewords and edited each caption individually to reduce potential hallucinations.
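The cropping step can also be scripted. Below is a minimal sketch that center-crops and downscales images to 512x512 with Pillow; it stands in for the manual Birme workflow, the folder names are placeholders, and a fixed center crop is a simplification of picking the crop region by hand for each photo.

```python
# A minimal sketch of the crop/resize step, assuming Pillow instead of the
# Birme web tool used in the actual workflow. Folder names are hypothetical.
from pathlib import Path
from PIL import Image

SRC = Path("raw_photos")    # hypothetical folder of HD source images
DST = Path("dataset_512")   # hypothetical output folder for training images
DST.mkdir(exist_ok=True)

for img_path in sorted(SRC.glob("*.jpg")):
    with Image.open(img_path) as img:
        img = img.convert("RGB")
        # Center-crop to a square, then downscale to 512x512.
        side = min(img.size)
        left = (img.width - side) // 2
        top = (img.height - side) // 2
        img = img.crop((left, top, left + side, top + side))
        img = img.resize((512, 512), Image.LANCZOS)
        img.save(DST / img_path.name, quality=95)
```

In practice, placing each crop by hand in Birme is what keeps the face framed properly; a blind center crop does not guarantee that.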
Training
I used 0.005:100, 0.0025:250, 0.001:500, 0.0005:1000, 0.00025 for my learning rate schedule. I am aiming for 10K training steps in total, with a batch size of 1 and Gradient Accumulation Steps set to 3. The embedding uses 6 vectors per token. I switched to the SD 1.5 EMA-only model for training.
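For readers unfamiliar with the notation, the learning rate string uses the A1111 textual-inversion schedule syntax: comma-separated rate:step pairs, where each rate applies up to the listed step and the final rate (with no step) runs for the rest of training. The small helper below is my own illustration of that mapping, not code from the training UI.

```python
# Illustrative only: interpret an A1111-style "rate:step" schedule string.
# Each rate applies up to (and including) its step; the last entry, which
# has no step, applies for the remainder of training.
def lr_at_step(schedule: str, step: int) -> float:
    for part in schedule.split(","):
        part = part.strip()
        if ":" in part:
            rate, until = part.split(":")
            if step <= int(until):
                return float(rate)
        else:
            return float(part)  # final rate, no end step
    raise ValueError("schedule string was empty")

schedule = "0.005:100, 0.0025:250, 0.001:500, 0.0005:1000, 0.00025"
print(lr_at_step(schedule, 50))    # 0.005
print(lr_at_step(schedule, 300))   # 0.001
print(lr_at_step(schedule, 5000))  # 0.00025
```

The idea behind the stepped decay is to take larger updates early on and progressively smaller ones as the embedding settles.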
Things that I could have done better
I could have upscaled the images before extracting the faces to reduce blur.