Serizawa Momoka (Tokyo 7th Sisters) | Guide: Mask, Don't Negative Prompt: Dealing with Undesirable Parts of Training Images
Published: May 17, 2023
Training: 750 steps, 50 epochs
Trigger Words: Serizawa Momoka
Hash: AutoV2 16C6E0EFF5
(It would be greatly appreciated if someone could point me to a clean source of Tokyo 7th Sisters assets. I don't really want to scrape Twitter or reverse-engineer the game API.)
Mask, Don't Negative Prompt: Dealing with Undesirable Parts of Training Images
Introduction
Training images aren't always clean. When training for a given target, unrelated parts of the images, such as text, frames, or watermarks, can be learned by the model along with the target. Several strategies can be applied to this problem, each with shortcomings:
Cropping: Cut out the undesired parts. Alters the source composition and is not applicable when the unwanted content overlaps the subject.
Inpainting: Preprocess the data, replacing undesirable parts with generated pixels. Requires a good inpainting model and prompt, and can introduce artifacts of its own.
Negative Prompts: Train as is and add negative prompts when generating new images. Only works if the model learns to associate the undesirable parts with prompt tokens that can then be negated.
Another simple strategy is effective:
Masking: Multiply the per-pixel training loss by a predefined mask, so that masked-out regions contribute nothing to the gradient.
This method is not new, but the most popular LoRA training script has yet to gain built-in support for it.
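To make the idea concrete, here is a minimal PyTorch sketch of a masked diffusion loss. It is illustrative only, not code from any particular training script; it assumes model_pred and target are the usual latent-space tensors from a denoising training step, and that mask has already been downsampled to latent resolution:

```python
import torch
import torch.nn.functional as F

def masked_diffusion_loss(model_pred: torch.Tensor,
                          target: torch.Tensor,
                          mask: torch.Tensor) -> torch.Tensor:
    """MSE diffusion loss in which masked-out regions contribute nothing.

    model_pred, target: (B, C, H, W) latent-space tensors from the
        denoising step.
    mask: (B, 1, H, W), values in [0, 1]; 0 marks regions to ignore
        (text, frames, watermarks), 1 marks regions to learn from.
    """
    loss = F.mse_loss(model_pred, target, reduction="none")
    loss = loss * mask  # mask broadcasts across the channel dimension
    # Normalize by the number of unmasked elements so the loss magnitude
    # does not depend on how much of each image is masked out.
    denom = mask.sum() * model_pred.shape[1]
    return loss.sum() / denom.clamp(min=1.0)
```

Because masked pixels produce zero loss, they also produce zero gradient, so the model never receives a learning signal from them; unlike cropping, the composition of the image the model sees is unchanged.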
Experiment
60 images of Serizawa Momoka from Tokyo 7th Sisters, containing card text and decorations, were used.
A masked LoRA and a plain unmasked LoRA were trained.
For the masked version, a mask was drawn over the source images using image editing software. Note that since the VAE downscales by a factor of 8, what the model actually sees is the mask at 1/8 resolution, i.e., an 8x8-pixelated version of what was drawn. Tags that describe the masked-away parts were also removed from the captions.
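For reference, a mask drawn at pixel resolution can be brought down to latent resolution like this (a hypothetical helper, assuming the mask is a (B, 1, H, W) float tensor with height and width divisible by 8):

```python
import torch.nn.functional as F

def mask_to_latent(mask_pixel):
    """Downsample a (B, 1, H, W) pixel-space mask to latent resolution.

    The VAE compresses by 8x per side, so each latent cell covers an
    8x8 block of image pixels; 'area' interpolation averages each block,
    yielding fractional weights where the mask boundary cuts a block.
    """
    return F.interpolate(mask_pixel, scale_factor=1 / 8, mode="area")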
Results
(see preview images)
Future work
Automatic generation of masks with segmentation models; one possible starting point is sketched below.
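This is an untested sketch, assuming a salient-object segmentation package such as rembg (with its only_mask flag) is good enough to separate the character from the card decorations; text overlapping the subject would still need hand correction:

```python
from PIL import Image
from rembg import remove  # pip install rembg

def auto_mask(path: str) -> Image.Image:
    """Return a grayscale foreground mask for one training image.

    only_mask=True asks rembg for the segmentation mask itself rather
    than the background-removed cutout; the default u2net model is a
    salient-object segmenter, so results should be reviewed by hand.
    """
    return remove(Image.open(path), only_mask=True)
```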