Serizawa Momoka (Tokyo 7th Sisters) | Guide: Mask, Don't Negative Prompt: Dealing with Undesirable Parts of Training Images

Type: LoRA (SafeTensor)
Published: May 17, 2023
Base Model: SD 1.5
Training: 750 steps, 50 epochs
Trigger Words: Serizawa Momoka
Hash (AutoV2): 16C6E0EFF5
Creator: gustproof

(It would be greatly appreciated if someone could point me to a clean source of Tokyo 7th Sisters assets. I don't really want to scrape Twitter or reverse the game API.)

Mask, Don't Negative Prompt: Dealing with Undesirable Parts of Training Images

Introduction

Training images aren't always clean. Sometimes, when training for a given target, unrelated parts of the images, such as text, frames, or watermarks, will also be learned by the model. There are several strategies for dealing with this problem, each with shortcomings:

  1. Cropping: Leave out undesired parts. Modifies source composition, not applicable in some cases.

  2. Inpainting: Preprocess the data and replace undesirable parts with generated pixels. Requires a good inpainting prompt / model.

  3. Negative Prompts: Train as is and add negative prompts when generating new images. Requires the model to know how the undesirable parts map to the prompt.

Another simple strategy is effective:

  1. Masking: Multiply the per-pixel loss by a predefined mask, so that masked-out regions contribute nothing to training.

This method is not new, but the most popular LoRA training script does not yet have built-in support for it.
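As a rough, minimal PyTorch sketch of the idea (not the code of any particular training script; the function name is invented, and it assumes the mask has already been downscaled to latent resolution):

    import torch.nn.functional as F

    def masked_diffusion_loss(noise_pred, noise_target, mask):
        # noise_pred, noise_target: (B, C, H/8, W/8) latent-space tensors
        # mask: (B, 1, H/8, W/8) weights in [0, 1]; 0 = ignore, 1 = learn
        per_pixel = F.mse_loss(noise_pred, noise_target, reduction="none")
        per_pixel = per_pixel * mask  # masked regions contribute no loss or gradient
        # Normalize by the unmasked area so heavily masked images are not down-weighted.
        return per_pixel.sum() / (mask.sum() * per_pixel.shape[1] + 1e-8)

Dividing by the unmasked area, rather than taking a plain mean, keeps images with large masked regions from contributing weaker gradients.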

Experiment

60 images of Serizawa Momoka from Tokyo 7th Sisters, containing card text and decorations, were used.

A masked LoRA and a plain, unmasked LoRA were trained.

For the masked version, masks were drawn over the source images using image-editing software. Note that since the VAE has an 8x scaling factor, what the model actually sees is the mask pixelated into 8x8 blocks. Tags describing the masked-away parts were also removed.
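To make the 8x point concrete, here is a hedged sketch (the file name is invented) of reducing a hand-drawn mask to the latent grid; mask details thinner than about 8 pixels effectively disappear:

    import torch.nn.functional as F
    from PIL import Image
    from torchvision.transforms.functional import to_tensor

    # White = keep / learn, black = ignore.
    mask = to_tensor(Image.open("momoka_001_mask.png").convert("L"))  # (1, H, W), values in [0, 1]
    latent_mask = F.interpolate(
        mask.unsqueeze(0),    # (1, 1, H, W)
        scale_factor=1 / 8,   # the VAE's spatial compression factor
        mode="area",          # averages each 8x8 block of mask pixels
    )
    # latent_mask: (1, 1, H/8, W/8), ready to multiply into the per-pixel loss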

Results

(see preview images)

Future work

  • Auto-generation of masks with segmentation models (a rough sketch follows below)
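As one possible starting point, here is a sketch using the Hugging Face transformers image-segmentation pipeline. The checkpoint, the "person" label filter, and the file names are assumptions, and a COCO-trained model may handle anime-style art poorly; an anime-specific segmentation model would likely do better:

    import numpy as np
    from PIL import Image
    from transformers import pipeline

    segmenter = pipeline("image-segmentation", model="facebook/detr-resnet-50-panoptic")

    image = Image.open("momoka_001.png").convert("RGB")
    keep = np.zeros((image.height, image.width), dtype=bool)

    for segment in segmenter(image):
        if segment["label"] == "person":  # keep the character, drop everything else
            keep |= np.array(segment["mask"]) > 127

    # White = train on this region, black = masked out (card text, frames, decorations).
    Image.fromarray(keep.astype(np.uint8) * 255).save("momoka_001_mask.png")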