
7th anime XL B

Updated: Apr 7, 2024
Type: Checkpoint Merge
Published: Apr 7, 2024
Base Model: SDXL 1.0
File format: SafeTensor (verified)
Hash (AutoV2): 4EBD04BE55
syaimu
  • Built through a highly complex process of model merging and additional training.

  • Adjusted toward recent artistic styles.

  • Reduced anatomical inconsistencies.

  • Improved robustness to further training and LoRA application.


<lyco:Important Notice:1.37>

Default CFG Scale: 7

Default Sampler: DPM++ 2M Karras

Default Steps: 20

Negative prompt : (worst quality:1.6),(low quality:1.4),(normal quality:1.2),lowres,jpeg artifacts,long neck,long body,bad anatomy,bad hands,text,error,missing fingers,extra digit,fewer digits,cropped,signature,watermark,username,artist name,
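Put together in A1111-style generation parameters, the recommended defaults read as follows (the positive prompt line is purely illustrative; the negative prompt and settings are the ones above):

```
1girl, masterpiece, best quality
Negative prompt: (worst quality:1.6),(low quality:1.4),(normal quality:1.2),lowres,jpeg artifacts,long neck,long body,bad anatomy,bad hands,text,error,missing fingers,extra digit,fewer digits,cropped,signature,watermark,username,artist name
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7
```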

<Merge: The Recipe :0.7>

  1. Merge Animagine 3.0 and 3.1 using a base alpha of 0.49, and merge layers from IN00 to OUT11 at 0.82.

  2. Train a model on sd_xl_base_1.0_0.9vae.safetensors with ~4.6 million images on 4x A100 at a learning rate of 1e-5 for ~2 epochs. Then, for compatibility with the Animagine models' CLIP modules, further train it on a dataset of 164 AI-generated images to refine the CLIP and UNet, using the Prodigy optimizer with an initial D of 1e-6, a D coefficient of 0.9, and a batch size of 4 for 1,500 steps.

  3. Merge the models from steps 1 and 2 using two sets of per-block coefficients:

    • Set 1: 0.2, 0.6, 0.8, 0.9, 0.0, 0.8, 0.4, 1.0, 0.7, 0.9, 0.3, 0.1, 0.1, 0.5, 0.6, 0.0, 1.0, 0.6, 0.5, 0.5

    • Set 2: 0.9, 0.8, 0.6, 0.3, 0.9, 0.1, 0.4, 0.7, 0.4, 0.6, 0.2, 0.3, 0.0, 0.8, 0.3, 0.7, 0.7, 0.8, 0.2, 0.3.

  4. Merge the two models produced by Set 1 and Set 2 using a base alpha of 0.79, with layers from IN00 to OUT11 at 0.73, to create Set 3.

  5. Train a LoRA based on Set 3 with a curated dataset of 12,018 AI-generated images, Lion optimizer, batch size of 4, gradient accumulation steps of 16, and learning rate 3e-5 for 4 epochs. This model is then blended into Set 3 itself at a strength of 0.2, resulting in the creation of 7th anime B.

  6. Train another LoRA based on 7th anime B with the same dataset as described in step 2, but with the Lion optimizer, a batch size of 4, lr_scheduler_num_cycles at 5, and a learning rate of 1e-5 for 80 epochs. This model is then blended into 7th anime B itself at a strength of 0.366, finally resulting in the creation of 7th anime A.
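The per-block merges in steps 1, 3, and 4 boil down to a weighted interpolation over the two checkpoints' state dicts, with one alpha per UNet block and a base alpha for everything else. The sketch below is illustrative only: the key prefixes, the `block_of` heuristic, and the plain-list "tensors" stand in for real torch tensors and the explicit IN00..OUT11 block mapping that merge tools use.

```python
import re
from typing import Dict, List

Tensor = List[float]  # stand-in for a real torch tensor

def block_of(key: str, n_blocks: int) -> int:
    """Map a UNet parameter name to a block index (illustrative heuristic;
    real merge tools map names to IN00..MID..OUT11 blocks explicitly)."""
    m = re.search(r"blocks\.(\d+)", key)
    return min(int(m.group(1)), n_blocks - 1) if m else 0

def merge(a: Dict[str, Tensor], b: Dict[str, Tensor],
          base_alpha: float, block_alphas: List[float]) -> Dict[str, Tensor]:
    """out = (1 - alpha) * a + alpha * b, with alpha chosen per UNet block;
    all other tensors (text encoder, VAE) use base_alpha."""
    out: Dict[str, Tensor] = {}
    for key, ta in a.items():
        tb = b[key]
        if key.startswith("model.diffusion_model."):  # UNet weight
            alpha = block_alphas[block_of(key, len(block_alphas))]
        else:
            alpha = base_alpha
        out[key] = [(1 - alpha) * x + alpha * y for x, y in zip(ta, tb)]
    return out

# Step 1's numbers: base alpha 0.49, all blocks at 0.82.
a = {"model.diffusion_model.input_blocks.0.weight": [0.0, 0.0],
     "conditioner.embedders.0.weight": [0.0, 0.0]}
b = {"model.diffusion_model.input_blocks.0.weight": [1.0, 1.0],
     "conditioner.embedders.0.weight": [1.0, 1.0]}
merged = merge(a, b, base_alpha=0.49, block_alphas=[0.82] * 20)
# UNet weights land at 0.82, text-encoder weights at 0.49.
```

Steps 3 and 4 are the same operation run twice with the two coefficient sets and then once more to combine the results.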
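Steps 5 and 6 "blend" a trained LoRA into the checkpoint itself, i.e. bake the low-rank delta into the base weights at a given strength. A minimal sketch, assuming the common kohya-style alpha/rank scaling convention (the rank, alpha, and matrix shapes below are illustrative, not the ones actually used):

```python
from typing import List

Matrix = List[List[float]]

def matmul(a: Matrix, b: Matrix) -> Matrix:
    """Plain-Python matrix product (stand-in for torch matmul)."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def bake_lora(weight: Matrix, lora_up: Matrix, lora_down: Matrix,
              strength: float, alpha: float, rank: int) -> Matrix:
    """W' = W + strength * (alpha / rank) * (up @ down)."""
    delta = matmul(lora_up, lora_down)
    scale = strength * (alpha / rank)
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(weight, delta)]

# Step 5's strength of 0.2, baked into a toy 2x2 weight at rank 1.
W = [[1.0, 0.0], [0.0, 1.0]]
up = [[1.0], [0.0]]    # shape (out_dim, rank)
down = [[0.0, 1.0]]    # shape (rank, in_dim)
W2 = bake_lora(W, up, down, strength=0.2, alpha=1.0, rank=1)
# → [[1.0, 0.2], [0.0, 1.0]]
```

Step 6 repeats the same bake-in against 7th anime B at strength 0.366.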