
AingDiffusion XL

Updated: May 20, 2024
Tags: base model, anime, woman, girls
Format: SafeTensor (verified)
Type: Checkpoint Merge
Published: Mar 19, 2024
Base Model: SDXL 1.0
Hash (AutoV2): 1340531B99

Read the user guide below!

[HuggingFace Backup link]

AingDiffusion XL (read: Ah-eeng Diffusion XL) is a merge of SDXL-based anime models, further fine-tuned on some self-trained datasets. It is capable of generating high-quality anime images.

Guide to generating good images with this model

(This guide is relevant to the latest version of AingDiffusion XL)

  • This model uses Animagine's way of prompting:

    1girl/1boy, character name, from what series, what style, everything else in any order, masterpiece, best quality, very aesthetic, absurdres

    It is also recommended to include the style of the images you want to generate in the prompt (e.g. anime screencap, sketch, pixel art, a well-known artist's name, etc.).

    The standard negative prompt recommended for this model:

    nsfw, lowres, (bad), text, error, fewer, extra, missing, worst quality, jpeg artifacts, low quality, watermark, unfinished, displeasing, oldest, early, chromatic aberration, signature, watermark, artistic error, username, scan, [abstract], realistic, 3d,
  • Use SDXL-recommended resolutions (e.g. 1024x1024 or 832x1216). This is an SDXL model; SD 1.5 resolutions always perform worse.

  • The recommended sampler is "Euler a" with a CFG scale of ~7 and ~25 sampling steps (these settings are used in the code sketch after this list).

  • Set ENSD (eta noise seed delta) to 31337 if you want to replicate the sample images.
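
If you generate outside of a WebUI, the settings above translate fairly directly to the diffusers library. The sketch below is a minimal, unofficial example: the checkpoint file name and the example prompt are placeholders, A1111-style emphasis such as (bad) and [abstract] is passed through literally rather than parsed, and ENSD is an A1111 WebUI setting with no diffusers equivalent.

import torch
from diffusers import StableDiffusionXLPipeline, EulerAncestralDiscreteScheduler

# Load the downloaded checkpoint (placeholder file name).
pipe = StableDiffusionXLPipeline.from_single_file(
    "aingdiffusionXL.safetensors",
    torch_dtype=torch.float16,
).to("cuda")

# "Euler a" in WebUI terms corresponds to the Euler Ancestral scheduler.
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

# Example prompt following the Animagine-style ordering described above.
prompt = (
    "1girl, hatsune miku, vocaloid, anime screencap, standing, smile, "
    "masterpiece, best quality, very aesthetic, absurdres"
)
negative_prompt = (
    "nsfw, lowres, (bad), text, error, fewer, extra, missing, worst quality, "
    "jpeg artifacts, low quality, watermark, unfinished, displeasing, oldest, "
    "early, chromatic aberration, signature, artistic error, username, scan, "
    "[abstract], realistic, 3d"
)

image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    width=832, height=1216,   # an SDXL-recommended resolution
    guidance_scale=7.0,       # ~7 CFG
    num_inference_steps=25,   # ~25 steps
).images[0]
image.save("sample.png")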

Character List

AingDiffusion XL is a derivative of Animagine XL v3. The latest version of AingDiffusion XL is a derivative of Animagine XL 3.1, which was trained on over 4.9k characters. You can see the full list here: https://huggingface.co/spaces/cagliostrolab/animagine-xl-3.1/blob/main/wildcard/characterfull.txt

AingDiffusion XL v1.3 Nerd Facts

AingDiffusion XL v1.3 uses a new dataset tagging method to improve generation quality and further refine the quality tags. This is the best AingDiffusion XL version I’ve ever trained (definitely not an Apple reference).

 

Finetuning Layer Details

- Dataset count: 41,671 images
  - Masterpiece-rated: 5,814
  - Best Quality-rated: 3,155
  - Normal Quality-rated: 5,353
  - Low Quality-rated: 5,790
  - Worst Quality-rated: 21,559

- GPU: 1 x RTX 4090 (36 hours of training)

- Batch size: 12

- Gradient Accumulation: 3

- Equivalent batch size: 36 (see the arithmetic sketch after this block)

- Epoch: 10

- Used epoch: 10
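
For reference, the equivalent batch size above is simply the per-device batch size multiplied by the gradient accumulation steps. The short sketch below works through the arithmetic; the step counts are derived from the listed figures, not taken from the training logs.

# Derived from the figures listed above.
dataset_size = 41_671
batch_size = 12
grad_accum = 3
epochs = 10

equivalent_batch = batch_size * grad_accum           # 12 * 3 = 36
steps_per_epoch = dataset_size // equivalent_batch   # ~1,157 optimizer steps per epoch
total_steps = steps_per_epoch * epochs               # ~11,570 optimizer steps in total

print(equivalent_batch, steps_per_epoch, total_steps)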

 

- LyCORIS Network: LoKr

- Network Dim: 32

- Network Alpha: 16

- Conv Dim: 16

- Conv Alpha: 8

NOTE: The Dim and Alpha settings above are only used to trigger full-matrix mode on LoKr; the listed Dim and Alpha values are not actually applied.

- Optimizer: AdamW (see the PyTorch sketch after this block)

- LR Scheduler: Cosine annealing warmup restarts

- Restarts: 10

- Min. LR: 1e-06

- Unet LR: 1e-05

- Text Encoder LR: 1e-05

- Min SNR gamma: 5

- Other optimizer arguments:

  - Betas: 0.9, 0.99

  - Weight_decay: 0.1
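
As a rough illustration only, the optimizer and scheduler settings above map onto plain PyTorch as sketched below. This is not the actual training script: the real run trained a LyCORIS (LoKr) network with separate UNet and text encoder parameter groups (both at 1e-5 here anyway), Min SNR gamma is a loss weighting rather than an optimizer option, and the "cosine annealing warmup restarts" scheduler used includes a warmup phase that PyTorch's built-in CosineAnnealingWarmRestarts does not.

import torch

# Hypothetical stand-in for the trainable LoKr parameters.
params = torch.nn.Linear(4, 4).parameters()

optimizer = torch.optim.AdamW(
    params,
    lr=1e-5,             # UNet / text encoder LR
    betas=(0.9, 0.99),   # listed betas
    weight_decay=0.1,    # listed weight decay
)

# Closest built-in analogue: cosine annealing with warm restarts,
# decaying toward the listed minimum LR. The step count is derived
# (see the batch-size arithmetic above), with 10 restarts over training.
total_steps = 11_570
scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(
    optimizer,
    T_0=total_steps // 10,   # restart period
    eta_min=1e-6,            # Min. LR
)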

 

This LyCORIS network was then merged into AingBase at full weight.

The quality-tag ranking system is inspired by CagliostroLab’s scoring system.

FAQ

Q: There's nothing special about this model; it doesn't have anything unique.

A: The current goal of AingDiffusion XL is to enhance the experience and capabilities of Animagine. Basically, the goal is to make Animagine on steroids.

Q: The images I generate are bad and not as good as yours.

A: First, make sure you follow the standard prompting guide provided above. Also, I have sometimes seen people using SD 1.5 embeddings with this model. DO NOT USE THEM. THEY DON'T WORK WITH SDXL.

Legals

Because Animagine XL is used in the merge, this model is licensed under the Fair AI Public License 1.0-SD.

For certain reasons, I decided to make the merging recipe clearer; starting from version v0.6, it is included in the "About Version" section.