Animagine XL V3

Updated: Feb 3, 2024
Tags: base model, anime
Verified: SafeTensor
Type: Checkpoint Trained
Uploaded: Jan 10, 2024
Base Model: SDXL 1.0
Training: 132,590 steps, 10 epochs
Hash (AutoV2): 1449E5B0B9

ANIMAGINE XL 3.0

Huggingface link: https://huggingface.co/cagliostrolab/animagine-xl-3.0

Gradio Demo: https://huggingface.co/spaces/Linaqruf/animagine-xl

Official Blog Release: https://cagliostrolab.net/posts/animagine-xl-v3-release

Support us at: https://ko-fi.com/linaqruf

Overview

Animagine XL 3.0 is the latest version of this sophisticated open-source anime text-to-image model, building upon the capabilities of its predecessor, Animagine XL 2.0. Based on Stable Diffusion XL, this iteration delivers superior image generation with notable improvements in hand anatomy, efficient tag ordering, and expanded knowledge of anime concepts. Unlike the previous iteration, we focused on making the model learn concepts rather than aesthetics.

Animagine XL 3.0 was trained on 2x A100 GPUs with 80 GB of memory for 21 days, or over 500 GPU hours. For further information, please visit our official blog or Hugging Face repository.

Model Details

  • Developed by: Cagliostro Research Lab

  • Model type: Diffusion-based text-to-image generative model

  • Model Description: Animagine XL 3.0 is engineered to generate high-quality anime images from textual prompts. It features enhanced hand anatomy, better concept understanding, and prompt interpretation, making it the most advanced model in its series.

  • License: Fair AI Public License 1.0-SD

  • Finetuned from model: Animagine XL 2.0

Usage Guidelines

Tag Ordering

Prompting is a bit different in this iteration. For optimal results, follow the structured prompt template below, since the model was trained with this tag ordering:

1girl/1boy, character name, from what series, everything else in any order.
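As an illustration, the template above can be assembled with a small helper. This is just a sketch (`build_prompt` and the example tags are hypothetical, not part of the model's tooling); the only thing it encodes is the recommended tag ordering.

```python
def build_prompt(subject, character, series, *extra_tags):
    """Assemble a prompt in the recommended order:
    subject (1girl/1boy), character name, series, then everything else."""
    return ", ".join([subject, character, series, *extra_tags])

# Example following the template; character/series are placeholders.
prompt = build_prompt("1girl", "character name", "from what series",
                      "solo", "smile", "outdoors")
```

Any ordering of the trailing tags is fine; it is the leading subject/character/series triplet that matters.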

Special Tags

Like the previous iteration, this model was trained with special tags to steer the result toward quality, rating, and the creation date of the source posts. The model can still do the job without these special tags, but using them is recommended to make the model easier to handle.

Quality Modifiers

Rating Modifiers

Year Modifier

These tags help to steer the result toward modern or vintage anime art styles, ranging from newest to oldest.

Recommended settings

To guide the model towards generating high-aesthetic images, use negative prompts like:

nsfw, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name

For higher quality outcomes, prepend prompts with:

masterpiece, best quality

However, be careful when using masterpiece, best quality, because many high-scoring images in the dataset are NSFW. It's better to add nsfw, rating: sensitive to the negative prompt and rating: general to the positive prompt. It's also recommended to use a lower classifier-free guidance (CFG) scale of around 5-7, fewer than 30 sampling steps, and the Euler Ancestral (Euler a) sampler.
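Gathered in one place, the recommended prompts and settings might look like the sketch below. The variable names and the `settings` keys are illustrative (not an official API); map them onto your frontend's parameters, e.g. CFG Scale / Steps / Sampler in a web UI, or `guidance_scale` / `num_inference_steps` plus an Euler Ancestral scheduler in diffusers.

```python
# Recommended settings from the section above, collected for reference.
positive_prompt = ("masterpiece, best quality, rating: general, "
                   "1girl, character name, from what series")
negative_prompt = ("nsfw, rating: sensitive, lowres, bad anatomy, bad hands, "
                   "text, error, missing fingers, extra digit, fewer digits, "
                   "cropped, worst quality, low quality, normal quality, "
                   "jpeg artifacts, signature, watermark, username, blurry, "
                   "artist name")
settings = {
    "cfg_scale": 6,        # lower CFG, within the suggested 5-7 range
    "steps": 28,           # sampling steps below 30
    "sampler": "Euler a",  # Euler Ancestral
}
```

Note how the rating tags are split: rating: general stays in the positive prompt while nsfw and rating: sensitive go into the negative prompt.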

License

Animagine XL 3.0 now uses the Fair AI Public License 1.0-SD, compatible with Stable Diffusion models. Key points:

  1. Modification Sharing: If you modify Animagine XL 3.0, you must share both your changes and the original license.

  2. Source Code Accessibility: If your modified version is network-accessible, provide a way (like a download link) for others to get the source code. This applies to derived models too.

  3. Distribution Terms: Any distribution must be under this license or another with similar rules.

  4. Compliance: Non-compliance must be fixed within 30 days to avoid license termination, emphasizing transparency and adherence to open-source values.

The choice of this license aims to keep Animagine XL 3.0 open and modifiable, in line with the spirit of the open-source community. It protects contributors and users and encourages a collaborative, ethical ecosystem. This ensures the model not only benefits from communal input but also respects the freedoms of open-source development.