Kirazuri Lazuli (Noobai V-Pred)

Verified: SafeTensor

Type: Checkpoint Trained

Published: Jul 3, 2025

Base Model: NoobAI

Training

Steps: 352,600
Epochs: 50

Trigger Words

masterpiece, best quality, very aesthetic

Hash (AutoV2): 8115C68633

motimalu

This checkpoint is a personal project trained locally from NoobAI-XL (NAI-XL) V-Pred 1.0-Version on a 4090 with a small dataset of 14,065 total images.

It focuses on adding knowledge from after the data cutoff of the base model (2024/10/24), including styles, concepts, and characters from anime, video games, and virtual youtubers.

Usage - Important

This model is trained from NoobAI-XL (NAI-XL) V-Pred 1.0-Version. Because the base model is implemented as a v-prediction model (distinct from eps-prediction), it requires specific parameter configurations.

It is recommended to familiarize yourself with the base model and its usage instructions before using this checkpoint.

The intention when training this checkpoint was mostly to extend the base model's knowledge without significantly changing its usage or degrading its existing knowledge.


What follows are my personal settings; your preferred settings for the base model should be mostly transferable.

For samplers, use Euler for generation and Euler Ancestral for upscaling/inpainting.

(⚠️ Other samplers may not work, including some CivitAI defaults such as Karras.)

Previews are generated with a ComfyUI workflow using DynamicThresholdingFull, Upscaling, and FaceDetailer.

DynamicThresholding (CFG-Fix) settings used with a CFG of 10:

  • dynthres_enabled: True

  • dynthres_mimic_scale: 7

  • dynthres_threshold_percentile: 1

  • dynthres_mimic_mode: Half Cosine Down

  • dynthres_mimic_scale_min: 1

  • dynthres_cfg_mode: Half Cosine Down

  • dynthres_cfg_scale_min: 3

  • dynthres_sched_val: 1

  • dynthres_separate_feature_channels: enable

  • dynthres_scaling_startpoint: ZERO

  • dynthres_variability_measure: STD

  • dynthres_interpolate_phi: 1

reForge and Forge should also be usable; this was resolved as of version 1.0 (apologies if you ran into issues with that version).

*To be automatically detected as a v-pred model in Forge/reForge, the ztsnr and v_pred keys were added to the state dict of the model using this script.

Quality modifiers masterpiece, best quality, very aesthetic should be positioned at the end of the prompt.

Artist names can be prefixed with artist: to prevent token bleeding between artist names and concepts.

A1111 prompt scheduling syntax can be used in ComfyUI through the comfyui-prompt-control extension to combine artist styles, e.g. artist:[artist1|artist2|artist3]
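For intuition, A1111-style alternation cycles through the bracketed options once per sampling step; a simplified sketch of that behavior (not the extension's actual implementation):

```python
def alternate(options: list[str], steps: int) -> list[str]:
    """Return which bracketed option ([a|b|c] syntax) is active at each step."""
    return [options[i % len(options)] for i in range(steps)]

# With artist:[artist1|artist2|artist3], step 1 uses artist1,
# step 2 artist2, step 3 artist3, then the cycle repeats,
# blending the three styles over the course of sampling.
```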

In some cases Regional Prompting is used with Attention Couple (example).

Positive prompt:

{{characters}}, {{copywrites}}, {{artists}},
{{tags}},
absurdres, masterpiece, best quality, very aesthetic

Training details

The kohya-ss/sd-scripts training configs used can be found on github.

v2.0

This version now has a much better representation of all the characters, concepts, and styles I hoped to train for this checkpoint.

Single training run on the full dataset, expanded with more recent data:

  • Dataset cutoff: 2025/06/13

  • Training images: 14,065

  • Regularization images: 7,056 (generated from NoobAI-XL (NAI-XL) V-Pred 1.0-Version)

  • Optimizer: Adafactor

  • Training precision: Full-fat fp32

  • Batch size: 4

  • U-Net LR: 6e-6

  • TE LR: 2e-6

  • Epochs: 50

  • Steps: 352,600 (~344 GPU hours at 3.52s/it)
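As a quick sanity check, the quoted GPU time follows from the step count and iteration speed:

```python
steps = 352_600
seconds_per_it = 3.52  # reported iteration time on the 4090
gpu_hours = steps * seconds_per_it / 3600
# 352,600 steps * 3.52 s/it ≈ 1,241,152 s, i.e. roughly 344-345 GPU hours
```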

v1.1

Iterative checkpoint training approach inspired by PixelWave.

This involved training in dataset batches of ~1,200 images across 10 training sessions, before finishing with an 11th session on an aesthetic finetune dataset of 267 images.

  • Dataset cutoff: 2025/05/25

  • Adafactor optimizer

  • Full-fat fp32 training precision

  • Batch size and LR were adjusted multiple times

    • Batch size 4, LR 6e-6 seemed most stable

  • TE trained for the 10th and 11th training sessions at Batch size 4, LR 2e-6

  • Regularization dataset generated from the 10th checkpoint used in the final aesthetic training to preserve the previously learned characters

⏳ Will share more previews for the trained concepts below as I have time to test them.

List of new series/characters trained:

anime:

  • dandadan

  • girumasu

  • gundam gquuuuuux

  • solo leveling

  • witch watch

  • kusuriya no hitorigoto

video-games:

  • elden ring nightreign

  • metaphor: refantazio

  • monster hunter wilds

  • fate/go (lilith)

  • genshin impact (citlali, escoffier, lan-yan, varesa, xilonen, yumemizuki mizuki)

  • honkai star rail (aglaea, castorice, cipher)

  • wuthering waves (carlotta, cartethyia, chisa, ciaccona, zani)

  • zenless zone zero (astra-zao, cipher, ju-fufu, luciana de montefio, pulchra fellini, sweety, trigger, vivian-banshee, yi xuan)

hololive:

  • flow glow (isaki riona, kikirara vivi, koganei niko, mizumiya su, rindo chihaya)

  • hoshimachi suisei (11th, caramel-pain, kireigoto, spectra-of-nova, supernova)

  • himemori luna (7th)

  • houshou marine (ahoy pirates)

  • natsuiro matsuri (jersey maid)

  • nekomata okayu (personya respect)

  • ookami mio (8th)

  • oozora subaru (police)

  • roboco san (oriental)

  • shirakami fubuki (fbkingdom)

  • usada-pekora (10th)

indie v-tubers:

  • amagai ruka

  • dooby

  • nimi nightmate

  • yuuki sakuna

other:

  • myaku-myaku (expo2025)

List of concepts trained:

clothing:

  • ancient greek clothes

  • chronopattern dress

  • jirai kei

  • water dress

concepts:

  • fourth wall

  • star trail

  • flower field

  • mechabare

  • monster girl

  • year of the snake

Some intentionally tagged/curated style triggers, from 103 artist datasets:

  • blending

  • flat color

  • no lineart

  • impasto

  • painterly

  • chiaroscuro

  • impressionism

  • ink wash painting

  • pastel colors

  • pencil art

  • neon palette

  • dark

  • colorful

Traditional media group tags are also trained:

(some not supported by enough data)

  • traditional media

  • acrylic paint \(medium\)

  • airbrush \(medium\)

  • ballpoint_pen \(medium\)

  • brush \(medium\)

  • chalk \(medium\)

  • calligraphy_brush \(medium\)

  • canvas \(medium\)

  • charcoal \(medium\)

  • colored_pencil \(medium\)

  • color ink \(medium\)

  • coupy pencil \(medium\)

  • crayon \(medium\)

  • gouache \(medium\)

  • graphite \(medium\)

  • ink \(medium\)

  • marker \(medium\)

  • millipen \(medium\)

  • nib pen \(medium\)

  • oil painting \(medium\)

  • painting \(medium\)

  • pastel \(medium\)

  • photo \(medium\)

  • tempera \(medium\)

  • watercolor \(medium\)

Recognitions

Thanks to Laxhar Lab for the NoobAI-XL (NAI-XL) V-Pred 1.0-Version base model.

Thanks to narugo1992 and the deepghs team for open-sourcing various training sets, image processing tools, and models.

Thanks to kohya-ss for the sd-scripts trainer.

License

No modifications are made to the base model's NoobAI license, which is as follows:


This model's license inherits from https://huggingface.co/OnomaAIResearch/Illustrious-xl-early-release-v0 fair-ai-public-license-1.0-sd and adds the following terms. Any use of this model and its variants is bound by this license.

I. Usage Restrictions

  • Prohibited use for harmful, malicious, or illegal activities, including but not limited to harassment, threats, and spreading misinformation.

  • Prohibited generation of unethical or offensive content.

  • Prohibited violation of laws and regulations in the user's jurisdiction.

II. Commercial Prohibition

We prohibit any form of commercialization, including but not limited to monetization or commercial use of the model, derivative models, or model-generated products.

III. Open Source Community

To foster a thriving open-source community, users MUST comply with the following requirements:

  • Open source derivative models, merged models, LoRAs, and products based on the above models.

  • Share work details such as synthesis formulas, prompts, and workflows.

  • Follow the fair-ai-public-license to ensure derivative works remain open source.

IV. Disclaimer

Generated models may produce unexpected or harmful outputs. Users must assume all risks and potential consequences of usage.