Seele-NoobAI-SDXL

Updated: Aug 26, 2025 | Published: Aug 25, 2025
Type: Checkpoint Trained | Base Model: NoobAI | Format: SafeTensor
Hash (AutoV2): A1134C4AAB
Tags: style
Creator: waw1w1

Getting Started with Seele-NoobAI-SDXL 2.0: A User's Guide 

1. Overview

"Fine-tuned from the noobai NOOBAI XL-VPred 1.0 model, Seele-NoobAI-SDXL 2.0 marks a significant evolution. The key distinction from v1.0 and its derivatives is that 2.0 has moved beyond being just a stylization model.

This new version natively supports resolutions up to 2048x2048. In fact, all preview images on this page were generated directly at this high resolution, with zero inpainting or upscaling. If you'd like to see the workflow, you can simply download the images and load them directly into ComfyUI to extract the metadata.

Unlike the singular stylistic focus of v1.0, version 2.0 is not confined to a specific aesthetic. Much like the original noobai, it offers an excellent response to prompts, especially to artist tags. Through specialized dataset curation, I've ensured that the stylistic output from these tags remains highly consistent with the original noobai. This means you can use your artist tag strings from noobai and expect nearly identical results.

For those interested, I'll be sharing my training methodology at the end of this post. Now, allow me to formally introduce Seele-NoobAI-SDXL 2.0: a model built for generating stunning, high-resolution images right out of the box!

2. Prompts

Add these tags at the end of your prompt:

masterpiece, best quality, newest, absurdres, highres, high resolution,

Recommended Negative Prompt:

worst quality, normal quality, old, early, low quality, lowres, text, word, text watermark,

3. Parameter Settings

CFG: 4.5-5.5

In a departure from our recommendation for noobai, it's advised to keep Rescale CFG disabled by default for v2.0. However, if you encounter color artifacts during generation, you can start experimenting with a value of 0.5. (Important: Only adjust this value downwards from 0.5; do not increase it.)

Sampler: euler / euler a

Additionally, the euler_ancestral_cfg_pp and euler_cfg_pp samplers are both excellent choices and highly recommended.

Scheduler: sgm_uniform

For the scheduler, sgm_uniform is the top recommendation. Good alternatives include simple and normal. While we generally advise against using karras, you're welcome to experiment with it to see if it suits your specific needs.

Steps: 28-40

Resolution:

768x1344, 832x1216, 896x1152, 1024x1024, 1152x896, 1216x832, 1344x768, 1024x1536, 1536x1024, 1536x1536, 1536x2048, 2048x1536, 2048x2048, 1920x1080, 1080x1920

In summary, you can confidently generate images at any resolution where the total area falls between 1024x1024 and 2048x2048. Theoretically, the model should support resolutions as low as 512x512, though I haven't tested this myself. You are encouraged to run your own tests to see what's possible.
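To make the resolution rule concrete, here is a tiny Python helper (my own illustration, not part of the model): it accepts any width/height whose total area falls between 1024x1024 and 2048x2048, with a divisible-by-8 check added on my assumption that dimensions should align with the SDXL VAE's 8x downscaling.

```python
def is_supported_resolution(width: int, height: int) -> bool:
    """Total pixel area between 1024*1024 and 2048*2048, with both dimensions
    divisible by 8 (my assumption, to match the SDXL VAE's 8x downscaling)."""
    area = width * height
    return width % 8 == 0 and height % 8 == 0 and 1024 * 1024 <= area <= 2048 * 2048

for w, h in [(1024, 1536), (1920, 1080), (2048, 2048), (4096, 4096)]:
    print(f"{w}x{h}: {'ok' if is_supported_resolution(w, h) else 'outside the tested range'}")
```

Note that a few of the listed aspect buckets (e.g., 768x1344) sit just under the 1024x1024 area, so treat the lower bound as approximate rather than strict.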

Finally, to wrap up the parameter settings, I want to highlight an advanced option for those looking to push their results further: the euler_smea_dy sampler.

This is an exceptional sampler, specifically optimized for native high-resolution generation and achieving crisp, clean lines. However, there is a crucial caveat when using a two-pass workflow (i.e., generating at a low resolution like 1024x1024 and then upscaling). For the upscaling pass, you must set the scheduler to exponential. Failing to do so will cause the background of your image to become overly smooth and lose detail.

That said, if you're generating directly at high resolutions (e.g., 2048x1536), I consider this sampler to be nearly flawless. Here's the link to the author's GitHub for those interested: Koishi-Star/Euler-Smea-Dyn-Sampler (a sampler based on Euler, aimed at generating better pictures).
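If you work outside ComfyUI, here is a minimal diffusers sketch of the recommended v2.0 settings. The checkpoint filename is a placeholder, and the "trailing" timestep spacing is my assumption for the closest diffusers analogue of sgm_uniform; since the base model is v-prediction, the scheduler is configured accordingly.

```python
import torch
from diffusers import StableDiffusionXLPipeline, EulerDiscreteScheduler

# Placeholder path: point this at the downloaded checkpoint file.
pipe = StableDiffusionXLPipeline.from_single_file(
    "Seele-NoobAI-SDXL-v2.0.safetensors", torch_dtype=torch.float16
).to("cuda")

# The base is a v-prediction model; zero-terminal-SNR rescaling is the usual
# companion setting. "trailing" spacing approximates sgm_uniform (my assumption).
pipe.scheduler = EulerDiscreteScheduler.from_config(
    pipe.scheduler.config,
    prediction_type="v_prediction",
    rescale_betas_zero_snr=True,
    timestep_spacing="trailing",
)

image = pipe(
    prompt="<your prompt>, masterpiece, best quality, newest, absurdres, highres, high resolution",
    negative_prompt="worst quality, normal quality, old, early, low quality, lowres, text, word, text watermark",
    width=1024, height=1536,    # any area between roughly 1024^2 and 2048^2
    num_inference_steps=32,     # recommended range: 28-40
    guidance_scale=5.0,         # recommended CFG range: 4.5-5.5
    # guidance_rescale=0.5,     # enable only if you see color artifacts (see above)
).images[0]
image.save("seele_v2_sample.png")
```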

4. Epilogue

I truly hope you enjoy creating with Seele-NoobAI-SDXL 2.0. If you generate any images you're proud of, I would be thrilled to see them posted in the Gallery.

Seeing what the community creates is the single biggest source of encouragement and motivation for me to continue developing and improving this project. Thank you for your support!

5. Training Approach

The high-resolution generation in v2.0 was very much an experimental endeavor for me. Although this capability has been implemented in models like Illustrious XL 1.0 and its successors, it was completely uncharted territory in my own training process.

Along the way, I ran into a cascade of issues, including: corrupted outputs at low resolutions, persistent artifacts at high resolutions, catastrophic failures where the model couldn't produce any coherent images at all, and major compatibility conflicts with the vast ecosystem of community LoRAs trained on 1024x1024.

First and foremost, I was up against a significant financial hurdle. As any model trainer knows, the ideal approach is to include as many concepts as possible, which means training on a massive number of images. The go-to source for the community is often Danbooru, but training on the full Danbooru dataset at 2048 resolution was financially out of reach for me. My budget could only accommodate a curated set of a few tens of thousands of images for 2048px training.

So, the central question became: how could I maximize the model's effectiveness with such a limited dataset? My strategy was to filter the danbooru2024 dataset by time and ID, ensuring the selection spanned the full spectrum of styles from across Danbooru's history. Fortunately, noobai is an exceptionally robust base model. My tests (conducted at 1024px) confirmed that this filtering method had a negligible impact on noobai's original generalization capabilities, successfully preserving its powerful versatility and prompt responsiveness!
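As an illustration of that filtering idea (a sketch of the approach, not the exact script I used), the snippet below samples danbooru2024 metadata evenly across contiguous post-ID ranges, so the curated subset spans the site's full history. It assumes the metadata is already loaded into a pandas DataFrame with an `id` column.

```python
import numpy as np
import pandas as pd

def sample_across_history(meta: pd.DataFrame, total: int, buckets: int = 20,
                          seed: int = 42) -> pd.DataFrame:
    """Draw an equal share of posts from each contiguous post-ID range, so early,
    middle, and recent eras of Danbooru (and their styles) are all represented."""
    meta = meta.sort_values("id").reset_index(drop=True)
    per_bucket = total // buckets
    parts = []
    for idx in np.array_split(meta.index, buckets):  # contiguous, ID-ordered chunks
        chunk = meta.loc[idx]
        parts.append(chunk.sample(n=min(per_bucket, len(chunk)), random_state=seed))
    return pd.concat(parts, ignore_index=True)

# e.g. curate a few tens of thousands of images for the high-resolution stage:
# subset = sample_across_history(metadata, total=40_000)
```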

Next, I tackled the second major challenge: the 'how' of high-resolution training. My initial approach of using only high-resolution images proved catastrophic for low-resolution generation. The model's ability to handle lower resolutions was completely degraded, producing outputs riddled with noise and mangled anatomy. My next experiment involved mixing in low-resolution images to preserve that capability. At this point, a friend raised a critical question: was it better to use the same set of images at both high and low resolutions, or a mixed dataset where different images exist at different resolutions?

This time, I got lucky. A single experiment yielded a clear conclusion: a mixed dataset with varied images at different resolutions significantly boosted the model's generalization. My hypothesis is that training on the same small set of images at different resolutions is akin to over-fitting on that limited set, which naturally degrades the model's versatility. Finally, I adopted the multi-stage training strategy mentioned in the Illustrious XL paper: I first trained the model on a dataset that included 1536px images, then moved on to a final pass with a dataset containing 2048px images.
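To make that conclusion concrete, here is a sketch of the winning dataset design (hypothetical paths and resolutions, for illustration only): each image is assigned to exactly one resolution bucket, so no small subset is repeated across scales.

```python
import random

def mixed_resolution_split(image_paths: list[str],
                           resolutions: tuple[int, ...] = (1024, 1536, 2048),
                           seed: int = 42) -> dict[int, list[str]]:
    """Assign DIFFERENT images to each resolution bucket: every image is trained
    at exactly one resolution, avoiding the over-fitting-like effect of repeating
    the same limited set at every scale."""
    rng = random.Random(seed)
    shuffled = list(image_paths)
    rng.shuffle(shuffled)
    buckets: dict[int, list[str]] = {res: [] for res in resolutions}
    for i, path in enumerate(shuffled):
        buckets[resolutions[i % len(resolutions)]].append(path)
    return buckets

# The losing design, for contrast: the same images repeated at every resolution,
# which behaved like over-fitting on that limited set.
# buckets = {res: list(image_paths) for res in (1024, 1536, 2048)}
```

For the multi-stage schedule, the 1536px buckets would be used in the first pass and the 2048px buckets in the final pass, per the Illustrious XL strategy described above.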

6. Others

Datasets: deepghs/danbooru2024 · Datasets at Hugging Face

Resources: Illustrious: an Open Advanced Illustration Model

Special thanks to @Anlia for their invaluable guidance on the training methodology.

My sincere thanks to @Des_Moines_prpr for their assistance with the model testing phase.

Thanks to @Vecthral for the guidance provided during the training.

Seele-NoobAI-SDXL v1.x: Original Description

1. Overview

Based on NoobAI v-prediction training.

Clean character lines, rich background detail, and excellent light and shadow: these are the strengths of my model!

Of course, I also trained it on a large NSFW dataset, so if you want to generate NSFW content, I don't think Seele-NoobAI-SDXL will disappoint you.

2. Quality Enhancement Tags

Add these tags at the end of your prompt:

masterpiece, best quality, newest, absurdres, highres,

3. Recommended Negative Prompt

worst quality, normal quality, old, early, low quality, lowres, signature, username, logo, bad hands, mutated hands, mammal, anthro, furry, ambiguous form, feral, semi-anthro, watermark, text, petals, light particle,

4. Suggestions for Use

Sampler: euler / euler a

CFG: 4.5-5.5

Scheduler: ddim_uniform / karras

Steps: 30-40

These are only suggestions; feel free to experiment with other settings in actual use. As with NoobAI, it is highly recommended that you enable Rescale CFG.

Finally, I highly recommend upscaling ("zooming in") at least once at 1.5x so that the background comes out sharp and clear!
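A minimal diffusers sketch of these v1.x suggestions, with placeholder paths and prompts: karras sigmas on the scheduler, Rescale CFG enabled via guidance_rescale (0.7 is my assumed value; the card does not give one), and a second img2img pass at 1.5x the size for a sharper background.

```python
import torch
from diffusers import (AutoPipelineForImage2Image, EulerDiscreteScheduler,
                       StableDiffusionXLPipeline)

pipe = StableDiffusionXLPipeline.from_single_file(
    "Seele-NoobAI-SDXL-v1.2.safetensors", torch_dtype=torch.float16  # placeholder path
).to("cuda")
pipe.scheduler = EulerDiscreteScheduler.from_config(
    pipe.scheduler.config,
    prediction_type="v_prediction",   # the v1.x base is also v-prediction
    rescale_betas_zero_snr=True,
    use_karras_sigmas=True,           # karras, as suggested for v1.x
)

prompt = "<your prompt>, masterpiece, best quality, newest, absurdres, highres,"
negative = "worst quality, normal quality, old, early, low quality, lowres, bad hands,"  # shortened; use the full list above

base = pipe(
    prompt=prompt, negative_prompt=negative,
    width=832, height=1216, num_inference_steps=35,
    guidance_scale=5.0,
    guidance_rescale=0.7,  # Rescale CFG on, per the advice above
).images[0]

# "Zoom in at least once at 1.5x": a second img2img pass at 1.5x the base size.
img2img = AutoPipelineForImage2Image.from_pipe(pipe)
final = img2img(
    prompt=prompt, negative_prompt=negative,
    image=base.resize((1248, 1824)),  # 832x1216 scaled by 1.5
    strength=0.5,                     # my assumption; lower keeps more of the base image
    guidance_scale=5.0, guidance_rescale=0.7,
).images[0]
final.save("seele_v1_sample.png")
```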

5. Epilogue

I trained this model with the goal of cleaner character lines and richer backgrounds. Near the end of training, I added a new light-and-shadow training set on a whim, so the Seele-NoobAI-SDXL you see now also handles lighting quite well.

I hope you like this model, and if you post more pictures in the gallery, I'm sure I'll be even more motivated!

Changelog 

v2.0

Enhanced the model's high-resolution generation capabilities.

v1.2

My apologies: v1.1 was accidentally uploaded as a heavily overfitted version. v1.2 fixes this, and I have adjusted the training set so that backgrounds are no longer cluttered.

Note: this version adds new negative prompt tags!

In the meantime, I'm experimenting with a training method that will surprise a lot of people. A model trained with it may not appear in the very next update, but one is coming in the near future!

v1.1

Retraining on the 1.0 training set has eliminated catastrophic forgetting for some poses. I am very satisfied with v1.1, and future updates will build on it.