
⚙️ [Updated 26-02-2026] How I Generate My Images (And Why I Use These Settings)


This article is for anyone curious about how I do my generations, why I use the settings I do, and how you can tweak them to work for your setup. Whether you're using the same LoRA or just browsing around, here’s a full breakdown of my generation method — including my specs, model choices, settings, and extra tools I use to survive this journey on a not-so-beefy rig. 😅

I actually made this article mainly for myself — to have a solid reference — but I also wanted to give others a better grasp of how I generate my images, rather than just dropping simple “generation info” under each post.


🖥️ My PC Specs (So You Understand Why I Do What I Do)

  • CPU: AMD Ryzen 5 5600X

  • GPU: GTX 1650 4GB VRAM

  • RAM: 16 GB

  • OS: Windows 10 Pro x64

I use Stable Diffusion WebUI Forge instead of A1111 — it handles VRAM better on my setup.

👉 Not much room to go crazy with 4K txt2img or big batch renders. My settings are built for stability and efficiency, with just enough VRAM to pull off quality gens using LoRAs and hires. fix.


🧪 Checkpoint & LoRA Stack

  • Checkpoint:

    • novaAnimeXL_ilV100.safetensors

    • Why I prefer it:

      1. It gives a cleaner anime output that matches my taste.

      2. My target is “as close as possible to an anime screenshot”.

      3. I follow the checkpoint’s recommended approach (Euler a + Automatic scheduler + 768×1344 base).

  • VAE: Automatic

  • Clip Skip: 2

  • LoRAs:

    • lora:susamix010-pony:0.6 (anime coloring) — Maybe it’s just me, but I feel like colors look better with this.

    • lora:AddMicroDetails_Illustrious:0.3 (surface detail boost) — Adds subtle sharpness to backgrounds and small elements. I'm not using it anymore: it can add nice random background details, but it sometimes "sabotages" my character details, and the newer version of novaAnimeXL gives better backgrounds on its own.

💡 Sometimes I remove the extra LoRA's trigger word if the outfit turns messy or adds unnecessary details.
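A side note on why 768×1344 is a sensible base for an SDXL-family checkpoint like this (the "XL" in novaAnimeXL): SDXL models are trained around a ~1-megapixel budget (1024×1024), and 768×1344 stays just under it. Here's a quick sketch for checking any custom size; `describe_resolution` is a hypothetical helper, not part of any UI:

```python
# Sanity-check a custom base resolution against SDXL's ~1 MP training budget.
from math import gcd

def describe_resolution(width: int, height: int) -> str:
    d = gcd(width, height)
    mp = width * height / 1_048_576  # relative to 1024*1024 pixels
    return f"{width}x{height} = {width//d}:{height//d}, {mp:.2f}x SDXL's 1024x1024 pixel budget"

print(describe_resolution(768, 1344))   # the article's base size
# 768x1344 = 4:7, 0.98x SDXL's 1024x1024 pixel budget
print(describe_resolution(1344, 768))   # landscape swap, same pixel count
```

The landscape swap keeps the exact same pixel count, which is why swapping width and height (as mentioned later) needs no other setting changes.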


⚙️ My Generation Settings (Old Version)

This section is here for comparison — these were my go-to defaults before 17-08-2025.

  • Sampling method: Euler a

  • Sampling steps: 25–30

  • CFG scale: 5–7

  • Base Resolution:

    • 512 × 768 – faster, stable for low VRAM

    • 1024 × 1024 – only when I’m away / background render

  • Hires. fix: ✅ Enabled

    • Denoising: 0.4

    • Hires steps: 20

    • Upscale by:

      • 2× if base is 512×768 → Final: 1024×1536

      • 1.5× if base is 1024×1024 → Final: 1536×1536

    • Upscaler:

      • Latent (WAI checkpoint)

      • R-ESRGAN 4x+ Anime6B (Nova / extra sharpness)

  • Seed: -1 (random unless I want variations)
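The final resolutions quoted above are just the base size times the upscale factor; here's a tiny sketch double-checking them (`hires_size` is a hypothetical helper, not a UI function):

```python
# Hires. fix output = base resolution multiplied by the upscale factor.
def hires_size(width: int, height: int, scale: float) -> tuple[int, int]:
    return int(width * scale), int(height * scale)

assert hires_size(512, 768, 2.0) == (1024, 1536)    # 2x on the fast base
assert hires_size(1024, 1024, 1.5) == (1536, 1536)  # 1.5x on the big base
```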


UPDATE 10-08-2025 (Old Version)

🧠 Core Settings

  • Sampling method: DPM++ 2M (was Euler a before)

  • Schedule type: Karras

  • Sampling steps: 25–30

  • CFG Scale: 7 (still tweak between 5–7)

  • Seed: -1 (random each time — save if I want variations)

Hires. fix ✅ Enabled

  • Upscale by: 1.5

  • Hires steps: 15~20

  • Denoising strength: 0.5 (up from 0.4)

  • Hires sampling method: Same sampler (DPM++ 2M)

  • Hires schedule type: Same scheduler (Karras)

Upscaler Settings by Hardware

💻 My PC (GTX 1650)

  • Upscaler: Latent (nearest-exact)

  • Why: R-ESRGAN 4x+ Anime6B causes black images at the end on my hardware. Back when I ran Nova checkpoint, it still worked — but not anymore.

  • This Latent setup is tuned to mimic R-ESRGAN Anime6B's look while avoiding the crash/black-output issue.

🖥 Sister’s PC (High-end RTX)

  • Upscaler: R-ESRGAN 4x+ Anime6B

  • Handles heavy upscalers without problems, producing extra sharp and detailed results.

You can check my posts to see results using this new setting.


UPDATE 17-08-2025 (Old Version)

🧠 Core Settings

  • Sampling method: Euler a (was DPM++ 2M before)

  • Schedule type: Automatic (was Karras before)

  • Sampling steps: 30

  • CFG Scale: 7 (still tweak between 5–7)

  • Seed: -1 (random each time — save if I want variations)


Base Resolution

  • 512 × 768 → My “fast” option for quick results. (~6 minutes per image on my PC).

  • 768 × 1024 → My “high-quality” option when I can step away and let it run (~15 minutes per image on my PC).

💡 I still follow the same rule — only use higher res when I’m not in a rush.


Hires. fix ✅ Enabled

  • Upscale by: 2 (up from 1.5)

  • Hires steps: 15

  • Denoising strength: 0.4 (down from 0.5)

  • Hires sampling method: Same sampler (Euler a)

  • Hires schedule type: Same scheduler (Automatic)


Upscaler Settings by Hardware

💻 My PC (GTX 1650)

  • Upscaler: R-ESRGAN 4x+ Anime6B

  • Why: I'm coming back to my old setting with a little tweak. I just find this upscaler works best for my taste.


Note: the following is my latest setup as of 26-02-2026; the sections above are my older defaults, kept for comparison.


UPDATE 26-02-2026 – My Current Generation Setup

🛠️ Core Settings

  • Sampling method: Euler a

  • Schedule type: Automatic

  • Sampling steps: 20

  • CFG Scale: 5 (still tweak between 4–7)

  • Seed: -1 (random each time — I only reuse a seed when I want another variation)


🖥️ Base Resolution

  • 768 × 1344 → My only go-to now (~10–13 minutes per image on my PC).
    If you run the local UI and generate on the same PC, expect closer to ~15 minutes on hardware like mine. But if the UI runs on your PC while you drive it from a browser on another device (your phone, a laptop, or another PC), you can hit that ~10-minute mark.
    That happens because the browser eats a decent chunk of RAM during generation, so browsing from another device effectively frees extra RAM for the local app.
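One way to set up that remote-browser trick: A1111-style launchers, including Forge, accept a `--listen` flag that exposes the UI to other devices on your LAN (7860 is the default port). A minimal webui-user.bat sketch; check your own install's supported flags:

```
:: webui-user.bat (Forge / A1111-style launcher on Windows)
:: --listen makes the UI reachable from other devices on your LAN,
:: so the host PC doesn't lose RAM to a local browser tab.
set COMMANDLINE_ARGS=--listen

:: then open http://<this-PC's-local-IP>:7860 from the other device
```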


🚀 Hires. fix ✅ Enabled

Goal: I want Hires. fix results, but I keep it as light as possible to reduce render time while staying decent.

  • Upscale by: 1.5 (down from 2)

  • Upscaler: R-ESRGAN 4x+ Anime6B

  • Hires steps: 15

  • Denoising strength: 0.3 (down from 0.4)

  • Hires sampling method: Same sampler (Euler a)

  • Hires schedule type: Same scheduler (Automatic)

Why this setup:

  • 1.5× upscale gives a noticeable quality bump without the render-time hit of a full 2×.

  • Denoise 0.3 keeps the image from getting “repainted” too much.

  • 15 Hires steps is my sweet spot for speed vs quality.

  • I'm coming back to R-ESRGAN with a little tweak. I just find this upscaler works best for my taste.
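For intuition on the "quality bump vs. time" tradeoff: the hires pass works on the upscaled canvas, so its pixel count (a rough proxy for cost) grows with the square of the scale factor. A quick sketch of the math:

```python
# Final Hires. fix size = base resolution * upscale factor; pixel count
# (a rough proxy for hires-pass cost) scales with the factor squared.
base_w, base_h = 768, 1344  # my current base resolution

for scale in (1.5, 2.0):
    final_w, final_h = int(base_w * scale), int(base_h * scale)
    print(f"{scale}x -> {final_w}x{final_h} ({scale ** 2:.2f}x the pixels)")
# 1.5x -> 1152x2016 (2.25x the pixels)
# 2.0x -> 1536x2688 (4.00x the pixels)
```

So dropping from 2× to 1.5× cuts the hires pass to roughly half the pixel work while still landing above 1080p-height output.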


🧠 When I Change Settings (My Real Tweak Rules)

A) Landscape images

  • Swap resolution: 1344 × 768

  • Steps: usually 25–30 (wide scenes tend to need more steps for composition)

B) Fighting scenes / busy action

  • Steps: 25–30 (more motion + more chaos = more chance to break at low steps)

C) If the prompt feels too strong (overcooked / weird shapes)

  • Lower CFG first: 5 → 4.5 → 4

  • If still weird: reduce prompt clutter (remove less important tags)

D) If the prompt won’t listen (I need to “force” it)

  • Increase CFG gradually (don’t jump straight to 7):
    5 → 5.5 → 6 → 6.5 → 7 (max)

  • If CFG 7 still fails, the prompt is usually conflicting, so I rewrite/simplify the prompt instead of forcing harder.
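The tweak rules above boil down to two knobs, and they can be sketched as code. `pick_steps` and `cfg_ramp` are hypothetical helpers that just encode the rules, not anything from the UI:

```python
# Rules A/B: complex compositions (landscape, action) get more steps.
def pick_steps(scene: str) -> int:
    # wide scenes and busy action break composition at low step counts
    return 25 if scene in ("landscape", "action") else 20

# Rule D: raise CFG in half-point increments instead of jumping to 7.
def cfg_ramp(start: float = 5.0, ceiling: float = 7.0, step: float = 0.5) -> list[float]:
    values, cfg = [], start
    while cfg <= ceiling:
        values.append(cfg)
        cfg += step
    return values

print(pick_steps("portrait"))  # 20
print(cfg_ramp())              # [5.0, 5.5, 6.0, 6.5, 7.0]
```

If the ramp tops out at 7 and the prompt still misbehaves, that's the signal to rewrite the prompt rather than push CFG further.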


✅ My “No Extra Tools” Workflow (On Purpose)

I’m not using ControlNet, ADetailer, or any extra tools.
I want people to use my LoRA and still get good results with only:

  • a clean prompt,

  • my base settings,

  • and Hires. fix.

My actual workflow:

  1. Generate at 768×1344, Steps 20, CFG 5

  2. Keep Hires. fix ON (Anime6B, 1.5×, Hires steps 15, Denoise 0.3)

  3. If it’s close but not perfect:

    • adjust CFG slightly up/down

    • bump steps only when needed (especially landscape/action)

    • simplify the prompt if it gets messy

    • try an additional LoRA for extra details


📓 Other Notes

  • Batch count / size: 1 × 1. I never raise batch size above 1 since my PC is a potato. Sometimes I bump batch count to 3 so I get a few variations to choose from.

  • I avoid fixed seeds unless doing controlled variations.

  • If you want a landscape image, just swap the width and height.


🔌 Extensions I Recommend

  • Infinite Image Browser – Browse outputs and reapply prompt combos.


✍️ Prompt Tips (LoRA-Specific)

  • If your image suddenly gets extra frills / random accessories / “too many add-ons”
    → Lower LoRA weight gradually (example: 1.0 → 0.9 → 0.8 → 0.7).
    → If it’s still stubborn, remove the LoRA trigger word (some triggers are strong and can overpaint).
    → You can also use selective negatives to suppress unwanted outfit parts (ex: frills, ribbon, lace).
    Note: don’t nuke your character tags in negative — focus on the specific detail you want to reduce.

  • CFG is basically your “how hard I force the prompt” knob
    → My safe default is CFG 5.
    → If the prompt won’t listen, raise slowly (5 → 5.5 → 6 → 6.5 → 7 max).
    → Too high CFG can cause weird anatomy, harsh edges, or “overcooked” textures, so I treat 7 as the ceiling.

  • Steps vs CFG (quick rule so you don’t waste time)
    → If the scene is simple, keep steps around 20 and adjust CFG first.
    → If it’s landscape or fighting/action, increase steps (25–30) before you crank CFG too high.
    Reason: complex composition often breaks from low steps more than from low CFG.

  • Keyword stacking is fine, but stacking random tags is how you get muddy results
    → Prioritize the “core” tags: subject + action + setting + lighting.
    → Add details only if you’d actually notice them missing.
    → If your prompt is long but results are inconsistent, remove 20–30% of the least important tags.

  • Letting the LoRA “overpaint” can be a feature (when you want outfit variations)
    → If you want surprise variations, keep LoRA weight a bit higher and reroll seeds.
    → If you want strict accuracy, keep LoRA weight moderate and use clearer outfit tags.

  • When the outfit is wrong, don’t instantly blame the LoRA
    → Often it’s tag conflict. Example: you’re describing a “simple dress” but you also have tags that imply layered clothing.
    → Try removing conflicting clothing tags first, then reroll.

  • If your character looks right but the outfit keeps drifting
    → Use stronger outfit tags in the prompt (only the important ones).
    → Use negative prompt only for the specific unwanted parts (not a huge “anti-outfit list”).
    → Lower denoise (if you’re using Hires. fix) so the upscale pass doesn’t repaint the clothes too much.
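The weight-lowering tip above is a one-number edit in A1111-style prompt syntax, where the LoRA tag carries its own weight. The LoRA name and trigger word below are placeholders, not real ones from my stack:

```
1girl, <lora:my-character:1.0>, trigger_word, school uniform
        (frills everywhere? lower the weight first)
1girl, <lora:my-character:0.8>, trigger_word, school uniform
        (still stubborn? drop the trigger word too)
1girl, <lora:my-character:0.8>, school uniform
```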


📌 Copy-Paste “Generation Info”

Checkpoint: novaAnimeXL (newer version)
Sampler: Euler a
Scheduler: Automatic
Steps: 20 (25–30 for landscape/action scenes)
CFG: 5 (raise gradually up to 7 if forcing)
Resolution: 768×1344 (landscape: 1344×768)
Hires. fix: R-ESRGAN 4x+ Anime6B, 1.5×, Hires steps 15, Denoise 0.3, same sampler/scheduler
