
⚙️ [UPDATED v2] How I Generate My Images (And Why I Use These Settings)


How I Generate My Images (Updated 17-08-2025)

This article is for anyone curious about how I do my generations, why I use the settings I do, and how you can tweak them to work for your setup. Whether you're using the same LoRA or just browsing around, here’s a full breakdown of my generation method — including my specs, model choices, settings, and extra tools I use to survive this journey on a not-so-beefy rig. 😅

I actually made this article mainly for myself — to have a solid reference — but I also wanted to give others a better grasp of how I generate my images, rather than just dropping simple “generation info” under each post.


🖥️ My PC Specs (So You Understand Why I Do What I Do)

  • CPU: AMD Ryzen 5 5600X

  • GPU: GTX 1650 4GB VRAM

  • RAM: 16 GB

  • OS: Windows 10 Pro x64

I use Stable Diffusion WebUI Forge instead of A1111 — it handles VRAM better on my setup.

👉 Not much room to go crazy with 4K txt2img or big batch renders. My settings are built for stability and efficiency, with just enough VRAM to pull off quality gens using LoRAs and hires. fix.


🧪 Checkpoint & LoRA Stack

💡 Sometimes I remove the trigger word if the outfit turns messy or adds unnecessary details.


⚙️ My Generation Settings (Old Version)

This section is here for comparison — these were my go-to defaults before 17-08-2025.

  • Sampling method: Euler a

  • Sampling steps: 25–30

  • CFG scale: 5–7

  • Base Resolution:

    • 512 × 768 – faster, stable for low VRAM

    • 1024 × 1024 – only when I’m away / background render

  • Hires. fix: ✅ Enabled

    • Denoising: 0.4

    • Hires steps: 20

    • Upscale by:

      • 2× if base is 512×768 → Final: 1024×1536

      • 1.5× if base is 1024×1024 → Final: 1536×1536

    • Upscaler:

      • Latent (WAI checkpoint)

      • R-ESRGAN 4x+ Anime6B (Nova / extra sharpness)

  • Seed: -1 (random unless I want variations)
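I do everything through the WebUI itself, but for reference, here is roughly how those old defaults map onto the Forge/A1111 txt2img API (only available if you launch with --api). This is just a sketch; the prompt, negative prompt, and LoRA name are placeholders, not my real ones.

```python
# Sketch: the OLD defaults as a Forge/A1111 txt2img API payload.
# Assumes a local WebUI started with --api, listening on 127.0.0.1:7860.
import base64
import requests

payload = {
    "prompt": "1girl, <lora:example_outfit:1.0>",  # placeholder prompt / LoRA name
    "negative_prompt": "lowres, bad anatomy",      # placeholder
    "sampler_name": "Euler a",
    "steps": 28,                  # I float between 25 and 30
    "cfg_scale": 6,               # I float between 5 and 7
    "width": 512,
    "height": 768,                # fast option; 1024x1024 only for background renders
    "seed": -1,                   # random unless I want variations
    "enable_hr": True,            # Hires. fix
    "hr_scale": 2,                # 512x768 -> 1024x1536
    "hr_upscaler": "Latent",      # or "R-ESRGAN 4x+ Anime6B" for extra sharpness
    "hr_second_pass_steps": 20,
    "denoising_strength": 0.4,
}

r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload, timeout=600)
r.raise_for_status()
with open("output.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))
```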


UPDATE 10-08-2025 (Old Version)

🧠 Core Settings

  • Sampling method: DPM++ 2M (was Euler a before)

  • Schedule type: Karras

  • Sampling steps: 25–30

  • CFG Scale: 7 (still tweak between 5–7)

  • Seed: -1 (random each time — save if I want variations)

Hires. fix ✅ Enabled

  • Upscale by: 1.5

  • Hires steps: 15~20

  • Denoising strength: 0.5 (up from 0.4)

  • Hires sampling method: Same sampler (DPM++ 2M)

  • Hires schedule type: Same scheduler (Karras)

Upscaler Settings by Hardware

💻 My PC (GTX 1650)

  • Upscaler: Latent (nearest-exact)

  • Why: R-ESRGAN 4x+ Anime6B causes black images at the end on my hardware. Back when I ran the Nova checkpoint it still worked — but not anymore.

  • This Latent setup is tuned to mimic the R-ESRGAN Anime look while avoiding the crash/black-output issue.

🖥 Sister’s PC (High-end RTX)

  • Upscaler: R-ESRGAN 4x+ Anime6B

  • Handles heavy upscalers without problems, producing extra sharp and detailed results.

You can check my posts to see the results of using this new setting.
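If you were scripting this instead of clicking through the UI, the hardware split above is just a different hr_upscaler string per machine. A tiny sketch; the GPU-name check is purely illustrative (the WebUI doesn't switch this for you, and the RTX name below is made up):

```python
# Sketch: picking the Hires. fix upscaler depending on which machine runs the job.
def pick_hr_upscaler(gpu_name: str) -> str:
    if "1650" in gpu_name:              # my 4 GB card: ESRGAN ends in black images
        return "Latent (nearest-exact)"
    return "R-ESRGAN 4x+ Anime6B"       # beefier cards handle the heavy upscaler fine

print(pick_hr_upscaler("GTX 1650"))        # -> Latent (nearest-exact)
print(pick_hr_upscaler("RTX (high-end)"))  # hypothetical name -> R-ESRGAN 4x+ Anime6B
```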


Note: The following is my latest setup as of 17-08-2025. The sections above are my older defaults, kept for comparison.


UPDATE 17-08-2025 – My Current Generation Setup

🧠 Core Settings

  • Sampling method: Euler a (was DPM++ 2M before)

  • Schedule type: Automatic (was Karras before)

  • Sampling steps: 30

  • CFG Scale: 7 (still tweak between 5–7)

  • Seed: -1 (random each time — save if I want variations)


Base Resolution

  • 512 × 768 → My “fast” option for quick results (~6 minutes per image on my PC).

  • 768 × 1024 → My “high-quality” option when I can step away and let it run (~15 minutes per image on my PC).

💡 I still follow the same rule — only use higher res when I’m not in a rush.


Hires. fix ✅ Enabled

  • Upscale by: 2 (up from 1.5; final sizes sketched after this list)

  • Hires steps: 15

  • Denoising strength: 0.4 (down from 0.5)

  • Hires sampling method: Same sampler (Euler a)

  • Hires schedule type: Same scheduler (Automatic)
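The upscale factor simply multiplies the base resolution, so the two base options above land at these final sizes:

```python
# Quick arithmetic check of the final output size with the 2x Hires. fix.
def final_size(width: int, height: int, scale: float) -> tuple[int, int]:
    return int(width * scale), int(height * scale)

print(final_size(512, 768, 2))    # (1024, 1536) - the "fast" option
print(final_size(768, 1024, 2))   # (1536, 2048) - the "high-quality" option
```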


Upscaler Settings by Hardware

💻 My PC (GTX 1650)

  • Upscaler: R-ESRGAN 4x+ Anime6B

  • Why: I'm going back to my old setting with a small tweak. I just find this upscaler works best for my taste.

🖥 Sister’s PC (High-end RTX)

  • I'm not using her PC anymore. She's being petty and stingy.
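For reference, here is the whole 17-08 setup in the same payload form as the old-settings sketch earlier (placeholder prompt again; it drops straight into the same requests.post call):

```python
# Sketch: the 17-08-2025 defaults as a Forge/A1111 txt2img API payload.
# Prompt and LoRA name are placeholders; schedule type "Automatic" is the UI default.
current_payload = {
    "prompt": "1girl, <lora:example_outfit:0.8>",
    "sampler_name": "Euler a",
    "steps": 30,
    "cfg_scale": 7,               # still tweaked between 5 and 7
    "width": 768,
    "height": 1024,               # or 512x768 when I need speed
    "seed": -1,
    "enable_hr": True,
    "hr_scale": 2,
    "hr_upscaler": "R-ESRGAN 4x+ Anime6B",
    "hr_second_pass_steps": 15,
    "denoising_strength": 0.4,
}
```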


Other Notes

  • Batch count / size: 1 × 1 unless testing variations.

  • Resolution choice depends on urgency — lower res for speed, higher res for detail.

  • I avoid fixed seeds unless doing controlled variations.

  • If you want a landscape image, just swap the width and height (quick sketch below).
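Two of those notes in sketch form: a controlled variation is just a pinned seed with one setting changed, and landscape really is the same numbers swapped (the seed below is made up):

```python
# Trimmed-down payload just to show the idea.
base = {"width": 768, "height": 1024, "seed": -1, "cfg_scale": 7}

# Controlled variation: pin the seed from a render you liked, change exactly one thing.
variation = {**base, "seed": 1234567890, "cfg_scale": 5}   # hypothetical saved seed

# Landscape: identical settings with width and height swapped.
landscape = {**base, "width": base["height"], "height": base["width"]}
print(landscape["width"], landscape["height"])   # 1024 768
```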


🔌 Extensions I Recommend

  • 📓 Infinite Image Browser – Browse outputs and reapply prompt combos.


✍️ Prompt Tips (LoRA-Specific)

  • Too many unnecessary details (like frills)? Lower the LoRA weight to 0.7–0.9 or remove the trigger (syntax sketched after this list).

  • CFG 5–7 gives a better detail balance. Higher values can force weird results.

  • Keyword stacking is fine, but don’t bloat prompts — quality > quantity.

  • You can let LoRA overpaint for surprise outfit variations (retry often).
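In prompt terms, lowering the LoRA weight just means changing the number inside the LoRA tag. The LoRA name and trigger word below are placeholders, not the ones this article is about:

```python
# Sketch of the LoRA-weight tip in prompt form.
full_weight = "1girl, outfit_trigger, <lora:example_outfit:1.0>"

# Too many frills / extra details? Drop the weight a bit...
lighter = "1girl, outfit_trigger, <lora:example_outfit:0.8>"

# ...or keep the weight and remove the trigger word instead.
no_trigger = "1girl, <lora:example_outfit:0.8>"
```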


💡 Why My Recent Results Look Better

I secretly used my sister’s high-end PC for ~10 days while she was away — RTX GPU, more RAM, better cooling.
You might’ve noticed:

  • Higher resolution outputs

  • Cleaner lighting/shading

  • More experimental shots

But… she’s back now 😅 so I’m back to my trusty 1650 setup.


📌 TL;DR

  • GTX 1650 + SD Forge → optimized for low VRAM survival.

  • 512x768 for speed, 768x1024 for detail (~17 min per image).

  • Hires. fix always on, CFG 5–7; Latent (nearest-exact) was my old workaround upscaler on my PC.

  • R-ESRGAN 4x+ Anime6B is the way to go.

  • Keep LoRA weights light for cleaner results.
