Electrum Cinnamon

A new year and a new model.
This is a quick rundown of the tests I've done on my merge. Perhaps it will help to see some of the bias that's baked into the model.
What is model bias?
Bias is what a model does when you give it almost no information in the prompt. An overfitted model will have stronger bias toward a particular gender, pose, or style. The more overfitted it is, the less flexible it becomes when generating an image from a prompt. We've all had those models that only want to output slim girls in cowboy shots.
Bias isn't always a bad thing, but it's a good idea to understand what the model's bias is so you're not trying to force a square peg into a round hole.
Electrum Cinnamon Bias:
This is my Electrum Cinnamon merge generating an image from only cowboy_shot, white background, with no negative. I left the negative empty so that the model has the freedom to show what its bias is. This image also shows it likes to zoom in and crop the top of the head off. Not great, but workable with prompting.
Positive: cowboy_shot, white background
Negative: N/A

CFG
CFG (that's the Classifier-Free Guidance scale) basically controls how tightly the model sticks to your prompt.
Low CFG (like 3–5): The model gets more creative and chill—images can look nicer and more natural, but it might wander off and ignore parts of what you wrote.
Higher CFG (7–10+): It listens harder to every word in your prompt, so you get exactly what you asked for... but push it too far and things can get overcooked—artifacts, weird contrasts, that plastic look.
For Electrum Cinnamon, I like keeping it between 5–7. Gives you good prompt control without turning everything into a crispy mess.
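Under the hood, CFG is just one combine step applied at every denoising iteration: start from the model's prediction with an empty prompt and push it toward the prediction with your prompt, scaled by the CFG value. A toy numpy sketch (made-up numbers, not a real model):

```python
import numpy as np

def cfg_combine(uncond_pred, cond_pred, cfg_scale):
    """Classifier-free guidance: extrapolate from the unconditional
    prediction toward the conditional one by cfg_scale."""
    return uncond_pred + cfg_scale * (cond_pred - uncond_pred)

# Toy noise predictions for a single denoising step.
uncond = np.array([0.10, 0.20, 0.30])  # prediction with an empty prompt
cond = np.array([0.30, 0.10, 0.50])    # prediction with your prompt

low = cfg_combine(uncond, cond, 3.0)    # gentle pull toward the prompt
high = cfg_combine(uncond, cond, 10.0)  # hard pull; past the data, hence "overcooked"
# At cfg_scale=1.0 you just get the conditional prediction back unchanged.
```

The "crispy" look at high CFG comes from that extrapolation overshooting: the guided prediction lands further from anything the model actually saw in training.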
example of CFG test:

Sampler Test
This grid is a quick rundown of a sampler comparison I ran. Same model, same prompt, same seed, same settings; only the sampler changes. The goal here isn’t to crown a “best” sampler, but to make the differences visible and understand what each one is doing.
example of sampler test:

Each image here is generated from the same starting point. Any changes you’re seeing—pose shifts, anatomy emphasis, lighting differences, texture treatment—are coming from the sampler alone.
A few patterns become obvious:
Some samplers subtly re-interpret the pose, even when nothing in the prompt suggests movement.
Others lock anatomy down hard but sacrifice micro-detail.
A few push contrast and saturation, making the image feel sharper or more stylized.
Others smooth everything out and lean toward a painted or cinematic look.
This is why a workflow that works perfectly with one sampler can fall apart when you swap it out without adjusting anything else.
Which sampler do I use?
I tend to fall back on DPM++ 2M (Karras) and Euler Ancestral (Karras) most of the time. Not because they’re objectively better, but because they usually get me where I want to go with the least friction.
DPM++ 2M feels predictable enough that I can trust it for a base image. It generally holds pose and anatomy without doing anything surprising, which makes it easier to evaluate what the model itself is doing rather than fighting the sampler.
Euler Ancestral is something I reach for when I want a bit more texture or contrast, or when the image feels too flat coming out of a cleaner pass. Sometimes it helps, sometimes it overshoots. I do not treat it as a rule, just another option that tends to work often enough.
This test mostly reinforced that sampler choice has a bigger impact than it gets credit for. I am not trying to optimize everything. I just want to understand the tendencies well enough that I am not surprised by them later.
Gender and Age handling test
This next test looks at how the model responds to gender and age tokens when everything else is kept deliberately simple.
The prompt here is basically doing the bare minimum: cowboy_shot, white_background plus a single subject tag like 1girl, 1boy, 1man, or 1woman, with and without the mature modifier. No negative prompt, no style steering, no corrections. The idea is to give the model enough rope to show its habits.

What this is testing
This is less about realism and more about defaults.
When you strip the prompt down like this, you start seeing how the model internally interprets:
What “girl” versus “woman” means
Where it draws the line between boy and man
How strongly it associates age with body shape, face softness, or clothing
Whether “mature” actually changes age, or just adds secondary traits
In other words, this is the model answering questions you did not explicitly ask.
What stands out?
A few things become obvious when looking across the row:
“Girl” and “woman” are not just age changes. They come with different body proportions and presentation. The fact that there is very little difference between girl and woman in the example shows that the model is biased toward younger girls. Adding mature did change the body proportions and looks, though, so the model isn't that overfitted.
“Boy” and “man” tend to collapse toward a similar adult baseline unless pushed harder.
Adding “mature” often affects facial structure and posture more than it affects perceived age.
Clothing choices and silhouettes are doing a lot of work here, even though nothing about fashion was specified.
None of this is necessarily wrong. It is just the model showing you what it thinks these words usually imply.
Why this matters
If you do not understand how a model handles age and gender on its own, it is easy to accidentally steer into something you did not intend. You might think you are asking for a simple demographic change, but the model may also be changing pose, proportions, or tone at the same time.
This test is mainly about awareness. Once you know where the model’s defaults are, it becomes easier to decide when you need to be explicit and when you can let it do its thing.
Gender and age handling with a negative prompt
This is the same setup as the previous test, but with a negative prompt added:
worst quality, low quality, displeasing, text, watermark, bad anatomy, artist name, signature, hearts, deformed hands, missing finger, shiny skin, child, children, loli, monochrome, (bad framing, poor framing)
The base prompt is still minimal:
cowboy_shot, white_background plus the subject token.
The only change is the negative list, which is doing basic cleanup and guardrails rather than style steering.
I am not trying to fix anything here. The point is to see how much the negative prompt alone shifts the model’s defaults.
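Mechanically, a negative prompt isn't a separate cleanup filter: in most UIs it replaces the empty unconditional prompt inside the CFG step, so every denoising step is actively steered away from it. A toy numpy sketch (made-up prediction vectors, not a real model) of why it shifts defaults rather than just removing errors:

```python
import numpy as np

def guided_pred(neg_pred, pos_pred, cfg_scale):
    """With a negative prompt, the 'unconditional' branch of CFG becomes
    the negative-prompt prediction, so guidance pushes away from it."""
    return neg_pred + cfg_scale * (pos_pred - neg_pred)

pos = np.array([0.4, 0.1])
empty = np.array([0.2, 0.2])  # prediction for an empty negative
neg = np.array([0.0, 0.3])    # prediction for an actual negative prompt

baseline = guided_pred(empty, pos, 7.0)
steered = guided_pred(neg, pos, 7.0)
# Same positive prompt, same CFG, different result: the negative changes
# the *direction* of guidance at every step, not just the error rate.
```

That's the whole mechanism behind the "confidence" effect described above: the model is being pushed along a different axis at every step, so its defaults tighten up.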

What changes when a negative is introduced
The biggest difference is not quality so much as confidence.
With the negative in place, the model seems less willing to wander. Poses tighten up, framing becomes more consistent, and there is less accidental weirdness around hands and proportions. That part is expected.
What is more interesting is how the age and gender interpretation shifts:
The “girl” and “woman” outputs become more stylized and idealized.
The distinction between girl and woman is still weak, with only a slight difference in breast and hip size.
Male outputs stay relatively stable, with fewer visual changes overall. Mostly body shape and facial hair to indicate age.
The mature modifiers affect polish and presentation more than age itself.
The negative prompt is not just removing errors. It is quietly reinforcing what the model thinks a “correct” output should look like.
An unintended side effect
Even with things like younger ages being explicitly blocked, the “girl” output still comes out looking adult. The model does not really sit in the middle or get uncertain. It just slides upward and settles on something safer.
So instead of ambiguity, you get a narrower interpretation. Everything looks cleaner and more controlled, but also more samey.
Negative prompts are usually treated like harmless cleanup, but they are not neutral.
Even in a simple setup like this, they push the model toward what it thinks is acceptable and away from anything uncertain.
It is not a problem. It is just something worth being aware of.
Style Test

This test is just here to give a rough idea of what the model can do with different styles.
Everything stays the same across the row except for the style tag. Same character, same framing, same scene. I am not trying to push accuracy or match a specific reference. The goal is simply to show the range the model is comfortable working in.
Each style token is treated at face value. No extra cleanup, no style-specific negatives, no effort to force consistency. What you are seeing is how the model naturally interprets these styles when you ask for them directly.
Some styles come through subtly, others more aggressively, but in most cases the character and composition stay readable. That is really the point of the test. It shows how far you can move stylistically without the image falling apart or turning into something unrecognizable.
This is not meant to be exhaustive or definitive. It is just a visual shorthand for the kind of stylistic flexibility the model has out of the box.
If you're interested, you can try the model here
Example Images:





