Prompt with Seed - An Image Development Step by Step

TL;DR

How I learned to develop the main subject of an image to a final, high-quality result using a fixed (custom) seed, one image step at a time. A worked example with images in 11 steps.

Introduction

I may not be a beginner any more, as I mention in my profile, but in some areas I still am. Take "seed", for example. Until a few days ago I had only some kind of "latent" knowledge of what a seed is; then I watched a couple of YT videos and read some articles about this crucial part of the AI image generation process.

It’s possible to generate impressive images without ever looking at their seed: you use random seeds along the way to your final images. That approach is more trial and error; you generate quite a bunch of images and pick out those which fit your initial idea best. You go through several generation rounds, refining and completing your prompt, until your images are "perfect". I did exactly that over the last months and I had a lot of fun doing it.

Now, having learned a bit more, my knowledge about seeds has climbed at least part of the stairway from "latent" to "stable".

Let me mention that two sources (discussions on Reddit) in particular made things clearer to me.

The Seed

The seed of an image is the initial value of the (pseudo-)random noise from which the generation of the image starts, guided by the prompt. The same seed, the same prompt and exactly the same generation parameters (without the slightest deviation) will generate exactly the same image if you start the generation again. At least in theory: in practice there seems to be some randomness somewhere in the process that leads to tiny deviations in the resulting image. No identical twin is exactly identical.

Changes in the prompt and/or in the generation parameters will have random side effects in the resulting image, so using a fixed seed doesn’t give you 100 percent control over the result. We will see that in a moment. Some of those effects, I think, are called "bleeding".
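If you generate locally instead of online, the idea is easy to try out with the diffusers library. A minimal sketch (the generic SDXL base model ID here is just a stand-in for whatever checkpoint you actually use):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Any SDXL checkpoint works; the generic base model is a stand-in here.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

prompt = "closeup portrait of a young woman, simple dark background"

# Same seed + same prompt + same parameters: re-seed a fresh generator
# for each run and the two images should be (nearly) identical.
img_a = pipe(prompt, num_inference_steps=25,
             generator=torch.Generator("cuda").manual_seed(1234)).images[0]
img_b = pipe(prompt, num_inference_steps=25,
             generator=torch.Generator("cuda").manual_seed(1234)).images[0]

img_a.save("run_a.jpg")
img_b.save("run_b.jpg")  # compare: tiny pixel-level deviations may remain
```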

Image Generation with a Seed

Say I want to generate a portrait of a brunette girl with short hair. I start with a simple prompt to get an initial image from which to continue building up and refining. I use CyberRealistic XL (an SDXL 1.0 checkpoint), which is easy to use with the online generator and offers some basic face types that are fun to start with (at least for me). The sampler is Euler a, which delivers somewhat "polished" images.

1. See what we get first

establishing shot of a young woman, diagonal view, looking at the viewer, simple dark background
2025-09-05T14.39.01_1.jpg

OK, that’s a start... The seed is 1550152502, which I fix as a custom seed in the "Advanced" settings section of the online generator. Without further descriptions, the "establishing shot" could be a portrait. In the next step I define a sleeveless dress with a color, but I have an idea what the model (the AI) will do. If I describe a dress, the model will give me a dress...
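For reference, the same setup can be reproduced locally with diffusers. This is only a sketch: "cyberrealisticXL.safetensors" is a placeholder file name for a downloaded CyberRealistic XL checkpoint, and "Euler a" in web UIs corresponds to the ancestral Euler scheduler:

```python
import torch
from diffusers import StableDiffusionXLPipeline, EulerAncestralDiscreteScheduler

# Placeholder file name; point this at your CyberRealistic XL checkpoint.
pipe = StableDiffusionXLPipeline.from_single_file(
    "cyberrealisticXL.safetensors", torch_dtype=torch.float16
).to("cuda")

# "Euler a" = ancestral Euler scheduler in diffusers.
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

# The custom seed from the "Advanced" settings goes into a torch.Generator.
generator = torch.Generator(device="cuda").manual_seed(1550152502)

image = pipe(
    prompt="establishing shot of a young woman, diagonal view, "
           "looking at the viewer, simple dark background",
    num_inference_steps=25,
    generator=generator,
).images[0]
image.save("step_01.jpg")
```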

2. The AI (the Model) reacts

establishing shot of a young woman, diagonal view, looking at the viewer, sleeveless blue dress, simple dark background
2025-09-05T14.43.22_1.jpg

It does. I will have to change the prompt to "closeup portrait" to bring the girl back closer to the camera. I also notice a subtle change in hair style. I think that is some of the variation called "bleeding" (correct me if I’m wrong): elements in the prompt (tokens in the generation process) can affect other elements; color bleeding, for example, can be really annoying. It’s OK here because I haven’t defined the hair yet. But camera distance first...

3. Adjusting the Camera

closeup portrait of a young woman, diagonal view, looking at the viewer, sleeveless blue dress, simple dark background
2025-09-05T14.45.11_1.jpg

Now she is back in front of the camera. The blue changed a bit. Hair styling next.

4. Some Hair Styling

closeup portrait of a young woman, diagonal view, looking at the viewer, short chestnut hair, hair parted left, sleeveless blue dress, simple dark background
2025-09-05T14.48.56_1.jpg


Short and chestnut, that’s what I want for this exercise. And it has an effect on the face, which is welcome to me. Please ignore the dress, at least for this phase of generation; let’s concentrate on the face. The poor girl doesn’t have truly distinct eyes yet.

5. Defining Eyes

closeup portrait of a young woman, diagonal view, looking at the viewer, brown eyes, short chestnut hair, hair parted left, sleeveless blue dress, simple dark background
2025-09-05T14.56.54_1.jpg


My experience over the months: defining the eyes in the prompt can affect all parts of the face, including the hair. And the effects are mostly positive. We can play with that.

For now, I think we have a simple but OK portrait of this unknown young woman. We are on the path I wanted to take, but there’s refinement to bring in. Take the eyes: can we get more detail, a bit more beauty, please? We can...

6. Get some Detail Quality

closeup portrait of a young woman, diagonal view, looking at the viewer, perfectly detailed brown eyes, short chestnut hair, hair parted left, sleeveless blue dress, simple dark background
2025-09-05T15.01.31_1.jpg


That’s it. The quality description in the prompt leads to some more precision in the drawing of the eyes. Now what about the skin? I’ll try...

7. More Detail Quality

closeup portrait of a young woman, diagonal view, looking at the viewer, perfectly detailed brown eyes, detailed realistic skin structure, short chestnut hair, hair parted left, sleeveless blue dress, simple dark background
2025-09-05T15.04.26_1.jpg

A small refinement in the prompt, a slight effect: more skin structure and a subtle change in the illusion of depth, I think. And... a nice effect: I put more emphasis on skin in the prompt, and the model brushes the girl’s hair back a bit to show more facial skin. Fine. (Is this AI??)

Now, can we get some subtle light please?

8. Defining a Light Setting

closeup portrait of a young woman, diagonal view, looking at the viewer, perfectly detailed brown eyes, detailed realistic skin structure, short chestnut hair, hair parted left, sleeveless blue dress, simple dark background, moody lighting from the right side
2025-09-05T15.06.53_1.jpg

OK, this should be sufficient for this little exercise. A bit of side lighting for a bit more depth and structure. If you wish, you could play around with cinematic, chiaroscuro or dramatic lighting for more effect.

Let’s try some generalized overall quality descriptions...

9. General Quality Definitions

masterpiece, best quality, closeup portrait of a young woman, diagonal view, looking at the viewer, perfectly detailed brown eyes, detailed realistic skin structure, short chestnut hair, hair parted left, sleeveless blue dress, simple dark background, moody lighting from the right side
2025-09-05T15.08.26_1.jpg

"Masterpiece" and "best quality" are set as the starting tokens of the prompt. Do they really have a remarkable effect? It seems to me that we get more refined lighting, more natural light and shadow. That’s fine.

Let us leave the prompt at this point and dive into the possible effects of the number of sampling steps. I started with 25 steps, as I mostly do. Let’s give the model 35 steps.

10. Tuning the Sampler

masterpiece, best quality, closeup portrait of a young woman, diagonal view, looking at the viewer, perfectly detailed brown eyes, detailed realistic skin structure, short chestnut hair, hair parted left, sleeveless blue dress, simple dark background, moody lighting from the right side
2025-09-05T15.13.32_1.jpg

OK, that really has an impact on the whole image. Did we get more depth and plasticity? Try some more steps? 40 steps...

11. Some more Tuning

masterpiece, best quality, closeup portrait of a young woman, diagonal view, looking at the viewer, perfectly detailed brown eyes, detailed realistic skin structure, short chestnut hair, hair parted left, sleeveless blue dress, simple dark background, moody lighting from the right side
2025-09-05T15.17.32_1.jpg


Yes, we get more refinement. As I have read in many articles on the generation process, the number of sampling steps needed to get an image with fine details and structure depends heavily on the model we use. Some models need only around 20 to 30 steps; others can deliver more and more detail and refinement with up to 100+ steps. We have to try it out.
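Such a steps experiment is easy to script locally. A sketch, again assuming the placeholder checkpoint file from the earlier sketch, where the seed stays fixed so the step count is the only variable:

```python
import torch
from diffusers import StableDiffusionXLPipeline, EulerAncestralDiscreteScheduler

pipe = StableDiffusionXLPipeline.from_single_file(
    "cyberrealisticXL.safetensors", torch_dtype=torch.float16  # placeholder file name
).to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

final_prompt = (
    "masterpiece, best quality, closeup portrait of a young woman, diagonal view, "
    "looking at the viewer, perfectly detailed brown eyes, detailed realistic skin "
    "structure, short chestnut hair, hair parted left, sleeveless blue dress, "
    "simple dark background, moody lighting from the right side"
)

# Only num_inference_steps varies; everything else, including the seed, is fixed.
for steps in (25, 35, 40):
    generator = torch.Generator(device="cuda").manual_seed(1550152502)  # fresh each run
    image = pipe(prompt=final_prompt, num_inference_steps=steps,
                 generator=generator).images[0]
    image.save(f"portrait_{steps}_steps.jpg")
```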

Where we are now

With image 11 we have reached a result that, I think, makes the point I wanted to demonstrate clear:

Starting image generation with a fixed seed and then building up content and quality step by step gives us quite a lot of control over the generation process. At each step we can test and experiment with elements in our prompt to see where variations and refinements lead us. If something doesn’t work out, we can go back one step and start another trial. Our basic image remains the same.

And by the way: We know the effect of every token in our prompt.
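To close the loop, the whole build-up can be replayed as a simple loop over the prompt stages. Again just a sketch with the assumed local setup from the earlier sketches:

```python
import torch
from diffusers import StableDiffusionXLPipeline, EulerAncestralDiscreteScheduler

pipe = StableDiffusionXLPipeline.from_single_file(
    "cyberrealisticXL.safetensors", torch_dtype=torch.float16  # placeholder file name
).to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

# Prompt stages from this article (steps 1, 2 and 9 shown; add the
# remaining stages from the sections above in the same way).
stages = [
    "establishing shot of a young woman, diagonal view, looking at the viewer, "
    "simple dark background",
    "establishing shot of a young woman, diagonal view, looking at the viewer, "
    "sleeveless blue dress, simple dark background",
    "masterpiece, best quality, closeup portrait of a young woman, diagonal view, "
    "looking at the viewer, perfectly detailed brown eyes, detailed realistic skin "
    "structure, short chestnut hair, hair parted left, sleeveless blue dress, "
    "simple dark background, moody lighting from the right side",
]

# The same seed at every stage keeps the main subject stable.
for i, prompt in enumerate(stages, start=1):
    generator = torch.Generator(device="cuda").manual_seed(1550152502)
    image = pipe(prompt, num_inference_steps=25, generator=generator).images[0]
    image.save(f"stage_{i:02d}.jpg")
```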

Where could we go from here?

Let us stop here, for now. Some effects have been very subtle (on my tablet I have to look very closely to recognize some of them, but on my PC monitor they were clearly recognizable). But we can play with the strength, or weight, of tokens at every step to get more recognizable effects.
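For example, in the A1111-style emphasis syntax, which as far as I know the CivitAI generator also understands, a weight like 1.3 strengthens a token:

closeup portrait of a young woman, diagonal view, looking at the viewer, (perfectly detailed brown eyes:1.3), detailed realistic skin structure, short chestnut hair, hair parted left, sleeveless blue dress, simple dark background, moody lighting from the right side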

At least I have learned quite a bit about using a fixed seed to get some more step-by-step control over the generation process of an image. Maybe some of YOU have, too. That would be very pleasing.

Try it for yourself if you like, and have lots of fun generating and creating, as I will. In your generations, change clothes, surroundings, position, pose, camera angle, attitude... whatever. As long as the seed remains the same, every image is an entry point to a new branch of images you can generate (with the same main subject), and you can come back to any image if you wish to start over again. So finally...

Thanks a lot for reading (commenting, criticizing...).

Maybe, in a few days, if you visit my profile, you will meet this unknown young woman again. Could be in a bikini or a bathrobe or less...

2025-09-05T15.53.39_1.jpg


Final remark: Please pardon my English - I'm not a native speaker/writer, so...

All images generated online by Cronokidiffusion466 using the CivitAI Online Generator.

Copyright © 2025 Cronokidiffusion466
