The goal of this workflow is to combine two decisive strengths in one pipeline:
on the one hand, the level of detail SDXL achieves in human subjects (skin/details/realism), and on the other, the rich variety of Flux-generated backgrounds.
Of course, this cannot be achieved in a single one-step workflow.
Therefore, a multi-stage system had to be constructed.
Explanation of the workflow:
Step_1)
First comes the SDXL generation module, where the basis for the desired motif is created. It loads two models (which can also be offset against each other in the logic), plus the CLIP data and the VAE.
The prompt is resolved and rendered via a multi-cycle sampler. In parallel, the generated image data can be processed through the screen-design modules.
In Cycle_mod, pay special attention to the usual settings. Here, too, it takes approx. 4-5 generations before the sampler renders the prompt faithfully.
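For anyone who drives ComfyUI from a script rather than the UI, here is a minimal sketch of how a single SDXL cycle like this can be queued through ComfyUI's HTTP API. This is not my workflow graph: the node IDs, checkpoint name, prompt, and sampler settings are placeholders, and it loads only one model instead of two.

```python
import json
import urllib.request

# Minimal single-model SDXL graph in ComfyUI's API format.
# Checkpoint name, prompts, and sampler settings are placeholders.
graph = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sdxl_base_1.0.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "portrait of a woman, detailed skin, realism",
                     "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "background, scenery",  # keep BG info out (see Note)
                     "clip": ["1", 1]}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0],
                     "negative": ["3", 0], "latent_image": ["4", 0],
                     "seed": 42, "steps": 30, "cfg": 6.5,
                     "sampler_name": "dpmpp_2m", "scheduler": "karras",
                     "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",
          "inputs": {"images": ["6", 0], "filename_prefix": "sdxl_motif"}},
}

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",  # default local ComfyUI address
    data=json.dumps({"prompt": graph}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(req)
```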
Step_2)
The background data is generated in the second step via a Flux sub-workflow. Several model loaders are available there for easy handling.
A prompt cascade is also included so that you can switch quickly during the work process; it contains ready-made prompts for different backgrounds and themes.
Once a suitable background has been found, the Flux sampler can be set to seed mode "fixed" and the BG saving can be switched off to save working time during the actual motif generation.
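If you drive the workflow via the API instead of the UI, the same "fixed seed + skip regeneration" idea can be sketched as below. The cache path, the node key "flux_sampler", and the seed value are all placeholders, not names from the actual workflow.

```python
import json
import os
import urllib.request

BG_FILE = "output/flux_bg.png"  # hypothetical cache path for the saved BG
FIXED_SEED = 123456789          # the seed of the background you settled on

def ensure_background(flux_graph: dict) -> None:
    """Queue the Flux background graph only when no cached BG exists yet.

    Skipping the call entirely is the API equivalent of switching
    the BG saving off once the seed is fixed.
    """
    if os.path.exists(BG_FILE):
        return
    # Seed mode "fixed": pin the sampler seed so reruns reproduce the BG.
    flux_graph["flux_sampler"]["inputs"]["seed"] = FIXED_SEED
    req = urllib.request.Request(
        "http://127.0.0.1:8188/prompt",
        data=json.dumps({"prompt": flux_graph}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```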
Step_3)
Once the background has been created and saved, the image data is combined. To do this, the data stream runs through a complex mask processor, which calculates and converts the free space for the background replacement before the subsequent SDXL cycle inserts the raw data from the first generation via a prompt repeat.
Once the process has been successfully completed, the desired motif should be saved.
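Conceptually, the combination step boils down to a masked composite. The sketch below reproduces that idea with Pillow, not with the workflow's actual mask processor; all filenames are placeholders, and the real node chain additionally refines the result in the final SDXL cycle.

```python
from PIL import Image

# Masked composite: white areas of the mask keep the SDXL subject,
# black areas are filled with the Flux background.
subject = Image.open("sdxl_motif.png").convert("RGB")
background = Image.open("flux_bg.png").convert("RGB").resize(subject.size)
mask = Image.open("subject_mask.png").convert("L")  # grayscale matte

combined = Image.composite(subject, background, mask)
combined.save("combined_raw.png")  # raw input for the final SDXL cycle
```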
Note:
It is very important to understand that the first SDXL cycle only creates the template, so it makes little sense to describe backgrounds in its prompt. The less background information is present in the SDXL cycles, the more effectively the Flux BG is integrated.
On the first run, load placeholder images into the upload nodes; otherwise the run will fail on the empty image inputs.
From the second run onwards, you can select the freshly generated image data instead.
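If you have no spare images at hand, neutral placeholders can be generated in a few lines; the sizes and filenames here are arbitrary.

```python
from PIL import Image

# One-time helper: neutral gray placeholders for the upload nodes,
# so the very first run does not fail on empty image inputs.
for name in ("placeholder_motif.png", "placeholder_bg.png"):
    Image.new("RGB", (1024, 1024), color=(128, 128, 128)).save(name)
```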
Secondary functions:
This workflow also offers the option of selecting ready-made backgrounds and motif elements via upload and wiring them in accordingly. There are many ways to create beautiful images.
Just try things out and experiment with different variations; I have included simple tips for this in the workflow.
Conclusion:
This workflow is conceptually quite simple and still has a lot of potential for expansion (Upscaler/Duo-Flux/Inpaint/Outpaint/LoRA-Merging...).
Unfortunately, however, it is also a performance hog and places immense demands on the hardware. I have not yet run it under Low_VRam/Low_RAM. Judging purely by the numbers, though, systems with less than 12GB VRAM should expect to run out of memory.
If there are conflicts with the Anything Everywhere transmitters: these are keyed to the node colors. If necessary, connect the links manually until you are familiar with this.
This workflow runs on my local DEV system with 126GB RAM and an RTX 4080 Super (Studio driver).
Prompt executed in 140.20 seconds
NOTE: In my team we rename nodes/slots internally. In case of difficulties, the renamed nodes can simply be replaced with the originals.
I am happy to answer questions, but it can always take a while, as I usually work offline.
Happy Generation