Please read everything before asking!
The Workflow
By default, this workflow takes a 1 MP image and edits it with another 1 MP image (max 1024x1024 px), then upscales the result to twice the size (max 2048x2048). Unlike most of my workflows, this one uses custom nodes that are not commonly seen, like Qwen Edit Utils and LayerStyle, along with the GGUF node; I always use GGUF models, though, so nothing unusual there.
How this works
Upload the image you want to edit to "Main image". The workflow uses its size as the latent size; however, the image is always scaled to 1 MP, meaning the longest side will always be 1024 px. For this example, it's 1024x1024 by default.
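For reference, the sizing arithmetic works out roughly like this. This is a minimal sketch, not the actual node code; fit_to_1mp is a hypothetical helper, and the real resize node in the workflow may round dimensions differently.

```python
# Illustrative only: scale any input so its longest side is 1024 px,
# keeping the aspect ratio (roughly 1 MP total).
def fit_to_1mp(width: int, height: int, max_side: int = 1024) -> tuple[int, int]:
    scale = max_side / max(width, height)
    # Snap to multiples of 8, which latent-space models generally expect.
    new_w = max(8, round(width * scale / 8) * 8)
    new_h = max(8, round(height * scale / 8) * 8)
    return new_w, new_h

print(fit_to_1mp(3000, 2000))  # -> (1024, 680)
```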
Upload the secondary image, the one you want to use to modify the main image, to "Image 2". Again, I recommend using a 1 MP image (1024 px on the longest side).
Write your positive prompts in the positive prompt node (green) and your negative prompts in the negative one (red). There is a node called "Qwen instructions" with text that helps Qwen understand your images; I recommend not changing it.
The Lightning LoRA is in brown; you might want to disable it for better quality, but then you will need to add more steps in the KSampler, too.
The image then goes to another step, which upscales your output to twice the size of the first output (up to 2048 px on the longest side). You will see a Qwen Upscale LoRA and a Text Encoder with the text "Upscale this picture to 4K resolution"; don't change it or the LoRA won't work.
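Put differently, the second stage aims for double the first-stage dimensions, capped at 2048 px on the longest side. The sketch below just illustrates that arithmetic; upscale_target is a hypothetical helper, and the workflow's upscale nodes handle this internally.

```python
# Illustrative only: double the first output, capping the longest side at 2048 px.
def upscale_target(width: int, height: int,
                   factor: float = 2.0, cap: int = 2048) -> tuple[int, int]:
    new_w, new_h = width * factor, height * factor
    longest = max(new_w, new_h)
    if longest > cap:
        shrink = cap / longest  # pull both sides back under the cap
        new_w, new_h = new_w * shrink, new_h * shrink
    return round(new_w), round(new_h)

print(upscale_target(1024, 680))  # -> (2048, 1360)
```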
There are more notes inside the workflow itself.
Resources
Text Encoders
Diffusion Models
VAE
Custom Nodes
LoRAs
Psssttt...
😊 Please donate some Buzz if you can, so I can make LoRAs! I've been planning to create more Chroma LoRAs and Z-Image-base LoRAs as soon as Civitai supports training them.

