TL;DR: My Workflow at a Glance
- I use ChatGPT with custom instructions to build structured, cinematic prompts.
- I generate base images using tools like Sora, Recraft, Imagen, and Grok.
- I refine images using image-to-image passes in TensorArt with FLUX Dev fp32 or FLUX.1 Krea Dev.
- I sometimes start with SDXL models via CivitAI, then polish in TensorArt.
- I use LoRAs and denoise strengths creatively to adjust pose, styling, or realism.
Hey everyone, I often get DMs asking:
"What model are you using?"
"Which tools are in your pipeline?"
"How do you get that level of detail or style?"
Rather than replying each time, I've put together this overview of my AI image creation workflow. It breaks down the tools I use, how I chain them together, and the stylistic strategies I follow, all in one place.
My Tool Set
Prompt Generation Support
I use ChatGPT with a custom instruction setup to help structure and generate my prompts.
My custom instructions are designed to follow a cinematic, literal format, focusing on realism, character coding, and worldbuilding.
I treat this like having a "visual director's assistant": it helps me consistently output high-quality prompts that translate well across different generators.
Inspired by techniques from Creating Photorealistic Images With AI: Using Stable Diffusion, I rely on a consistent prompt formula to reduce trial and error and ensure stylistic continuity across workflows.
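To show what that formula looks like in practice, here's a minimal sketch of a structured, cinematic prompt layout in Python. The field names, ordering, and example values are illustrative assumptions rather than my actual custom instructions; the point is that every prompt fills the same slots in the same order, which is what keeps results consistent across generators.

```python
# Minimal sketch of a structured "cinematic" prompt formula.
# The fields and default suffix are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class ScenePrompt:
    subject: str       # who or what the shot is about
    setting: str       # location and worldbuilding details
    lighting: str      # light quality, direction, time of day
    camera: str        # lens, framing, depth of field
    mood: str          # tone and atmosphere
    suffix: str = "photorealistic, natural skin texture, subtle film grain"

    def render(self) -> str:
        # Literal, comma-separated ordering keeps output predictable across tools.
        return ", ".join([self.subject, self.setting, self.lighting,
                          self.camera, self.mood, self.suffix])

print(ScenePrompt(
    subject="a weathered lighthouse keeper in a wool coat",
    setting="rocky northern coastline at dusk, storm clouds on the horizon",
    lighting="low golden side light with a soft rim from the lantern room",
    camera="85mm portrait lens, shallow depth of field, eye-level framing",
    mood="quiet, windswept, contemplative",
).render())
```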
Prompt-Based Generators
- Sora
- Recraft
- Google Imagen
- Grok
- Other closed-source tools
Post-Processing & Refinement
- TensorArt (image-to-image passes using FLUX Dev fp32 or FLUX.1 Krea Dev)
- CivitAI Generator (for SDXL-based model generations)
Modifiers
- LoRAs (for styling, detail fixes, or expressive tweaks)
- Prompt suffixes (for stylistic or implied character traits)
My Core Workflows
Rather than doing the same thing every time, I switch between three main pipelines depending on the goal. Think of them as cinematic production styles, each suited to different needs.
Closed-Source → Flux Finisher
When I use this: When I want convenience, speed, or a strong base composition from closed-source tools like ChatGPT or Imagen.
How it flows:
1. Generate a base image using a web-based tool (ChatGPT, Sora, etc.)
2. Upload the result to TensorArt
3. Apply image-to-image using FLUX Dev fp32 or FLUX.1 Krea Dev
Denoise strength: typically 0.1–0.4
Purpose: add detail, realistic lighting, and refined texture
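I run this pass through TensorArt's web UI, but it maps onto a standard Flux image-to-image call. Here's a rough local equivalent using Hugging Face diffusers, purely as a sketch: the model ID, step count, and guidance value are assumptions, and the UI's "denoise strength" corresponds to the strength parameter.

```python
# Rough local sketch of the Flux refinement pass (not my exact TensorArt settings).
import torch
from diffusers import FluxImg2ImgPipeline
from diffusers.utils import load_image

pipe = FluxImg2ImgPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

base = load_image("base_from_chatgpt.png")  # output from the closed-source generator
refined = pipe(
    prompt="same scene, photorealistic detail, natural lighting, filmic texture",
    image=base,
    strength=0.3,            # the 0.1-0.4 "denoise" range: keeps the base composition
    guidance_scale=3.5,      # assumed value, tune to taste
    num_inference_steps=28,  # assumed value
).images[0]
refined.save("refined.png")
```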
Midpoint Modifier Workflow
When I use this: When I want to alter the image's tone, styling, or silhouette before refining.
How it flows:
1. Start with a web-based image (ChatGPT, etc.)
2. Pass it through CivitAI Generator with ~0.5 denoise
3. Optionally add LoRAs for style, structure, or enhancement
4. Upload to TensorArt
5. Run image-to-image with FLUX Dev fp32 or FLUX.1 Krea Dev
6. Use 0.1–0.4 denoise to polish surface realism
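For the CivitAI midpoint step, the rough local equivalent is an SDXL image-to-image pass at around 0.5 strength, optionally with a LoRA loaded, before handing the result to the Flux pass sketched above. Again, this is only a sketch: the checkpoint, LoRA file, and prompt are placeholders.

```python
# Sketch of the midpoint pass: SDXL image-to-image at ~0.5 denoise to reshape
# tone/styling before the Flux polish. Paths and the LoRA are placeholders.
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("./loras/style_shift.safetensors")  # optional, hypothetical LoRA

midpoint = pipe(
    prompt="same subject, moodier styling, stronger silhouette, overcast light",
    image=load_image("base_from_chatgpt.png"),
    strength=0.5,  # enough to change tone and silhouette while keeping the subject
).images[0]
midpoint.save("midpoint.png")  # this is what then goes to TensorArt for the Flux pass
```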
CivitAI → TensorArt Finalizer
When I use this: When I want to work entirely within SDXL or need full control over structure and stylization.
How it flows:
1. Generate a base image using CivitAI Generator with an SDXL model
2. Optionally add LoRAs for specific styling or character coding
3. Export the image
4. Upload to TensorArt
5. Run image-to-image with Flux at 0.1–0.4 denoise for refinement
Pro Tip: SDXL images are often strong on their own, but a final Flux pass adds depth, tone, and cinematic realism.
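The local analogue of this route is a straight SDXL text-to-image generation with whatever checkpoint and LoRAs the look calls for, followed by the same Flux image-to-image finish at 0.1–0.4 strength. The checkpoint, LoRA, and prompt below are placeholders, not specific recommendations.

```python
# Sketch of the SDXL-first route: generate the base with a LoRA, export it,
# then refine with the Flux image-to-image pass shown earlier.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("./loras/character_style.safetensors")  # hypothetical styling LoRA

base = pipe(
    prompt="full-body portrait, rain-soaked neon alley, cinematic framing",
    negative_prompt="lowres, blurry, deformed hands",
    num_inference_steps=30,
).images[0]
base.save("sdxl_base.png")  # export, then upload to TensorArt for the Flux finish
```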
Final Thoughts
This is the core process behind most of my AI images. Over time, I've refined it to balance creative control with platform safety, blending cinematic aesthetics, technical precision, and smart prompt structuring.
Whether I'm:
generating from scratch using web-based tools,
remixing structure and tone in CivitAI,
or applying final polish in TensorArt with Flux,
the key is layered control: knowing which part of the process to adjust.
If you're curious about any part of this workflow, feel free to bookmark or share this guide. Hope it helps!
Note on Compatibility
One detail worth mentioning: as of now, CivitAI doesn't seem to recognize TensorArt as an external generator. I suspect this is because TensorArt uses a ComfyUI backend, which may not align with how CivitAI tracks or validates generation sources. That said, this is a bit outside my technical wheelhouse, so if you know more about this, I'd love to hear your thoughts!
Questions? Drop them in the comments or DM me.
Links & Resources
Here are some helpful tools, guides, or references I use or recommend: