Updated: May 9, 2026
This ComfyUI workflow is designed for consistent pose control, lighting transfer, and Qwen Image Edit 2511 image reconstruction using VNCCS Pose Studio. The main purpose of this workflow is to let creators control a character’s pose, camera framing, and lighting direction more precisely, then use Qwen Image Edit 2511 to redraw the target image while preserving character identity and improving visual consistency.
The workflow combines three important ideas: pose control, lighting control, and image-edit reconstruction. Instead of relying only on a text prompt such as “make the character pose like this” or “add cinematic lighting,” this workflow uses VNCCS Pose Studio to create a controlled pose and lighting reference, then sends that result into Qwen 2511 as visual guidance. This gives creators a more controllable way to adjust body posture, camera perspective, and light direction.
The workflow is built around Qwen Image Edit 2511, using qwen_image_edit_2511_bf16.safetensors as the main editing model, Qwen-Image-Edit-2511-Lightning-4steps-V1.0-bf16.safetensors as the fast generation LoRA, VNCCS_PoseStudioQIE2511_V2.safetensors as the specialized pose and lighting control LoRA, qwen_2.5_vl_7b_fp8_scaled.safetensors as the Qwen vision-language text encoder, and qwen_image_vae.safetensors as the VAE. This combination makes the workflow suitable for fast controlled image editing, pose-guided character reconstruction, and lighting-aware generation.
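For local use, the sketch below shows where these files typically go in a ComfyUI install, plus a small check for missing ones. The folder names are common defaults and may differ in your setup or node packs.

```python
import os

# Typical ComfyUI folder layout for the models listed above.
# These paths are the common defaults; your install may differ.
MODEL_FILES = {
    "models/diffusion_models": ["qwen_image_edit_2511_bf16.safetensors"],
    "models/loras": [
        "Qwen-Image-Edit-2511-Lightning-4steps-V1.0-bf16.safetensors",
        "VNCCS_PoseStudioQIE2511_V2.safetensors",
    ],
    "models/text_encoders": ["qwen_2.5_vl_7b_fp8_scaled.safetensors"],
    "models/vae": ["qwen_image_vae.safetensors"],
}

def report_missing(comfy_root: str) -> list[str]:
    """Return the expected model files that are not found under the ComfyUI root."""
    missing = []
    for folder, files in MODEL_FILES.items():
        for name in files:
            if not os.path.isfile(os.path.join(comfy_root, folder, name)):
                missing.append(f"{folder}/{name}")
    return missing
```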
The core workflow starts with a source image. The source image provides the character identity, appearance, clothing direction, and general visual style. VNCCS Pose Studio then creates a controlled pose and lighting setup. The Pose Studio section can output rendered pose reference images and a lighting prompt. This is useful because the workflow does not only describe a pose in words; it gives the model a visual structure to follow.
VNCCS Pose Studio includes body-shape, camera, pose, and light controls. The internal settings include mesh attributes, camera zoom, camera offset, model rotation, bone rotations, and light sources. This allows users to define the body posture, viewing angle, camera distance, light placement, and highlight direction before Qwen 2511 performs the final image edit. In practical use, this gives the workflow a “virtual pose studio” behavior: first build a rough controlled stage, then let Qwen 2511 redraw the final image.
The lighting control is especially important. Many image editing workflows can change pose but fail to keep light direction believable. This workflow generates a lighting prompt from the Pose Studio setup, then combines it with the user prompt through the string concatenation section. This means the final Qwen edit prompt can include both the intended character instruction and the lighting information generated by the pose studio. The result is more useful for cinematic portrait editing, fashion character posing, consistent AI model shots, and controlled image reconstruction.
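A minimal sketch of that concatenation step, assuming the lighting prompt arrives as plain text; the actual node may use a different separator:

```python
def build_edit_prompt(user_prompt: str, lighting_prompt: str) -> str:
    """Combine the user instruction with the Pose Studio lighting prompt.

    Sketch of the string-concatenation stage; the separator is an assumption.
    """
    parts = [p.strip() for p in (user_prompt, lighting_prompt) if p and p.strip()]
    return ", ".join(parts)

# Example:
# build_edit_prompt(
#     "Redraw the character in the new pose, keep the same identity and outfit",
#     "strong key light from the upper left, soft warm fill from the right",
# )
```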
The workflow also includes DWPreprocessor. This node can detect body, hand, and face pose information from the input image. It uses detection models such as YOLOX and DWPose to generate a pose reference. This makes the workflow useful for pose extraction and pose-aware editing. Users can analyze the original image’s posture, then use VNCCS Pose Studio and Qwen 2511 to create a more controlled pose transformation.
The image scaling section uses image_scale_pixel_v2. This prepares the input image and keeps it aligned for Qwen Image Edit processing. Resolution and alignment matter because pose control workflows can break if the image is too small, too distorted, or misaligned. The workflow uses a controlled total-pixel setting and alignment behavior to keep the Qwen editing route stable.
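The idea behind that scaling step can be sketched as a simple total-pixel rescale with side alignment; the node's exact parameter names and rounding rules may differ:

```python
import math

def scale_to_total_pixels(width: int, height: int,
                          target_pixels: int = 1024 * 1024,
                          align: int = 8) -> tuple[int, int]:
    """Rescale to roughly `target_pixels` while keeping the aspect ratio,
    then snap both sides to a multiple of `align`.

    A sketch of the image_scale_pixel_v2 idea, not the node's exact code."""
    scale = math.sqrt(target_pixels / (width * height))
    new_w = max(align, round(width * scale / align) * align)
    new_h = max(align, round(height * scale / align) * align)
    return new_w, new_h

# Example: scale_to_total_pixels(1920, 1080) -> (1368, 768)
```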
The prompt section uses TextEncodeQwenImageEditPlus. This node receives the Qwen text encoder, VAE, input image, optional reference images, and the final prompt. In this workflow, the prompt can be built from the user instruction and the generated lighting prompt. This allows Qwen 2511 to understand both the original image and the intended pose or lighting transformation.
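In an exported API-format workflow, the conditioning node is wired roughly as below; the node ids are placeholders and the exact input names may vary by ComfyUI version:

```python
# Hedged sketch of the TextEncodeQwenImageEditPlus wiring in API-format JSON.
# Node ids ("clip_loader", "vae_loader", ...) are placeholders, not real ids.
text_encode_node = {
    "class_type": "TextEncodeQwenImageEditPlus",
    "inputs": {
        "clip":   ["clip_loader", 0],          # Qwen 2.5 VL 7B text encoder
        "vae":    ["vae_loader", 0],           # qwen_image_vae
        "image1": ["scaled_source_image", 0],  # source image (identity, style)
        "image2": ["pose_studio_render", 0],   # pose / lighting reference
        "prompt": "user instruction plus the generated lighting prompt",
    },
}
```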
The workflow uses FluxKontextMultiReferenceLatentMethod with index_timestep_zero. This helps manage the reference latent behavior for Qwen Image Edit. In practical terms, it supports stronger reference-image conditioning and helps the model understand how the input image and pose reference should guide the final generation. This is important because the workflow is not pure text-to-image; it is image edit plus reference-guided reconstruction.
The model route uses ModelSamplingAuraFlow, CFGNorm, and KSampler. ModelSamplingAuraFlow adjusts the sampling behavior for the Qwen image editing model. CFGNorm helps stabilize guidance and reduce overcooking. The KSampler section uses the Lightning route with 4 steps, CFG 1, Euler sampler, simple scheduler, and full denoise. This makes the workflow fast enough for repeated pose tests and practical online use.
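Expressed as plain values, the sampling route described above looks like this; node wiring and the ModelSamplingAuraFlow shift value are left out because they depend on the published workflow:

```python
# Lightning sampling settings for the KSampler stage, as described above.
sampler_settings = {
    "steps": 4,              # Lightning 4-step LoRA route
    "cfg": 1.0,              # low CFG; CFGNorm keeps the guidance stable
    "sampler_name": "euler",
    "scheduler": "simple",
    "denoise": 1.0,          # full denoise: the image is rebuilt from the references
}
```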
Because this workflow uses a Lightning 4-step route, it is useful for iteration. Pose and lighting workflows often require several tests before the best result appears. Users may need to adjust the pose, camera zoom, body angle, light position, or prompt. A fast 4-step setup makes this easier than a heavy 40-step workflow.
This workflow is especially useful for consistent character image production. For example, a creator can keep the same character identity and clothing style, then generate different poses under controlled lighting. It can be used for model photo sets, character design sheets, fashion pose testing, cosplay-style renders, AI influencer images, game character previews, social media cover creation, product model display, and visual storytelling.
It is also useful for light-and-shadow experiments. A user can create a side light, front light, top light, or dramatic studio light setup in Pose Studio, then ask Qwen 2511 to redraw the character under that lighting. This is more controllable than only writing “cinematic lighting” in the prompt. The workflow gives the model a structured lighting guide.
Main features:
- Qwen Image Edit 2511 pose and lighting control workflow
- VNCCS Pose Studio integration
- Uses VNCCS_PoseStudioQIE2511_V2 LoRA
- Uses Qwen-Image-Edit-2511 Lightning 4-step LoRA
- Source image to controlled pose reconstruction
- Pose Studio body, camera, and lighting control
- DWPreprocessor pose detection support
- Lighting prompt generation and prompt concatenation
- Qwen 2.5 VL 7B FP8 text encoder support
- Qwen Image VAE support
- TextEncodeQwenImageEditPlus multi-image conditioning
- FluxKontextMultiReferenceLatentMethod reference latent control
- CFGNorm and ModelSamplingAuraFlow stabilization
- Fast 4-step KSampler route
- Suitable for consistent character posing, lighting transfer, and image edit reconstruction
Recommended use cases:
Pose control, character pose editing, lighting control, AI model pose generation, consistent character image sets, fashion pose testing, AI influencer image production, character design previews, social media cover images, cinematic portrait reconstruction, studio lighting transfer, pose-guided image editing, DWPose reference testing, Qwen 2511 controlled editing, RunningHub workflow publishing, and Civitai showcase images.
Suggested workflow:
Start by preparing a clear source image. The character should be visible, with readable body structure and enough detail for Qwen 2511 to preserve the identity. A clean portrait, half-body shot, full-body image, or character render works best. Avoid heavily blurred images, extreme occlusion, or images where the pose is impossible to read.
Use VNCCS Pose Studio to define the new pose and camera. Adjust the body pose, camera zoom, camera offset, model rotation, and framing until the reference pose matches your target composition. If you want a close-up image, keep the camera closer and avoid extreme full-body deformation. If you want a full-body pose, give the character enough canvas space.
Adjust lighting in Pose Studio. The workflow can use directional lights and light color settings to build a lighting reference. For cinematic portraits, use one strong key light and one softer fill light. For fashion images, keep lighting clean and flattering. For dramatic images, use stronger side light or rim light. The lighting prompt generated from this stage helps Qwen 2511 understand the intended light direction.
Use the DWPreprocessor if you want to extract pose information from an existing image. This is useful when you want to reference a real pose or reuse the structure of another image. Make sure body, hand, and face detection are enabled when needed. Pose detection quality strongly affects the final edit.
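As a reference, the DWPreprocessor options commonly look like the following in the comfyui_controlnet_aux node pack; the option names and model filenames here are typical defaults and may differ in your version:

```python
# Typical DWPreprocessor settings; names and defaults are assumptions based on
# common comfyui_controlnet_aux versions and may differ in your install.
dwpose_inputs = {
    "detect_body": "enable",
    "detect_hand": "enable",   # keep hands on when finger poses matter
    "detect_face": "enable",   # keep face on to anchor head orientation
    "resolution": 1024,        # roughly match the working resolution of the edit
    "bbox_detector": "yolox_l.onnx",           # YOLOX person detector
    "pose_estimator": "dw-ll_ucoco_384.onnx",  # DWPose keypoint model
}
```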
Write a direct prompt. The prompt should describe the subject, pose, camera framing, clothing, style, lighting, and what should remain consistent. For example: “Redraw the character from the input image using the pose and lighting reference, preserve the same identity, facial features, outfit style, and overall character design, with clean studio lighting and realistic body proportions.”
Use preservation rules. Since this workflow aims for consistency, tell the model what should not change. Mention that the face, identity, outfit theme, body proportions, hairstyle, and overall style should remain stable. If you want only pose and lighting changes, say that clearly.
Run the fast 4-step Lightning route for testing. Because the workflow is fast, test several poses and seeds. If the body posture is wrong, adjust the Pose Studio skeleton or DWPose reference. If lighting is weak, strengthen the lighting prompt or adjust the light setup. If the character identity drifts, simplify the prompt and strengthen preservation wording.
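If you run the workflow locally, a small seed sweep like the one below makes this kind of iteration easier; it assumes a local ComfyUI server, an exported API-format JSON, and that you know the KSampler node id from that export:

```python
import copy
import json
import random
import urllib.request

def queue_seed_sweep(api_workflow_path: str, sampler_node_id: str,
                     n_runs: int = 4, host: str = "http://127.0.0.1:8188") -> None:
    """Queue the same API-format workflow several times with different seeds.

    Sketch only: assumes a local ComfyUI server and that `sampler_node_id`
    points at the KSampler node in your exported JSON."""
    with open(api_workflow_path, "r", encoding="utf-8") as f:
        base = json.load(f)
    for _ in range(n_runs):
        wf = copy.deepcopy(base)
        wf[sampler_node_id]["inputs"]["seed"] = random.randint(0, 2**31 - 1)
        req = urllib.request.Request(
            f"{host}/prompt",
            data=json.dumps({"prompt": wf}).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)
```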
Check the final output carefully. Look at body proportion, hand placement, face identity, lighting direction, shadow consistency, clothing continuity, and camera framing. A good result should look like the same character was re-photographed or redrawn in a new pose and lighting setup, not like a completely different person.
For character sets, keep the same source image and style prompt, then change only pose and lighting controls. This helps produce a more consistent series. For fashion or AI influencer use, keep the face and outfit identity stable while changing pose and studio lighting. For Civitai examples, show the source image, pose control image, and final output together so users can understand the workflow logic.
This workflow is designed for creators who need more control than normal prompt-based image editing. By combining VNCCS Pose Studio, DWPose-style detection, Qwen Image Edit 2511, Lightning acceleration, reference-latent conditioning, and lighting prompt control, it provides a practical route for consistent pose and light editing inside ComfyUI.
🎥 YouTube Video Tutorial
Want to know what this workflow actually does and how to start fast?
This video explains what the tool is, how to launch the workflow instantly, and the core design logic behind it, with no local setup and no complicated environment.
Everything starts directly on RunningHub, so you can experience it in action first.
👉 YouTube Tutorial: https://youtu.be/iTMudoGSbBA
Before you begin, I recommend watching the video in full; getting the complete context helps you understand the tool faster and avoid common pitfalls.
⚙️ RunningHub Workflow
Try the workflow online right now — no installation required.
👉 Workflow: https://www.runninghub.ai/post/2020409094468800513/?inviteCode=rh-v1111
If the results meet your expectations, you can later deploy it locally for customization.
🎁 Fan Benefits: Register to get 1,000 points plus 100 points per daily login, and enjoy 4090-level performance with 48 GB of memory!
📺 Bilibili Updates (Mainland China & Asia-Pacific)
If you’re in the Asia-Pacific region, you can watch the video below to see the workflow demonstration and creative breakdown.
📺 Bilibili Video: https://www.bilibili.com/video/BV1iTcwzJEtD/
☕ Support Me on Ko-fi
If you find my content helpful and want to support future creations, you can buy me a coffee ☕.
Every bit of support helps me keep creating — just like a spark that can ignite a blazing flame.
👉 Ko-fi: https://ko-fi.com/aiksk
💼 Business Contact
For collaboration or inquiries, please contact aiksk95 on WeChat.
📦 Model Resources (Quark Drive)
I keep updating model resources on Quark Drive (夸克网盘):
👉 https://pan.quark.cn/s/20c6f6f8d87b
These resources are mainly for local users, to support creation and learning.

