Redraw Paste-Back Qwen 2511 Local Inpainting Workflow

Updated: May 9, 2026

Tag: character

File: Archive (Other), 10.58 KB (1 variant available)

Type: Workflows
Stats: 38
Published: May 9, 2026
Base Model: Qwen
Hash (AutoV2): CFBB218885
Creator: AIKSK

This ComfyUI workflow is designed for Qwen Image Edit 2511 local inpainting, masked region editing, and redraw-and-paste-back image correction. The main purpose of this workflow is to let creators modify only a selected part of an image while preserving the original full-frame composition, camera angle, background, subject identity, and non-edited areas as much as possible.

Unlike a full-frame image editing workflow, this workflow focuses on local precision. It does not ask the model to reinterpret the entire image unnecessarily. Instead, it extracts or prepares the target area, sends the selected region into Qwen Image Edit 2511 for controlled repainting, then restores the edited result back into the original image position. This makes it useful for fixing specific problems inside an image without destroying the rest of the picture.

The workflow is built around Qwen Image Edit 2511. It uses qwen_image_edit_2511_bf16.safetensors as the main editing model, Qwen-Edit-2511-Lightning-4steps-V1.0-bf16.safetensors for faster generation, qwen_2.5_vl_7b_fp8_scaled.safetensors as the vision-language text encoder, and qwen_image_vae.safetensors as the VAE. This gives the workflow strong image understanding, instruction-following ability, and practical local editing speed.

The core workflow logic is “redraw first, then paste back.” In normal image editing, the model may change too much of the original image. Faces may drift, backgrounds may shift, lighting may change, or the whole image may become a new generation. This workflow is designed to reduce that problem by limiting the edit to a selected region. The model can focus on the masked area, while the final RestoreCropBox stage places the corrected result back into the original canvas.

This is especially useful when the original image is already good, but one part needs repair or replacement. For example, the workflow can be used to fix a hand, repair a face, change a clothing area, replace an object, adjust a small prop, correct a product detail, repaint part of the background, modify an accessory, or improve a broken AI-generated region. Instead of rerolling the whole image, you only repaint the problem area.

The workflow begins with image and mask preparation. A source image is loaded, and the target region is defined with a mask. The mask tells the workflow where editing should happen. Mask quality is extremely important. A mask that is too small may not give the model enough room to blend the new content naturally. A mask that is too large may affect areas that should remain unchanged. This workflow includes mask preview and mask processing tools so users can check and adjust the selected region before generation.
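
Before running the model, it can also help to sanity-check how much of the frame the mask covers; a small fraction usually means a focused local edit, while a very large fraction behaves more like a full repaint. A minimal sketch with PIL and NumPy (the file name and thresholds are only illustrative):

```python
import numpy as np
from PIL import Image

def mask_coverage(mask_path: str) -> float:
    """Fraction of the frame covered by the editing mask (0.0 to 1.0)."""
    mask = np.array(Image.open(mask_path).convert("L"))
    return float((mask > 127).mean())

coverage = mask_coverage("hand_mask.png")  # hypothetical file name
if coverage < 0.005:
    print("Mask may be too tight for natural blending")
elif coverage > 0.5:
    print("Mask covers most of the frame; this is closer to a full repaint")
```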

The workflow also includes MaskGrow and mask-to-image conversion tools. MaskGrow helps expand and soften the editing region. This is useful because hard mask edges often create visible seams after repainting. A slightly expanded and blurred mask usually gives the model more room to create natural transitions between the edited area and the untouched original image.
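
The grow-and-soften idea can be reproduced outside ComfyUI with a dilation followed by a blur. The sketch below only illustrates the concept behind MaskGrow; it is not the node's implementation, and the pixel values are assumptions to tune per image:

```python
from PIL import Image, ImageFilter

def grow_and_soften_mask(mask: Image.Image, grow_px: int = 16, blur_px: int = 8) -> Image.Image:
    """Expand a binary mask and soften its edge to reduce visible seams after repainting."""
    mask = mask.convert("L")
    kernel = 2 * grow_px + 1                                 # MaxFilter needs an odd kernel size
    grown = mask.filter(ImageFilter.MaxFilter(kernel))       # dilate the white (editable) region
    return grown.filter(ImageFilter.GaussianBlur(blur_px))   # turn the hard edge into a gradient
```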

ImageResizeKJv2 is used to resize the image and mask into a controlled working resolution. This helps keep the workflow stable across different input image sizes. The included setup uses a high-resolution processing path: the image is prepared at a large working size, with both dimensions snapped to a fixed divisible-by value. This preserves detail while keeping the generation pipeline manageable.
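
The working-resolution logic amounts to capping the longest edge and snapping both sides to a multiple of a fixed divisor. A minimal sketch of that arithmetic; the actual divisor and edge length depend on the values stored in the workflow's ImageResizeKJv2 node:

```python
def working_size(width: int, height: int, longest_edge: int = 1536, divisible_by: int = 16):
    """Scale so the longest edge fits, then round both sides to a multiple of `divisible_by`."""
    scale = longest_edge / max(width, height)
    w, h = width * scale, height * scale
    w = max(divisible_by, round(w / divisible_by) * divisible_by)
    h = max(divisible_by, round(h / divisible_by) * divisible_by)
    return w, h

print(working_size(1920, 1080))  # (1536, 864)
```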

QwenEditConfigPreparer is one of the core preparation nodes. It prepares the image, mask, reference configuration, visual-language input, crop mode, resize behavior, and longest-edge settings for Qwen Image Edit. This node helps define how the image is passed into the model and how the masked area is interpreted. In local editing workflows, this preparation stage is important because the model must understand both the original image and the target edit area.
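
As a rough illustration, the kind of settings this preparation stage collects can be pictured as a small configuration object. The key names below are invented for readability and are not the node's actual input names:

```python
# Hypothetical sketch of an edit configuration; not QwenEditConfigPreparer's real API.
edit_config = {
    "crop_mode": "mask_region",        # edit a focused crop instead of the full frame
    "resize_behavior": "longest_edge",
    "longest_edge": 1536,              # working resolution handed to the editor
    "reference_images": 1,             # how many reference inputs accompany the edit
}
print(edit_config)
```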

The central text-conditioning node is TextEncodeQwenImageEditPlusCustom. This node receives the image edit configuration, Qwen text encoder, VAE, and the user instruction. The workflow’s instruction logic is designed to describe the key features of the input image, such as color, shape, size, texture, objects, and background, then explain how the user’s text instruction should alter the image. This is useful because Qwen Image Edit 2511 benefits from clear editing instructions rather than vague prompts.

The prompt should describe what needs to change and what must remain unchanged. For local inpainting, preservation language is very important. A good prompt does not only say “change this area.” It should also say “keep the original background, camera angle, lighting, face, pose, and non-masked regions unchanged.” This helps reduce unwanted global changes.
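
One practical way to keep this preservation language consistent across runs is to build the prompt from two parts: the edit instruction and a fixed preservation clause. A small sketch (the wording is only an example):

```python
PRESERVE = (
    "Keep the original background, camera angle, lighting, face, pose, "
    "and all non-masked regions unchanged."
)

def build_edit_prompt(edit_instruction: str) -> str:
    """Combine what should change with what must stay the same."""
    return f"{edit_instruction.strip()} {PRESERVE}"

print(build_edit_prompt("Repair the hand with natural, correctly proportioned fingers."))
```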

The workflow uses ConditioningZeroOut for negative conditioning management. It also uses ModelSamplingAuraFlow and CFGNorm to stabilize the model behavior. CFGNorm is useful in image editing because too much guidance can over-transform the image. In local editing, the goal is not maximum creativity, but controlled correction. The edit should be strong enough to solve the target issue, but conservative enough to preserve the original image.
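
The idea behind guidance normalization can be illustrated in a few lines: apply classifier-free guidance, then rescale the guided prediction so its magnitude stays close to the conditional prediction. This is a generic rescaled-guidance sketch, not the CFGNorm node's exact code:

```python
import numpy as np

def guided_prediction(cond: np.ndarray, uncond: np.ndarray, scale: float = 2.5, eps: float = 1e-8):
    """Classifier-free guidance with a norm rescale to avoid over-transforming the image."""
    guided = uncond + scale * (cond - uncond)
    # Keep the guided output at roughly the conditional prediction's magnitude.
    return guided * (np.linalg.norm(cond) / (np.linalg.norm(guided) + eps))
```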

The generation stage uses a KSampler setup with Qwen Image Edit 2511 and the Lightning 4-step LoRA. The included sampling route uses a short-step configuration, making it suitable for fast iteration. Local editing often requires multiple tests with different masks and prompts, so a faster sampling route is practical. Users can quickly test whether the masked region, prompt, and paste-back result are working before spending more time on final polishing.
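
Typical short-step settings for a Lightning-style 4-step route look like the sketch below. Treat every value as an assumption to check against the workflow file rather than a fixed requirement:

```python
# Assumed starting point for the fast iteration route; match these to the
# values actually stored in the workflow's KSampler node.
sampler_settings = {
    "steps": 4,               # the Lightning LoRA is distilled for very few steps
    "cfg": 1.0,               # distilled routes usually run with little or no guidance
    "sampler_name": "euler",
    "scheduler": "simple",
    "denoise": 1.0,           # the masked crop is fully repainted
    "seed": 0,                # fix the seed while comparing mask and prompt variants
}
```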

After generation, QwenEditOutputExtractor extracts the generated image, mask, main image, reference data, and related outputs from the Qwen editing result. This makes the workflow more transparent. Users can inspect the main image, edited region, mask, and final result instead of only seeing a single output.

CropWithPadInfo is used to crop and manage the edited region with padding information. This is important because the model often works better when it receives a focused crop rather than the full original image. The crop gives the model more local detail and a clearer region to edit. The padding information records where that crop belongs in the original image, so the result can later be restored accurately.
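
The crop-with-padding step reduces to finding the mask's bounding box, expanding it by a margin, clamping it to the image, and remembering where it came from. A self-contained sketch with NumPy and PIL, not the CropWithPadInfo implementation:

```python
import numpy as np
from PIL import Image

def crop_with_pad(image: Image.Image, mask: Image.Image, padding: int = 64):
    """Crop a padded box around the masked area and record where it belongs."""
    ys, xs = np.nonzero(np.array(mask.convert("L")) > 0)
    if len(xs) == 0:
        raise ValueError("Mask is empty")
    left = max(int(xs.min()) - padding, 0)
    top = max(int(ys.min()) - padding, 0)
    right = min(int(xs.max()) + padding + 1, image.width)
    bottom = min(int(ys.max()) + padding + 1, image.height)
    box = (left, top, right, bottom)
    return image.crop(box), box   # the box is the "pad info" needed for paste-back
```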

The RestoreCropBox node is the key paste-back component. After the selected area has been edited, RestoreCropBox places the edited crop back onto the original full-size image. This allows the final output to preserve the original full-frame layout while replacing only the corrected area. This is the main advantage of the workflow: local editing without unnecessary full-image drift.
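
Paste-back is the inverse operation: composite the edited crop onto the original canvas at the recorded box, using the softened mask as the blend weight so the seam stays invisible. Again, this is only a sketch of the idea behind RestoreCropBox:

```python
from PIL import Image

def restore_crop_box(original: Image.Image, edited_crop: Image.Image,
                     box: tuple, soft_mask: Image.Image) -> Image.Image:
    """Paste the edited crop back into the original image, blended by the soft mask."""
    left, top, right, bottom = box
    result = original.copy()
    crop = edited_crop.resize((right - left, bottom - top))  # undo any working-resolution resize
    blend = soft_mask.convert("L").crop(box)                 # grayscale mask acts as per-pixel alpha
    result.paste(crop, (left, top), blend)
    return result
```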

The workflow also includes optional object isolation and mask-related tools, including background removal and expansion logic. These can help prepare cleaner regions when the user wants to isolate a subject, prop, or object before local editing. This makes the workflow useful not only for simple repair, but also for controlled replacement and object-level local modification.

Image Comparer is included for before-and-after inspection. This is important because local editing must be judged carefully. A result may look good at first glance but still have alignment issues, visible seams, changed colors, broken edges, or unwanted identity drift. The comparison view helps users check whether the edited region actually blends into the original image.
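
Outside ComfyUI, a quick numeric check for unwanted drift is to measure how much the pixels changed in the regions the mask never touched; a value near zero means the paste-back preserved the untouched areas. A rough sketch, assuming all three inputs are PIL images of the same size:

```python
import numpy as np

def drift_outside_mask(before, after, soft_mask) -> float:
    """Mean absolute pixel change in the regions the edit should not have touched."""
    a = np.asarray(before, dtype=np.float32)
    b = np.asarray(after, dtype=np.float32)
    m = np.asarray(soft_mask.convert("L"), dtype=np.float32) / 255.0
    outside = m < 0.01                    # pixels the mask leaves untouched
    diff = np.abs(a - b).mean(axis=-1)    # per-pixel change, averaged over RGB channels
    return float(diff[outside].mean())
```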

ImageReel and ImageReelComposit are also included for presentation. They can show the main image, reference or intermediate image, mask, and result in a clear visual layout. This is useful for Civitai posts, RunningHub demonstrations, YouTube tutorials, and Bilibili showcases, because viewers can immediately understand the full editing logic instead of only seeing the final image.
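
For posts and thumbnails outside ComfyUI, a simple side-by-side strip of the original, mask, and result can be assembled with PIL. This only mimics the presentation idea of ImageReel; it is not related to the node's code:

```python
from PIL import Image

def make_reel(*frames: Image.Image, height: int = 512) -> Image.Image:
    """Place frames side by side at a common height for a quick before/mask/after strip."""
    scaled = [f.resize((round(f.width * height / f.height), height)) for f in frames]
    strip = Image.new("RGB", (sum(f.width for f in scaled), height), "white")
    x = 0
    for f in scaled:
        strip.paste(f.convert("RGB"), (x, 0))
        x += f.width
    return strip
```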

Main features:

- Qwen Image Edit 2511 local inpainting workflow

- Redraw-and-paste-back image editing pipeline

- Mask-based local repainting

- Uses qwen_image_edit_2511_bf16.safetensors

- Uses Qwen-Edit-2511 Lightning 4-step LoRA

- Qwen 2.5 VL 7B FP8 text encoder support

- Qwen Image VAE support

- QwenEditConfigPreparer for edit configuration

- TextEncodeQwenImageEditPlusCustom for instruction-based editing

- QwenEditOutputExtractor for output management

- MaskGrow for mask expansion and edge control

- CropWithPadInfo for local crop editing

- RestoreCropBox for pasting the edited region back into the original image

- CFGNorm for more stable edit guidance

- Image Comparer for before-and-after checking

- ImageReel layout for demonstration and publishing

- Suitable for local repair, object replacement, clothing correction, face/hand fixes, and AI image post-production

Recommended use cases:

Local inpainting, masked region repainting, face correction, hand repair, clothing replacement, object replacement, accessory editing, product detail correction, background region repair, character detail modification, AI image post-production, social media cover correction, product mockup editing, commercial visual cleanup, Civitai workflow demonstration, RunningHub online workflow publishing, and before/after image editing showcases.

Suggested workflow:

Start by loading the main image. Choose an image where the overall composition is already acceptable and only one part needs to be edited. This workflow is most useful when the goal is targeted correction rather than full regeneration.

Prepare the mask carefully. The mask should cover the exact area you want to repaint, but it should also include a little extra space around the target region for natural blending. For hand repair, cover the full hand and a small part of the wrist. For clothing edits, cover the full garment area and nearby edges. For object replacement, cover the object and some surrounding contact area.

Use the mask preview to check the selected area before running the model. If the mask is too tight, expand it with MaskGrow. If the edge looks too hard, increase blur or soften the mask. Clean mask preparation is often more important than writing a long prompt.

Write a direct edit instruction. A good prompt should clearly describe what should happen inside the masked area. For example: “repair the hand with natural fingers,” “replace the damaged object with a clean black leather handbag,” “change only the shirt into a white linen shirt,” or “repaint the broken background area with matching wall texture.”

Add preservation rules to the prompt. For example: “Keep the original face, pose, background, lighting, camera angle, body shape, and all non-masked regions unchanged.” This kind of instruction helps the model understand that the task is local correction, not full-image redesign.

Run the Qwen Image Edit 2511 generation stage. The Lightning 4-step route is useful for fast testing, so you can quickly compare multiple prompt and mask versions. If the result is too weak, make the prompt more direct or expand the mask. If the result changes too much, simplify the prompt or reduce the edited region.

After generation, check the cropped result first. Make sure the local edit itself is correct before judging the full image. Look for correct shape, believable lighting, texture consistency, and whether the object or repaired area matches the original scene.

Then check the paste-back result. The final image should preserve the original layout while replacing only the target region. Look carefully at the seam between the edited area and the original image. If the seam is visible, adjust mask blur, padding, or crop size.

Use Image Comparer to inspect the before-and-after output. A successful local inpainting result should fix the selected region while keeping the original image identity stable. If the face, background, lighting, or camera angle changed too much, the edit is not controlled enough.

Use ImageReel for public demonstrations. Showing the main image, mask, and final result makes the workflow easier to understand. This is useful for Civitai posts, RunningHub workflow pages, YouTube thumbnails, and Bilibili tutorial examples.

For production use, test several versions. Local editing is usually an iterative process. The best result often comes from adjusting the mask, prompt, crop padding, and seed rather than relying on one generation. Keep the best version and compare it with the original before final export.

This workflow is designed for creators who need practical local image correction inside ComfyUI. It is not only a Qwen 2511 editing demo; it is a complete local repainting pipeline with mask preparation, instruction-based editing, crop processing, paste-back restoration, and visual comparison. It is especially useful when you want to repair or replace one part of an image without damaging the rest.

🎥 YouTube Video Tutorial

Want to know what this workflow actually does and how to start fast?
This video explains what the tool is, how to launch the workflow instantly, and shares my core design logic — no local setup, no complicated environment.
Everything starts directly on RunningHub, so you can experience it in action first.
👉 YouTube Tutorial: https://youtu.be/nlkrfEaScM0

Before you begin, I recommend watching the video thoroughly — getting the full context helps you understand the tool faster and avoid common detours.

⚙️ RunningHub Workflow

Try the workflow online right now — no installation required.
👉 Workflow: https://www.runninghub.ai/post/2018213964676603905/?inviteCode=rh-v1111

If the results meet your expectations, you can later deploy it locally for customization.

🎁 Fan Benefits: Register to get 1,000 points, plus 100 points for each daily login, and enjoy 4090 performance with 48 GB of memory!

📺 Bilibili Updates (Mainland China & Asia-Pacific)

If you’re in the Asia-Pacific region, you can watch the video below to see the workflow demonstration and creative breakdown.
📺 Bilibili Video: https://www.bilibili.com/video/BV1SK68BwE33/

☕ Support Me on Ko-fi

If you find my content helpful and want to support future creations, you can buy me a coffee ☕.
Every bit of support helps me keep creating — just like a spark that can ignite a blazing flame.
👉 Ko-fi: https://ko-fi.com/aiksk

💼 Business Contact

For collaboration or inquiries, please contact aiksk95 on WeChat.

📦 Model Resources (Quark Drive)

I will keep updating model resources on Quark Drive:
👉 https://pan.quark.cn/s/20c6f6f8d87b
These resources are mainly intended for local users, to support creation and learning.