Updated: May 11, 2026
This workflow is designed for Anima Preview2 tiled upscaling and detail enhancement. Its main purpose is to take an existing image, enlarge it to a higher usable resolution, and then refine the enlarged result tile by tile with Anima Preview2, so the final image looks cleaner, sharper, and more detailed than simple interpolation alone could produce. It is especially useful for creators who already have a good base image but want a higher-quality final version for publishing, preview display, cover images, or showcase output.
The workflow uses anima-preview2.safetensors as the main refinement model, qwen_3_06b_base.safetensors as the text encoder, and qwen_image_vae.safetensors as the VAE. It also includes a LoRA loader in the graph, showing that the upscaling route can be combined with an additional style or detail bias when needed. The overall design is not just “make the image bigger.” Instead, it builds a multi-stage pipeline: upscale first, split into tiles, describe the tiles, refine them in latent space, decode them, and then reassemble the final image.
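To make the LoRA loader's role concrete, the merge it performs can be sketched in a few lines. This is a pure-Python illustration of the standard LoRA update W' = W + strength * (up @ down), with toy matrices; real loaders apply this per layer to model tensors, and the function name here is mine, not a ComfyUI API:

```python
def apply_lora(weight, lora_down, lora_up, strength):
    """Merge a low-rank LoRA delta into a base weight matrix:
    W' = W + strength * (up @ down). Pure-Python sketch for
    illustration only; real loaders do this on tensors per layer."""
    rows, cols = len(weight), len(weight[0])
    rank = len(lora_down)
    merged = [row[:] for row in weight]
    for i in range(rows):
        for j in range(cols):
            delta = sum(lora_up[i][r] * lora_down[r][j] for r in range(rank))
            merged[i][j] += strength * delta
    return merged

# Toy rank-1 example: a 2x2 base weight nudged by a LoRA at half strength.
base = [[0.0, 0.0], [0.0, 0.0]]
up = [[1.0], [2.0]]      # 2 x rank
down = [[3.0, 4.0]]      # rank x 2
merged = apply_lora(base, down, up, strength=0.5)
```

The `strength` knob is what lets the same upscaling route carry more or less of the extra style or detail bias.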
The process starts with a source image loaded into the workflow. The image is first enlarged with a traditional upscale model, 4x_NMKD-Siax_200k. The result is then normalized with ImageScaleToTotalPixels, which targets a larger working size while keeping the image manageable. This gives the workflow a stronger high-resolution base before diffusion refinement begins.
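The normalization step amounts to rescaling toward a total pixel budget while preserving aspect ratio. A minimal sketch of that math (the node's exact rounding rule may differ; rounding each side to a multiple of 8 is my assumption, a common latent-friendly choice):

```python
import math

def scale_to_total_pixels(width, height, megapixels, multiple=8):
    """Rescale (width, height) so the area lands close to the requested
    megapixel budget, preserving aspect ratio and snapping each side to
    a multiple of `multiple` (latent-friendly sizing)."""
    target = megapixels * 1024 * 1024              # total pixel budget
    scale = math.sqrt(target / (width * height))   # uniform scale factor
    new_w = max(multiple, round(width * scale / multiple) * multiple)
    new_h = max(multiple, round(height * scale / multiple) * multiple)
    return new_w, new_h

# Example: a 1024x1536 source upscaled 4x by the model becomes 4096x6144;
# normalizing to a ~4 MP working size brings it back to a manageable area.
print(scale_to_total_pixels(4096, 6144, megapixels=4))
```

The point is that the model upscale sets the detail ceiling, while this step picks the actual working resolution the diffusion pass has to handle.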
A major feature of this workflow is tiled processing. The enlarged image is divided into tiles with TTP_Image_Tile_Batch, and the tile layout is controlled through TTP_Tile_image_size. This is important because very large images can be difficult to refine in a single pass, especially when you want detail recovery without destroying the whole composition. By splitting the image into tiles, the workflow can enhance local detail more effectively while still reconstructing the whole image afterward through TTP_Image_Assy.
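A tile splitter like TTP_Image_Tile_Batch has to place tiles so they fully cover the image, usually with some overlap so seams can be blended away at reassembly. A minimal sketch of such a layout (the parameter names and overlap handling here are illustrative assumptions, not the TTP nodes' actual implementation):

```python
def tile_origins(size, tile, overlap):
    """Start offsets along one axis so tiles of length `tile`,
    overlapping by `overlap`, cover an axis of length `size`."""
    if tile >= size:
        return [0]
    stride = tile - overlap
    origins = list(range(0, size - tile, stride))
    origins.append(size - tile)  # last tile sits flush with the edge
    return origins

def tile_boxes(width, height, tile_w, tile_h, overlap):
    """All (x1, y1, x2, y2) crop boxes covering the image."""
    return [(x, y, x + tile_w, y + tile_h)
            for y in tile_origins(height, tile_h, overlap)
            for x in tile_origins(width, tile_w, overlap)]

# e.g. a 2048x3072 working image split into 1024px tiles with 128px overlap
boxes = tile_boxes(2048, 3072, 1024, 1024, overlap=128)
```

Each box is refined independently, and the overlap regions are what the reassembly step can cross-fade to hide tile seams.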
Another useful feature is automatic tile captioning. The workflow uses WD14Tagger to analyze the image tiles and generate prompt-like tag information. Those generated tags are passed through ShowText and then into the positive CLIPTextEncode route. This means the workflow does not depend entirely on a manually written prompt. Instead, it can derive a descriptive prompt from the source content itself, which helps the model understand what is already present in the image and refine it more consistently.
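The tagger output is essentially a set of (tag, confidence) pairs that gets flattened into a comma-separated prompt string before reaching CLIPTextEncode. A small sketch of that flattening step (the 0.35 threshold and underscore-to-space cleanup are my assumptions about typical WD14 post-processing, not the node's documented behavior):

```python
def tags_to_prompt(tag_scores, threshold=0.35):
    """Keep tags at or above a confidence threshold, sort them by
    confidence, replace underscores with spaces, and join them into
    a single comma-separated prompt string."""
    kept = [(tag, score) for tag, score in tag_scores.items() if score >= threshold]
    kept.sort(key=lambda item: item[1], reverse=True)
    return ", ".join(tag.replace("_", " ") for tag, _ in kept)

# Hypothetical tagger output for one tile.
scores = {"1girl": 0.98, "long_hair": 0.91, "outdoors": 0.40, "blurry": 0.10}
prompt = tags_to_prompt(scores)  # low-confidence "blurry" is dropped
```

Deriving the positive prompt per tile this way keeps the refinement anchored to what each tile actually contains, rather than one global hand-written description.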
The negative prompt is focused on suppressing common defects such as low quality, blur, bad anatomy, hand errors, extra fingers, deformed faces, text, watermark, logo, and JPEG artifacts. The refinement stage then uses VAEEncode, KSampler, and VAEDecodeTiled. The KSampler is configured as a relatively gentle enhancement pass, with moderate steps, low CFG, and low denoise. That is important because the goal is not to redraw the whole image from scratch. The goal is to preserve the original composition and identity while recovering texture, edges, and finer details.
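The effect of a low denoise value can be illustrated with a toy blend: the sampler's starting point mixes the input latent with fresh noise in proportion to the denoise strength. Real samplers follow a variance-preserving noise schedule rather than this linear blend, and the 0.3 used below is my illustrative value, not one read from the graph:

```python
import random

def partial_denoise_start(latent, denoise, rng=None):
    """Toy illustration of a partial-denoise start point: blend the
    input latent with fresh Gaussian noise in proportion to `denoise`.
    Low denoise keeps the start point close to the input, so composition
    and identity survive while textures and edges are re-resolved;
    denoise near 1.0 would effectively redraw the image."""
    rng = rng or random.Random(0)
    return [x * (1.0 - denoise) + rng.gauss(0.0, 1.0) * denoise for x in latent]

# A gentle enhancement pass stays close to the original latent.
start = partial_denoise_start([1.0, -2.0, 0.5], denoise=0.3)
```

This is why the KSampler settings matter more than raw step count here: the denoise value bounds how far the refinement pass is allowed to drift from the source tile.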
This workflow is ideal for anime illustrations, character art, cover images, social media assets, Civitai previews, and RunningHub publishing materials. In short, it is a practical high-resolution finishing workflow built around Anima Preview2, tile splitting, automatic tag assistance, and tiled latent refinement. If you want to see how the tile logic, tagger, upscale model, and final reconstruction work together, make sure to watch the full video tutorial linked below.
⚙️ Try the Workflow Online
👉 Workflow: https://www.runninghub.ai/post/2033542982825152514/?inviteCode=rh-v1111
Open the link above to run the workflow directly online and view the generation results in real time.
If the results meet your expectations, you can also deploy it locally for further customization.
🎁 Fan Benefits: Register now to get 1000 points, plus 100 daily login points — enjoy 4090-level performance and 48 GB of compute power!
📺 Bilibili Updates (Mainland China & Asia-Pacific)
If you are in Mainland China or the Asia-Pacific region, you can watch the video below for workflow demos and a detailed creative breakdown.
📺 Bilibili Video: https://www.bilibili.com/video/BV1Q1w1zKEwk/
I will continue updating model resources on Quark Drive:
👉 https://pan.quark.cn/s/20c6f6f8d87b
These resources are mainly prepared for local users, making creation and learning more convenient.

