
Anima Preview2 Image-to-Image Workflow

Updated: May 11, 2026

Tags: character

File: Archive (Other), 2.94 KB, 1 variant available

Type: Workflows

Stats: 27 Reviews

Published: May 11, 2026

Base Model: Anima

Hash: AutoV2 1B4C38381B
Creator: AIKSK

This workflow is designed for Anima Preview2 image-to-image generation, giving creators a clean and efficient way to transform an input image into a more polished anime-style result while still preserving the original composition and core visual structure. Unlike a pure text-to-image workflow, this setup begins with a source image, analyzes it, converts it into latent space, and then regenerates it through Anima Preview2 with prompt guidance. This makes it especially useful for style transfer, anime enhancement, visual cleanup, character refinement, and turning an existing image into a more cinematic anime illustration.

The workflow uses anima-preview2.safetensors as the main generation model, qwen_3_06b_base.safetensors as the text encoder, and qwen_image_vae.safetensors as the VAE. It starts by loading a source image, then rescales it with image_scale_pixel_v2 so the input is normalized to a model-friendly pixel target while keeping alignment stable. After that, the image is encoded into latent space with VAEEncode, which becomes the structural starting point for the generation stage. This design makes the workflow ideal for users who want to preserve the general framing, subject position, and image layout rather than generating from scratch.
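The rescale-then-encode step can be sketched in plain Python. This is a minimal sketch of the kind of normalization a node like image_scale_pixel_v2 performs before VAEEncode; the 1024×1024 pixel target and the multiple-of-8 alignment (the VAE downsamples by a factor of 8, so both sides must divide evenly) are assumptions for illustration, not values read from the workflow file:

```python
import math

def normalize_resolution(width, height, target_pixels=1024 * 1024, multiple=8):
    """Rescale (width, height) so the total area lands near target_pixels,
    preserving aspect ratio and snapping each side to a multiple of 8 so
    the VAE-encoded latent has clean, aligned dimensions."""
    scale = math.sqrt(target_pixels / (width * height))
    new_w = max(multiple, round(width * scale / multiple) * multiple)
    new_h = max(multiple, round(height * scale / multiple) * multiple)
    return new_w, new_h
```

For example, a 1920×1080 source normalizes to 1368×768: roughly the same area as 1024×1024, the same aspect ratio, and both sides divisible by 8 before encoding.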

One of the useful features in this workflow is the WD14 tagger support. The loaded image is automatically analyzed to produce tag-style prompt information, which can then be combined with the manual positive prompt. In the provided setup, the base prompt includes a beach-sunset singing-girl concept, showing how this workflow can mix image-derived tags with user-written prompt direction. This is practical for image-to-image generation because it reduces prompt-writing difficulty and helps the model better understand the existing content of the input image.
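The tag-merging idea can be illustrated with a short sketch. The merge_prompts helper and its drop list are hypothetical (the actual workflow simply feeds the tagger output alongside the manual positive prompt), but deduplication shows why combining the two reduces prompt-writing effort without repeating content the user already wrote:

```python
def merge_prompts(user_prompt, wd14_tags, drop=("rating:safe",)):
    """Combine a hand-written prompt with WD14 tagger output,
    skipping unwanted tags and tags already present in the prompt."""
    existing = {p.strip().lower() for p in user_prompt.split(",")}
    extra = [t for t in wd14_tags
             if t.lower() not in existing and t.lower() not in drop]
    return user_prompt + ", " + ", ".join(extra) if extra else user_prompt
```

Here the image-derived tags fill in details the user did not type, while duplicates like a repeated "1girl" are dropped rather than stacked.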

The negative prompt is focused on common quality issues, including low quality, blur, bad anatomy, bad hands, extra fingers, fused fingers, deformed faces, text, watermark, logo, and JPEG artifacts. This helps keep the result clean during regeneration, especially when the source image already contains imperfect details or when the user wants a sharper anime-style finish.

The main generation stage uses a KSampler with 40 steps, a CFG of about 3, the DPM++ 2M SDE GPU sampler, the SGM Uniform scheduler, and a denoise of about 0.75. The denoise value is the key setting: it makes the workflow more flexible than a strict preservation pipeline, allowing the model to substantially re-render the image while still respecting the source structure. In practice, the workflow keeps the pose and composition of the original image but improves the linework, lighting, facial detail, atmosphere, and overall anime rendering quality.
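What a denoise of 0.75 means in practice can be shown with a small sketch. This is a simplification: real samplers operate over the sigma values produced by the SGM Uniform scheduler rather than raw step indices, but the proportion is the point — roughly the first quarter of the schedule is skipped, so the source latent is only partially noised before regeneration:

```python
def img2img_schedule(steps=40, denoise=0.75):
    """With denoise < 1.0, the sampler skips the earliest (noisiest)
    part of the schedule: the source latent is noised only up to step
    `start`, then denoised from there to the end. Returns the step
    indices that actually run."""
    start = round(steps * (1.0 - denoise))
    return list(range(start, steps))
```

With 40 steps and denoise 0.75, only 30 steps run, starting from step 10; at denoise 1.0 all 40 steps run from pure noise, which is why lower denoise values preserve more of the source image.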

After sampling, the result is decoded through the VAE, previewed, and saved directly. This makes the workflow suitable for anime restyling, character cleanup, visual enhancement, cover-image polishing, concept refinement, and rapid RunningHub or Civitai demonstrations. To see how the source image, WD14 tagger, prompt guidance, and Anima Preview2 regeneration work together, watch the full video tutorial linked below.

⚙️ Try the Workflow Online

👉 Workflow: https://www.runninghub.ai/post/2033542974574956546?inviteCode=rh-v1111

Open the link above to run the workflow directly online and view the generation results in real time.

If the results meet your expectations, you can also deploy it locally for further customization.

🎁 Fan Benefits: Register now to get 1000 points, plus 100 daily login points — enjoy 4090-level performance and 48 GB of powerful compute!

📺 Bilibili Updates (Mainland China & Asia-Pacific)

If you are in Mainland China or the Asia-Pacific region, you can watch the video below for workflow demos and a detailed creative breakdown.

📺 Bilibili Video: https://www.bilibili.com/video/BV1Q1w1zKEwk/

I will continue updating model resources on Quark Drive:

👉 https://pan.quark.cn/s/20c6f6f8d87b

These resources are mainly prepared for local users, making creation and learning more convenient.
