Z-Image-i2L (Image to LoRA) Fast LoRA Training + ControlNet Testing + Upscale Workflow

Updated: May 10, 2026

Tag: character
File: Archive (Other), 11.16 KB
Type: Workflows
Reviews: 47
Published: May 10, 2026
Base Model: ZImageTurbo
Hash (AutoV2): A8D439B2F2
Creator: AIKSK

This workflow is an expanded Z-Image-i2L production pipeline that combines fast Image-to-LoRA generation, ControlNet structure testing, and high-resolution tiled upscaling into one complete ComfyUI graph. It is designed for creators who want not only to generate a quick LoRA from reference images, but also to immediately test that LoRA under real production conditions and then push the result into a more polished final output.

The first stage focuses on fast LoRA creation. Multiple reference images are loaded and combined into a training image batch, then passed into the RunningHub Z-Image-i2L system. This allows the workflow to generate a lightweight Z-Image LoRA from a small group of images without requiring a traditional local training setup, dataset folder preparation, caption files, or command-line configuration. It is especially useful for quickly capturing a character identity, fashion style, product look, object concept, creature design, or consistent visual aesthetic.
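The batching step above can be sketched in plain NumPy. Everything here is illustrative (function names, sizes, and the nearest-neighbor resize), not the actual RunningHub node API; it only shows the shape the i2L stage expects:

```python
import numpy as np

def resize_nearest(img, size):
    """Nearest-neighbor resize for an (H, W, 3) array."""
    h, w = size
    ys = (np.arange(h) * img.shape[0] // h).astype(int)
    xs = (np.arange(w) * img.shape[1] // w).astype(int)
    return img[ys][:, xs]

def make_training_batch(images, size=(512, 512)):
    """Resize reference images to a common resolution and stack them into
    one (N, H, W, 3) float batch in [0, 1] -- conceptually what a ComfyUI
    image-batch node prepares before Image-to-LoRA training."""
    return np.stack(
        [resize_nearest(im.astype(np.float32) / 255.0, size) for im in images]
    )

# Hypothetical reference set: three random uint8 "photos" of mixed sizes.
rng = np.random.default_rng(0)
refs = [rng.integers(0, 256, (h, w, 3), dtype=np.uint8)
        for h, w in [(480, 640), (600, 600), (720, 540)]]
batch = make_training_batch(refs)
print(batch.shape)  # (3, 512, 512, 3)
```

In the real graph this batching is a single node; the point is that no dataset folders or caption files are involved, only an in-memory image batch.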

After the LoRA is generated, the workflow immediately saves it and loads it back into Z-Image Base for testing. This is the key advantage of the pipeline: training and validation happen in the same graph. The user can quickly see whether the generated LoRA actually affects the output, whether it preserves the target identity, whether it introduces artifacts, and whether the strength needs to be adjusted. This makes the workflow much more practical than a training-only setup.
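Conceptually, loading the generated LoRA back into Z-Image Base merges a low-rank delta into each base weight matrix. A minimal sketch of the standard LoRA merge follows; the function name and arguments are illustrative, not the loader node's real interface:

```python
import numpy as np

def apply_lora(base_weight, lora_down, lora_up, strength=1.0, alpha=None):
    """Standard LoRA merge: W' = W + strength * (alpha / rank) * (up @ down).
    `strength` is the knob the workflow lets you adjust during testing;
    names here are illustrative, not the actual node API."""
    rank = lora_down.shape[0]
    if alpha is None:
        alpha = rank
    return base_weight + strength * (alpha / rank) * (lora_up @ lora_down)

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))
down = rng.normal(size=(8, 64))   # rank-8 "down" projection
up = rng.normal(size=(64, 8))     # rank-8 "up" projection

merged = apply_lora(W, down, up, strength=0.8)
# strength=0 leaves the base model untouched
assert np.allclose(apply_lora(W, down, up, strength=0.0), W)
```

This is why adjusting the LoRA strength in the test stage is cheap: it rescales the same low-rank delta rather than retraining anything.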

The second stage adds ControlNet testing. A structure reference image is processed through DepthAnythingV2Preprocessor to create a depth map, then applied through Z-Image Fun ControlNet Union. This lets the newly generated LoRA be tested under controlled composition, depth, layout, and spatial guidance. A LoRA may look fine in a basic text-to-image test, but fail when the camera angle or scene structure becomes more demanding. This workflow helps reveal that immediately.
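DepthAnythingV2 itself is a learned network, but the packaging step between the depth model and the ControlNet "apply" node can be illustrated: the raw depth map is normalized to [0, 1] and expanded into an image-shaped hint. All names below are hypothetical:

```python
import numpy as np

def depth_to_hint(depth, invert=False):
    """Normalize a raw depth map to [0, 1] and expand it to a 3-channel
    hint image -- roughly what a depth preprocessor hands to a ControlNet
    'apply' node. The learned depth estimation itself is not shown."""
    d = depth.astype(np.float32)
    d = (d - d.min()) / max(float(d.max() - d.min()), 1e-8)
    if invert:
        d = 1.0 - d
    return np.repeat(d[..., None], 3, axis=-1)

raw = np.array([[0.2, 5.0], [10.0, 0.2]])   # toy metric depth values
hint = depth_to_hint(raw)
print(hint.shape)   # (2, 2, 3)
```

Because the hint only encodes relative depth, the same structure reference can constrain composition regardless of the absolute scale of the scene.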

The generation section uses Z-Image Base with qwen_3_4b text encoding, AE VAE, ControlNet guidance, SplitSigmas, DetailDaemonSamplerNode, CFGGuider, and SamplerCustomAdvanced. This gives the workflow a more controlled two-stage sampling structure, where the early phase builds the main layout and the later phase refines the image. It is useful for evaluating prompt compatibility, LoRA strength, structural stability, and final image coherence.
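The two-stage structure is easy to show on a Karras noise schedule, the schedule ComfyUI samplers commonly use: SplitSigmas cuts one sigma list into a high-noise segment (layout) and a low-noise segment (refinement). Step count, split point, and sigma range below are illustrative:

```python
import numpy as np

def karras_sigmas(n, sigma_min=0.03, sigma_max=14.6, rho=7.0):
    """Karras et al. noise schedule; samplers expect a trailing zero."""
    t = np.linspace(0, 1, n)
    sigmas = (sigma_max ** (1 / rho)
              + t * (sigma_min ** (1 / rho) - sigma_max ** (1 / rho))) ** rho
    return np.append(sigmas, 0.0)

def split_sigmas(sigmas, step):
    """What SplitSigmas does conceptually: one schedule becomes a
    high-noise stage and a low-noise stage sharing a boundary sigma."""
    return sigmas[: step + 1], sigmas[step:]

sigmas = karras_sigmas(20)
high, low = split_sigmas(sigmas, step=12)
print(len(high), len(low))   # 13 9
assert high[-1] == low[0]    # the two stages meet at the same sigma
```

The early (high-sigma) segment decides layout under ControlNet guidance; the late segment, where DetailDaemon perturbs sigmas, refines texture without disturbing composition.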

The third stage is high-resolution enhancement. After the controlled LoRA test image is generated, the workflow sends the result into an upscale pipeline. It uses a traditional upscale model, then scales the image toward a target megapixel size. The image is split into tiles with TTP tile tools, each tile can be captioned with Florence2, refined through latent processing, decoded with tiled VAE decoding, and finally reconstructed into a complete high-resolution image. This makes the workflow suitable not only for LoRA testing, but also for producing sharper Civitai showcase images, RunningHub examples, thumbnails, and final publishing assets.
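The split-and-reassemble round trip at the heart of the tiled stage can be sketched as follows. Tile size, overlap, and function names are illustrative, not the TTP node API; the per-tile Florence2 captioning and latent refinement are omitted:

```python
import numpy as np

def tile_image(img, tile=256, overlap=32):
    """Split an (H, W, C) image into overlapping tiles, returning the
    tiles plus their (y, x) origins."""
    h, w = img.shape[:2]
    step = tile - overlap
    tiles, origins = [], []
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            y0, x0 = min(y, h - tile), min(x, w - tile)
            tiles.append(img[y0:y0 + tile, x0:x0 + tile])
            origins.append((y0, x0))
    return tiles, origins

def merge_tiles(tiles, origins, shape):
    """Reassemble tiles, averaging wherever tiles overlap."""
    out = np.zeros(shape, dtype=np.float64)
    count = np.zeros(shape[:2] + (1,), dtype=np.float64)
    for t, (y, x) in zip(tiles, origins):
        out[y:y + t.shape[0], x:x + t.shape[1]] += t
        count[y:y + t.shape[0], x:x + t.shape[1]] += 1
    return out / count

rng = np.random.default_rng(0)
img = rng.random((512, 512, 3))
tiles, origins = tile_image(img)
restored = merge_tiles(tiles, origins, img.shape)
assert np.allclose(restored, img)   # lossless split/merge round trip
```

The overlap plus averaging is what hides tile seams; in the real workflow each tile is additionally refined in latent space before the merge, so the reassembled image is sharper than the input rather than identical to it.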

In short, this is not just an Image-to-LoRA workflow. It is a full LoRA creation, ControlNet validation, and upscale finishing pipeline. If you want to see how the full node structure works, how the generated LoRA is tested with ControlNet, and how the final upscale stage improves the output, watch the full video tutorial from the YouTube link above.

⚙️ Try the Workflow Online

👉 Workflow: https://www.runninghub.ai/post/2023308180264067074/?inviteCode=rh-v1111

Open the link above to run the workflow directly online and view the generation results in real time.

If the results meet your expectations, you can also deploy it locally for further customization.

🎁 Fan Benefits: Register now to get 1,000 points, plus 100 points per daily login, and enjoy 4090-level performance with 48 GB of compute!

📺 Bilibili Updates (Mainland China & Asia-Pacific)

If you are in Mainland China or the Asia-Pacific region, you can watch the video below for workflow demos and a detailed creative breakdown.

📺 Bilibili Video: https://www.bilibili.com/video/BV1qXZMBwEC7/

I will continue updating model resources on Quark Drive:

👉 https://pan.quark.cn/s/20c6f6f8d87b

These resources are mainly prepared for local users, making creation and learning more convenient.
