
Using img2img effectively with Canny Controlnets



You keep seeing all these INCREDIBLE pictures with no descriptions. How do they do that?

Enter img2img with Canny Controlnets. Transform ANY image with astounding accuracy.

This guide assumes you are using AUTOMATIC1111.

What are ControlNets?

Think of ControlNets like the guide strings on a puppet: they help decide where the puppet (here, the generated image) should move. Several ControlNets are available for Stable Diffusion, but this guide focuses only on the "canny" ControlNet. The official page for Canny is available here.


  1. Install the v1.1 ControlNet extension (linked here) under the "Extensions" tab -> "Install from URL"

  2. Download the control_v11p_sd15_canny.pth and control_v11p_sd15_canny.yaml files here.

  3. Place the files in stable-diffusion-webui\models\ControlNet

  4. Reload the UI. After reloading, you should see a section for "ControlNet"

Using Canny with Img2Img

  1. Select the "img2img" tab in AUTOMATIC1111

  2. Enter your prompt and negative prompt

  3. Select sampler and number of steps

  4. Put your source image in the img2img field (not the ControlNet image field)

  5. Set width & height to the same size as the input image

  6. Controlnet settings

    1. Enable: Checked

    2. Guess Mode: Checked (only for pre-1.1)

      • If you are using ControlNet 1.1, use the "Pixel Perfect" checkbox to automatically set annotator resolution and canvas size

    3. Preprocessor: canny

    4. Model: control_canny-fp16

    5. Annotator resolution: 768

    6. Canvas width/height: same as the source image

    7. High and low threshold: Default

      • These affect how sensitive the annotator is to gradient changes. The defaults are usually acceptable, but poor lighting may require additional fine-tuning. Adjust one slider at a time, in increments of 10, until you are satisfied with the wireframe.

    8. Higher denoising values will apply the prompt more strongly (only for pre-1.1)

      • In 1.1, this has been simplified to buttons that emphasize Balanced, Prompt, or ControlNet

  7. Generate!

How does this work?

Canny draws outlines around the shapes in the input image. The ControlNet then combines these outlines with your prompt to guide the final diffusion image.
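To make the high/low threshold setting concrete, here is a toy NumPy sketch of the hysteresis idea behind those two sliders. It is not a full Canny implementation (it omits smoothing, non-maximum suppression, and edge tracking); it only shows the classification step the sliders control:

```python
import numpy as np

def classify_gradients(grad, low=100, high=200):
    """Label each gradient magnitude: 0 = suppressed, 1 = weak, 2 = strong."""
    grad = np.asarray(grad)
    labels = np.zeros(grad.shape, dtype=int)
    labels[grad >= low] = 1   # weak edge: kept only if linked to a strong one
    labels[grad >= high] = 2  # strong edge: always kept
    return labels
```

Raising the low threshold suppresses faint edges caused by noise or poor lighting; lowering it picks up more fine detail. This is why adjusting one slider at a time makes the effect easier to see in the wireframe.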

txt2img usage

This technique works similarly in txt2img by putting the image in the ControlNet area instead, but it retains less of the original image in the final result. A comparison of the same prompt/seed/sampler/checkpoint in img2img vs. txt2img is below:



Next Steps

Learn how to use mov2mov to put everything together. Check out the guide here!