Expanding the borders of an image within ComfyUI is straightforward, and you have a couple of options available: basic outpainting through native nodes or with the experimental ComfyUI-LaMA-Preprocessor custom node.
Workflows presented in this article are available to download from the Prompting Pixels site or in the sidebar.
If you prefer to watch this process, I demonstrate the basic outpainting method in this video:
Note: While you can outpaint an image in ComfyUI, using Automatic1111 WebUI or Forge along with ControlNet (inpaint+lama), in my opinion, produces better results.
Basic Outpainting
The process for outpainting is similar in many ways to inpainting. For starters, you'll want to use an inpainting model to outpaint an image, as these models are trained on partial-image datasets. If an inpainting model isn't available, you can use any model that generates a style similar to the image you're looking to outpaint.
Dive Deeper: If you're still wondering why I use an inpainting model rather than a standard generation model, it's because this process adds a mask to the image, effectively turning it into a partial image that needs to be filled in.
1. Padding the Image
When outpainting in ComfyUI, you'll pass your source image through the Pad Image for Outpainting node, found in the Add Node > Image > Pad Image for Outpainting menu.
The node lets you expand the image in any direction and specify how much feathering to apply along the edge. Feathering applies only to the edges of the source image, so a high feathering value will result in the model overwriting some of the original image. However, this may help reduce the appearance of a hard edge.
To demonstrate, here's what various feathering values look like:
The max feathering value, as defined in the code, cannot exceed half the size of the shorter image dimension. So, for example, if your image is 512x768, the max feathering value is 255.
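As a rough sketch of that limit (an approximation of the behavior described above, not the exact ComfyUI source), the cap works out to just under half of the shorter side:

```python
# Rough sketch of the feathering cap described above (not ComfyUI's exact code).
# The feathering value is effectively limited to just under half of the
# shorter image dimension, so it never crosses the image's midpoint.

def max_feathering(width: int, height: int) -> int:
    # Half of the shorter side, minus one; e.g. 512x768 -> 255
    return min(width, height) // 2 - 1

print(max_feathering(512, 768))  # 255
```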
2. Encoding the Image
Once all variables are set, the image is passed through the VAE Encode (for Inpainting) node, found in the Add Node > Latent > Inpaint > VAE Encode (for Inpainting) menu. This node takes the original image, VAE, and mask, and produces a latent-space representation of the image, which is then modified in the KSampler along with the positive and negative prompts.
The grow_mask_by setting applies a small amount of padding to the mask to provide better and more consistent results. The default value of 6 is suitable for most cases.
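For intuition, here is a hedged sketch of what a grow_mask_by-style dilation does (illustrative only, not ComfyUI's actual implementation): each pass of a 3x3 max pool expands the masked region by roughly one pixel.

```python
# Illustrative mask "growing" via repeated 3x3 max pooling.
# Not ComfyUI's exact code, just the general idea behind grow_mask_by.
import torch
import torch.nn.functional as F

def grow_mask(mask: torch.Tensor, grow_by: int = 6) -> torch.Tensor:
    # mask: (1, 1, H, W) tensor with values in [0, 1]
    for _ in range(grow_by):
        mask = F.max_pool2d(mask, kernel_size=3, stride=1, padding=1)
    return mask

mask = torch.zeros(1, 1, 64, 64)
mask[..., 20:40, 20:40] = 1.0
print(grow_mask(mask).sum() > mask.sum())  # True: the masked area expanded
```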
Growing the mask while outpainting may introduce harsh edges, as we can see in the side-by-side comparison below:
3. Prompts & KSampler
The positive and negative prompts give the model direction for what to generate. Since it already knows the general context of the image from the latent space, it will produce an output even if the prompt is completely unrelated to the image, but that usually isn't what you want.
Instead, the prompts should describe how you envision the expanded image. In this example, if I wanted the woman to have blonde hair and be wearing glasses, the prompts would change to reflect that.
Here are some general tips for other practical uses:
Expanding the Image: If you wanted to zoom out from the image, then you would use the original prompt that you used to generate the image. If this is a photograph, then just describe the scene in the prompt, and the model will generate the rest.
Applying Different Styles: Transforming the style of an image is difficult and may be better suited for image-to-image instead. However, if experimenting, set the max feathering value to the limit (as described above) and then set the prompt to what you want the outpainted area to look like.
The KSampler values will depend on the model you are using. Some will produce better results depending on the sampling method, steps, and CFG value. Be sure to review the model card by the original author.
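To make the wiring concrete, here's a hedged sketch of this basic outpainting graph in ComfyUI's API (JSON) format, queued against a local instance. The node class names, checkpoint filename, prompt text, and sampler settings below are assumptions and may differ in your ComfyUI version; export your own graph with the "Save (API Format)" option to get the exact structure.

```python
# Hedged sketch of the outpainting node graph in ComfyUI's API (JSON) format.
# Class names, the checkpoint filename, and sampler values are assumptions;
# verify them against a graph exported from your own ComfyUI install.
import json
import urllib.request

workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd-v1-5-inpainting.ckpt"}},  # assumed model file
    "2": {"class_type": "LoadImage", "inputs": {"image": "source.png"}},
    "3": {"class_type": "ImagePadForOutpaint",
          "inputs": {"image": ["2", 0], "left": 0, "top": 0,
                     "right": 256, "bottom": 0, "feathering": 40}},
    "4": {"class_type": "VAEEncodeForInpaint",
          "inputs": {"pixels": ["3", 0], "vae": ["1", 2],
                     "mask": ["3", 1], "grow_mask_by": 6}},
    "5": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1], "text": "woman in a gym, natural light"}},
    "6": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1], "text": "blurry, deformed"}},
    "7": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["5", 0], "negative": ["6", 0],
                     "latent_image": ["4", 0], "seed": 0, "steps": 25, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
    "8": {"class_type": "VAEDecode", "inputs": {"samples": ["7", 0], "vae": ["1", 2]}},
    "9": {"class_type": "SaveImage",
          "inputs": {"images": ["8", 0], "filename_prefix": "outpaint"}},
}

# Queue the graph against a locally running ComfyUI instance (default port 8188).
req = urllib.request.Request("http://127.0.0.1:8188/prompt",
                             data=json.dumps({"prompt": workflow}).encode(),
                             headers={"Content-Type": "application/json"})
urllib.request.urlopen(req)
```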
4. Reviewing the Outputs
Using this method, here's where the image started:
Source: https://unsplash.com/photos/a-woman-in-a-green-sports-bra-top-and-leggings-6EKXMH0Y7QY
And with this padding of the image, it produced the rest:
Outpainting using ControlNet
The experimental custom node, ComfyUI-LaMA-Preprocessor, allows you to outpaint an image using a ControlNet model.
This follows the same process available within Automatic1111 WebUI, which, in my humble opinion, provides better results.
To use ComfyUI-LaMA-Preprocessor, you'll follow an image-to-image workflow and add in the following nodes: Load ControlNet Model, Apply ControlNet, and lamaPreprocessor:
When setting up the lamaPreprocessor node, you'll decide whether you want horizontal or vertical expansion and then set the number of pixels to expand the image by. Unfortunately, you cannot choose a specific edge to expand from.
So, a vertical expansion of 400 pixels expands the top and bottom edges of the image by 200 pixels each. The same applies to horizontal expansion.
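As a quick illustration of that split (a hypothetical helper for clarity, not part of the lamaPreprocessor node itself):

```python
# Illustration of how a total expansion amount is split across opposite edges.
# Hypothetical helper, not code from the lamaPreprocessor node.
def split_expansion(total_pixels: int) -> tuple[int, int]:
    # e.g. 400 vertical -> 200 added to the top and 200 to the bottom
    half = total_pixels // 2
    return half, total_pixels - half

print(split_expansion(400))  # (200, 200)
```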
Here's an example of an output using this method (same source image):
Note that the color of the outfit changed quite a bit from the source.
In testing, the results varied: some looked pretty good, while others had a noticeable seam in the image. Again, this is an experimental node, but it's worth checking out.
Lastly, this requires an NVIDIA GPU.