Step-by-Step Guide Series:
HiDream - INPAINT Workflow
This article accompanies this workflow: link
This guide is intended to be as simple as possible, and certain terms will be simplified.
Workflow description:
The aim of this workflow is to modify selected areas of an existing image from a text prompt (inpainting), all in a simple window.
Prerequisites:
📂 Files:
24 GB VRAM: base
16 GB VRAM: Q8_0
12 GB VRAM: Q5_K_S
<12 GB VRAM: Q4_K_S
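The VRAM-to-model mapping above can be sketched as a small helper. The function name and the idea of automating the choice are my own; the thresholds and file names come straight from the list above:

```python
def pick_model(vram_gb: float) -> str:
    """Map available VRAM (in GB) to the recommended HiDream model file,
    following the thresholds listed above."""
    if vram_gb >= 24:
        return "hidream_i1_dev_fp8.safetensors"   # base version
    if vram_gb >= 16:
        return "hidream-i1-dev-Q8_0.gguf"
    if vram_gb >= 12:
        return "hidream-i1-dev-Q5_K_S.gguf"
    return "hidream-i1-dev-Q4_K_S.gguf"

print(pick_model(24))  # base fp8 model
print(pick_model(10))  # Q4_K_S quant
```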
For the base version:
Model: hidream_i1_dev_fp8.safetensors
in ComfyUI\models\diffusion_models
For the GGUF version:
GGUF model: hidream-i1-dev-QX_K_S.gguf
in ComfyUI\models\unet
VAE: ae.safetensors
in ComfyUI\models\vae
CLIP: clip_l_hidream.safetensors, clip_g_hidream.safetensors, t5xxl_fp8_e4m3fn_scaled.safetensors, and llama_3.1_8b_instruct_fp8_scaled.safetensors
in ComfyUI\models\clip
📦 Custom Nodes:
Don't forget to close the workflow and open it again once the nodes have been installed.

Usage:

In this new version of the workflow, everything is organized by color:
Green is what you want to create, also called the prompt,
Red is what you don't want,
Yellow is all the parameters to adjust,
Pale blue is for feature-activation nodes,
Blue is for the model files used by the workflow,
Purple is for LoRAs.
We will now see how to use each node:
Write what you want in the Positive node:

Write what you don't want in the Negative node:

Select the image format:

Choose the guidance level:

I recommend starting at 1 for HiDream. The lower the number, the more freedom you leave the model. The higher the number, the more closely the image will match what you “strictly” asked for.
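For background, guidance in many diffusion pipelines follows the classifier-free guidance (CFG) formula, which blends the model's prompt-conditioned and unconditional predictions. Whether HiDream applies it exactly this way depends on the model (dev-style models often embed guidance instead), so treat this as a schematic sketch with scalar stand-ins for the real tensors, not the workflow's actual code:

```python
def apply_guidance(uncond: float, cond: float, scale: float) -> float:
    """Classifier-free guidance: push the prediction away from the
    unconditional result, toward the prompt-conditioned one."""
    return uncond + scale * (cond - uncond)

print(apply_guidance(0.0, 1.0, 1.0))  # scale 1 -> just the conditional prediction
print(apply_guidance(0.0, 1.0, 4.0))  # higher scale exaggerates the prompt's pull
```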
Choose a scheduler and number of steps:

I recommend ddim_uniform and between 20 and 30 steps. The higher the number, the better the quality, but the longer it takes to generate an image.
The denoise parameter controls how much influence your base image has on the new one.
To put it simply, 1 will give you a completely new image; 0 will give you exactly the same image as the original. I recommend starting around 0.8 and adjusting from there.
Choose a sampler:

I recommend euler.
Define a seed or let ComfyUI generate one:

Choose whether you want to increase the level of detail:

It's advisable to start without this option; then, once you have an image you like, keep the same seed and try increasing the detail.
Add as many LoRAs as you want to use, and configure each one:

If you don't know what a LoRA is, just don't activate any.
Load your base image:

The new image will be exactly the same size as the original, so be careful not to make it too big, or generation will be very slow.
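If your base image is large, you may want to downscale it before loading. A minimal sketch of the arithmetic; the 1-megapixel budget is my own choice, not a workflow requirement, and rounding to multiples of 8 is a common convention for latent-diffusion models:

```python
def fit_within(width: int, height: int, max_pixels: int = 1024 * 1024):
    """Return (width, height) scaled to stay under max_pixels,
    preserving aspect ratio and rounding down to multiples of 8."""
    pixels = width * height
    scale = 1.0 if pixels <= max_pixels else (max_pixels / pixels) ** 0.5
    new_w = max(8, int(width * scale) // 8 * 8)
    new_h = max(8, int(height * scale) // 8 * 8)
    return new_w, new_h

print(fit_within(4000, 3000))  # a 12 MP photo is scaled well under the budget
print(fit_within(512, 512))    # already small enough: unchanged
```

You can then resize the file in any image editor (or with Pillow) to the computed dimensions before loading it into the workflow.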
Right-click on the image and select “Open in mask editor”.

Use your mouse to select the areas of the image to be modified. Then click on "Save".

Choose your model:

Depending on whether you've chosen the base or GGUF workflow, this setting changes. I personally use the GGUF Q8_0 version.
Select the 4 CLIP files needed:

Activate and select an upscaler (optional):

Now you're ready to create your image.
Just click on the “Queue” button to start:
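For the curious: the Queue button can also be triggered from a script through ComfyUI's HTTP API, by POSTing the workflow (exported via "Save (API Format)") to the /prompt endpoint. A hedged sketch; the address is ComfyUI's default but may differ on your setup, and the node id in the usage comment is hypothetical:

```python
import json
import urllib.request

def build_payload(workflow: dict) -> bytes:
    """Wrap an API-format workflow in the JSON body /prompt expects."""
    return json.dumps({"prompt": workflow}).encode("utf-8")

def queue_prompt(workflow: dict, server: str = "http://127.0.0.1:8188") -> dict:
    """POST the workflow to ComfyUI's /prompt endpoint -- the same thing
    the Queue button does. Requires a running ComfyUI instance."""
    req = urllib.request.Request(
        f"{server}/prompt",
        data=build_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Usage (with ComfyUI running):
#   workflow = json.load(open("my_inpaint_workflow_api.json"))  # hypothetical filename
#   queue_prompt(workflow)
```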


Once rendering is complete, the image appears in the “image viewer” node.
If you have enabled upscaling, a slider will show the base image and the upscaled version.
This guide is now complete. If you have any questions or suggestions, don't hesitate to post a comment.
