Step-by-Step Guide Series:
ComfyUI - PuLID Workflow
This article accompanies this workflow: link
Workflow description:
This workflow extracts the facial features from a reference photo and uses them to generate a new image, all within a single, simple layout.
Prerequisites:
📂Files:
Clip_vision: sigclip_vision_patch14_384.safetensors
in ComfyUI/models/clip_vision
REDUX: flux1-redux-dev.safetensors
in ComfyUI/models/style_models
PuLID: pulid_flux_v0.9.0.safetensors
in ComfyUI/models/pulid
📂Files for the "base" version:
Model: flux1-dev-x.safetensors
in ComfyUI/models/unet
VAE: ae.safetensors
in ComfyUI/models/vae
CLIP: t5xxl_fp8_e4m3fn.safetensors and clip_l.safetensors
in ComfyUI/models/clip
📂Files for the GGUF version:
Recommended quantization, based on your GPU's VRAM (a quick way to pick one automatically is sketched just after this file list):
24 GB VRAM: Q8_0
16 GB VRAM: Q5_K_S
<12 GB VRAM: Q4_K_S
GGUF_Model: flux1-dev-QX_0.gguf
in ComfyUI/models/unet
GGUF_clip: t5-v1_1-xxl-encoder-QX_0.gguf
in ComfyUI/models/clip
Text encoder: ViT-L-14-TEXT-detail-improved-hiT-GmP-TE-only-HF.safetensors
in ComfyUI/models/clip
VAE: ae.safetensors
in ComfyUI/models/vae
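If you are not sure which quantization to pick, the short Python sketch below maps the detected VRAM to the recommendations above. It assumes an NVIDIA GPU and PyTorch (which ComfyUI already bundles); the thresholds simply mirror the table and are not required by the workflow itself.

```python
# Minimal sketch: suggest a GGUF quantization from the detected VRAM.
# Assumes an NVIDIA GPU and PyTorch (bundled with ComfyUI).
import torch

def recommended_quant() -> str:
    if not torch.cuda.is_available():
        return "Q4_K_S"  # conservative fallback when no CUDA GPU is detected
    vram_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
    if vram_gb >= 24:
        return "Q8_0"
    if vram_gb >= 16:
        return "Q5_K_S"
    return "Q4_K_S"

if __name__ == "__main__":
    print("Suggested GGUF quantization:", recommended_quant())
```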
Any upscale model, for example:
Realistic: RealESRGAN_x4plus.pth
Anime: RealESRGAN_x4plus_anime_6B.pth
in ComfyUI/models/upscale_models
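Before launching ComfyUI, you can quickly check that everything listed above is in the right place. The sketch below is only a convenience, assuming a standard install rooted at ./ComfyUI; adjust COMFY_ROOT to your own path, and note that the flux1-dev and GGUF files are left out because their exact names depend on the variant you downloaded.

```python
# Minimal sketch: verify that the files listed above sit in the expected folders.
# COMFY_ROOT is an assumption; point it at your own ComfyUI install.
from pathlib import Path

COMFY_ROOT = Path("ComfyUI")

EXPECTED = {
    "models/clip_vision": ["sigclip_vision_patch14_384.safetensors"],
    "models/style_models": ["flux1-redux-dev.safetensors"],
    "models/pulid": ["pulid_flux_v0.9.0.safetensors"],
    "models/vae": ["ae.safetensors"],
    "models/clip": [
        "clip_l.safetensors",            # base version
        "t5xxl_fp8_e4m3fn.safetensors",  # base version
        "ViT-L-14-TEXT-detail-improved-hiT-GmP-TE-only-HF.safetensors",  # GGUF version
    ],
    "models/upscale_models": ["RealESRGAN_x4plus.pth"],  # optional
}

for folder, files in EXPECTED.items():
    for name in files:
        path = COMFY_ROOT / folder / name
        print(f"[{'OK' if path.exists() else 'MISSING'}] {path}")
```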
📦Custom Nodes:
Don't forget to close the workflow and open it again once the nodes have been installed.
Usage:
In this new version of the workflow, everything is organized by color:
Green is what you want to create, also called the prompt,
Yellow is all the parameters used to adjust the image,
Blue is the model files used by the workflow,
Purple is for LoRAs.
We will now see how to use each node:
Write what you want in the “Prompt” node:
Select the image format:
Choose the guidance level:
I recommend a value between 3.5 and 4.5. The lower the number, the more freedom you leave the model; the higher the number, the more strictly the image follows what you asked for.
Choose a scheduler and a number of steps:
I recommend normal or beta, with 20 to 30 steps. The more steps, the better the quality, but the longer it takes to generate an image.
Choose a sampler:
I recommend euler.
Define a seed or let ComfyUI generate one:
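All of the settings above (scheduler, steps, sampler, seed) can also be driven from a script instead of the UI. ComfyUI exposes an HTTP API on http://127.0.0.1:8188, and the sketch below shows how the recommendations above would be applied to a workflow exported in API format. The file name workflow_api.json and the node id "9" are assumptions; look up the real sampler node id in your own export.

```python
# Minimal sketch: queue this workflow through ComfyUI's HTTP API with the
# settings recommended above. "workflow_api.json" and the node id "9" are
# assumptions; check your own API export.
import json
import random
import urllib.request

with open("workflow_api.json", "r", encoding="utf-8") as f:
    workflow = json.load(f)

sampler = workflow["9"]["inputs"]      # hypothetical id of the sampler node
sampler["steps"] = 25                  # 20 to 30 recommended
sampler["sampler_name"] = "euler"
sampler["scheduler"] = "beta"          # or "normal"
sampler["seed"] = random.randint(0, 2**32 - 1)
# The guidance value (3.5 to 4.5) usually lives in a separate node in Flux
# workflows; edit it the same way once you know its id.

payload = json.dumps({"prompt": workflow}).encode("utf-8")
req = urllib.request.Request("http://127.0.0.1:8188/prompt", data=payload,
                             headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read()))     # contains the prompt_id of the queued job
```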
Add as many LoRAs as you want to use, and configure them:
If you don't know what a LoRA is, just don't activate any.
Set the strength of the imported face:
The higher the number, the more your result resembles the imported face.
Import the image that contains the desired face:
Set the strength of the imported style:
Import the image that contains the desired style:
Choose your model:
Depending on whether you've chosen the base or GGUF workflow, this setting changes. I personally use the GGUF Q8_0 version.
Choose a FLUX CLIP encoder and a text encoder:
I personally use the GGUF Q8_0 encoder and the text encoder ViT-L-14-TEXT-detail-improved-hiT-GmP-TE-only-HF.
Select an upscaler (optional):
I personally use RealESRGAN_x4plus.pth.
Now you're ready to create your image.
Just click on the “Queue” button to start:
Once rendering is complete, the image appears in the “image viewer” node.
If you have enabled upscaling, a slider will show the base image and the upscaled version.
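If you queued the job through the HTTP API as sketched earlier, the finished image can also be retrieved without the UI: /history/<prompt_id> lists the output filenames and /view serves them. A minimal sketch, assuming the default server address and the prompt_id returned when the job was queued:

```python
# Minimal sketch: download the finished image once the queued job is done.
# Assumes the default server address and a prompt_id from the earlier call.
import json
import urllib.parse
import urllib.request

SERVER = "http://127.0.0.1:8188"
prompt_id = "PUT-YOUR-PROMPT-ID-HERE"  # returned when the job was queued

with urllib.request.urlopen(f"{SERVER}/history/{prompt_id}") as resp:
    history = json.loads(resp.read())[prompt_id]

for node_output in history["outputs"].values():
    for image in node_output.get("images", []):
        query = urllib.parse.urlencode({
            "filename": image["filename"],
            "subfolder": image["subfolder"],
            "type": image["type"],
        })
        with urllib.request.urlopen(f"{SERVER}/view?{query}") as img, \
             open(image["filename"], "wb") as out:
            out.write(img.read())
        print("Saved", image["filename"])
```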