[workflow] i2i consistent character generation (FLUX)

Generate Consistent Character Using Reference Image

This workflow helps you generate any number of character images that stay highly consistent with a reference image. You can optionally use In-Context LoRA, but in practice the results are excellent even without it.

Principle

Load your reference image; the workflow expands the canvas and inpaints the expanded area. The key magic is the alimama inpaint ControlNet model: as long as your prompt includes phrases like "twins", "identical face" (though an extra face-swap node is still needed to make the faces truly match), "identical hair", or "identical dress", the results will be highly consistent. I call this approach "Inpaint Twins".

The awesome part is that it works not only for human characters but also for fictional creatures such as a catman, a mini-dragon, and so on.
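The expand-then-inpaint step described above can be sketched outside ComfyUI as well. This is a minimal illustration using Pillow, not the workflow's actual nodes: the function name `expand_for_inpaint` and the choice to expand to the right are assumptions for the example. It shows the core idea of placing the reference on a wider canvas and building a mask where white marks the new area to be inpainted.

```python
from PIL import Image


def expand_for_inpaint(ref: Image.Image, extra_width: int):
    """Place the reference image on a wider canvas and build a mask
    marking the newly added area (white = inpaint, black = keep)."""
    w, h = ref.size
    # Wider canvas; the fill color of the new region is arbitrary,
    # since the inpainting model will repaint it anyway.
    canvas = Image.new("RGB", (w + extra_width, h), "gray")
    canvas.paste(ref, (0, 0))
    # Mask: black over the reference pixels, white over the expansion.
    mask = Image.new("L", (w + extra_width, h), 0)
    mask.paste(255, (w, 0, w + extra_width, h))
    return canvas, mask


# Example: a 512x768 reference expanded by another 512 px for the "twin".
ref = Image.new("RGB", (512, 768), "white")
canvas, mask = expand_for_inpaint(ref, 512)
```

The canvas and mask would then go to the inpainting sampler, with the "twins"-style prompt steering the model to repaint the masked half as a matching copy of the character.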

VRAM Requirement

Because it runs on Flux, it needs a lot of VRAM. The good news is that you can use GGUF models to reduce VRAM usage; 12 GB seems to work fine when the parameters are set properly.

How to Use

I have written detailed instructions in the workflow. Read all the blue notes when you open the workflow for the first time.

Yes, my workflow looks extremely complicated, but once you get the hang of it, you'll find it really handy.

Contact Me

bilibili: https://space.bilibili.com/1821797411

QQ: 1953761458

Email:[email protected]
