The FLUX.1 [dev] Model is licensed by Black Forest Labs, Inc. under the FLUX.1 [dev] Non-Commercial License. Copyright Black Forest Labs, Inc.
IN NO EVENT SHALL BLACK FOREST LABS, INC. BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH USE OF THIS MODEL.
This is an advanced img2img workflow for ComfyUI based on Google's rectified flow inversion (RF-Inversion). It lets you edit your image, transfer style, inpaint, swap faces, etc., using prompts alone. No manual inpainting masks or ControlNets required.
To simplify: imagine a basic img2img workflow, where you send an input image -> make random noise from it -> send it to the sampler and get an output image. In this workflow, however, you can control what kind of noise you get from the input image, depending on your settings. For instance, you can either change part of the image (inpainting) or transfer the image style (like an IP-Adapter).
So it looks like this: input image -> unsample the image to noise (with an additional "textual attention mask", i.e. a prompt input for the img2noise unsampler; it should work well enough even without a prompt, but you can specify which part of the image you want "masked" for denoising) -> the noise from the unsampler goes to the sampler, and you get an output image edited with the unsampled noise.
In other words, it's almost like a basic img2img workflow, but with an additional "unsampling" step that provides more controlled noise: [input image -> unsampler (with optional prompt) -> sampler (with standard prompt) -> output image]. A minimal sketch of the idea follows below.
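To make the two-pass idea concrete, here is a minimal, self-contained sketch of rectified-flow inversion using plain Euler steps. Everything in it is a stand-in: the `velocity` function plays the role of the Flux transformer (which the Fluxtapoz nodes wire up for you), and the conditioning tensors stand in for encoded prompts. This illustrates the technique only; it is not the Fluxtapoz API.

```python
import torch

def velocity(x, t, cond):
    # Hypothetical stand-in for the Flux velocity model. The real model is a
    # time- and prompt-conditioned transformer; a toy linear drift keeps this
    # script runnable end to end.
    return cond - x

def unsample(image_latent, cond, steps=12):
    # Integrate the rectified-flow ODE forward in time (image -> noise).
    x = image_latent.clone()
    dt = 1.0 / steps
    for i in range(steps):
        t = i * dt
        x = x + velocity(x, t, cond) * dt
    return x

def resample(noise_latent, cond, steps=12):
    # Integrate the same ODE backward in time (noise -> edited image).
    x = noise_latent.clone()
    dt = 1.0 / steps
    for i in range(steps):
        t = 1.0 - i * dt
        x = x - velocity(x, t, cond) * dt
    return x

# Stand-ins for a VAE-encoded input image and two prompt conditionings.
latent = torch.randn(1, 16, 64, 64)
source_cond = torch.zeros_like(latent)   # optional prompt for the unsampler
edit_cond = torch.ones_like(latent)      # editing prompt for the sampler

noise = unsample(latent, source_cond)    # input image -> structured noise
edited = resample(noise, edit_cond)      # structured noise -> edited image
```

Because the forward pass is a near-deterministic ODE rather than random noise injection, the resulting "noise" still encodes the input image's structure, which is why the reverse pass can edit the image instead of replacing it.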
I recommend using Hyper- or Turbo-merged Flux models (or the corresponding LoRAs), so instead of 28+28 steps (unsampler + sampler) you can do 12+12 with similar quality.
GitHub - logtd/ComfyUI-Fluxtapoz: Nodes for image juxtaposition for Flux in ComfyUI
Some videos from YouTube:
Turbo-Powered Unsampler & Inpainting with Flux.1 in SECONDS with ComfyUI! - YouTube
Google RF Inversion: Image Editing with Prompting! - YouTube
More instructions are provided in the workflow itself.