
AloeVera's - Instant-LoRA [No Training] - 1.5 [Read new info]


IMPORTANT !!! The image down below ⬇️

is a comparison showing WHY it's important to use the standard IPAdapter ComfyUI nodes to get the same results as me. (Left is the correct one; right is the one that doesn't work for the specific use I'm describing in this guide.)

In the attachments of this page, you can find the .json if you want to make the comparison yourself!

0. Update log - V2

  • Added a node that allows loading your images directly from a directory, with a set image cap! Much easier to load your images. Thanks to the repo owner — through ComfyManager, add this to your custom nodes: InspirePack

  • Added V2 workflow to the attachments of this Article.

  • Added HiresFix workflow for the 1.5 Instant_Lora (In attachments)

  • Added SDXL workflow (In attachments) (Experimental)

  • (If your directory has hundreds of images, it's probably not a good call. Trim it down to a solid selection of 6 to 12 images of your concept/subject first!)

  • Upscaler I use:

1. Special thanks and mentions:

First off, huge thanks to the following for making this possible: IPAdapter, ClipVision from OpenAI, and the UI that allows the experiment to come to life, ComfyUI!

My workflow was the result of a little experiment that happened to blow my socks off. I'll explain further... [In progress..]


Here's what you need to download:

Then, you'll need two more models and you'll be set!:

Make sure you pick "ip-adapter-plus_sd15.bin" !!!! The other ones DON'T have the same effect AT ALL.

CHECK the attachments for the workflow files to load in ComfyUI!

Also, check if your ComfyUI is up to date!

3. How to use it, once you're ready!

All you have to do is load the images of your choice and have fun.

In the attachments, you can pick either the img-drop version or the img-from-path version.
Img drop lets you drop images on the go, whilst img path requires you to prepare a little folder and type the path in the box to load the images!
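In case it helps to picture what the img-from-path variant does behind the scenes, here is a minimal Python sketch of "load up to N images from a folder". Everything here is illustrative — the function name, the cap value, and the extension list are my assumptions, not the node's actual API:

```python
# Hedged sketch: gather up to `cap` image files from a folder,
# sorted so the selection is reproducible across runs.
from pathlib import Path

EXTENSIONS = {".png", ".jpg", ".jpeg", ".webp"}  # assumed supported formats

def load_image_paths(folder: str, cap: int = 12) -> list[Path]:
    # Collect only files whose extension looks like an image,
    # sort alphabetically, then apply the image cap.
    paths = sorted(
        p for p in Path(folder).iterdir()
        if p.is_file() and p.suffix.lower() in EXTENSIONS
    )
    return paths[:cap]
```

This is also why a trimmed folder of 6 to 12 clear images works better than a dump of hundreds: whatever the cap keeps is what drives the result.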

  • Make sure the images you load are varied enough

  • Make sure the images you load are clear and not super busy visually

  • Make sure to replace the node's value in "Load VAE" with your own VAE

  • Make sure to replace the checkpoint with your own, so you don't get an "error missing. . ." message


Basically, the results now won't rely on your training capabilities, but on the following factors:

  • Image selection

  • Image clarity, quality

  • Settings

  • Base model

  • Prompt engineering

  • You do not need a token word, just call your subject what it is, a woman or man, whatever. :)

  • You do not need to waste time training and failing. Just replace the images, and adjust the values, until your results satisfy you.

And that's pretty much it for now. I'll keep adding things, or adapting the workflow with other stuff, along the way.

But I felt very eager to share this discovery that just absolutely blew my mind. It's.. quite insane, tbh. Hope you'll have fun with this!

If you want to support me: Ko-fi