This article is for those of us who may be intimidated by the very idea of setting up their own PC for local genning at home. After all, web generation can remove a lot of the headache. Heck, my original hesitance to even try something like ComfyUI is the very reason I wanted to write this. It honestly was a lot easier than I first thought, but a little guidance goes a long way; I had some help from a friend myself.
If you're at all familiar with Civit's web generator, this should be fairly easy to get into. It's just a basic guide for getting ComfyUI set up so you can get to the fun stuff. The only real requirement is a recent-ish midrange NVIDIA GPU; think 2060/3060/4060, preferably with 8GB of VRAM or more.
Now with all that said, let's get started:
Installing ComfyUI is about as painless as one could imagine. All one needs to do is go to the ComfyUI repository and grab the standalone portable WebUI package. Just extract the folder from ComfyUI_windows_portable_nvidia.7z to the drive of your choice - preferably one with plenty of available space, since base checkpoints and LoRA models will take up a significant chunk.
Also, make sure to run a quick update via the update_comfyui.bat file in the updates folder to ensure you're all caught up.
While you can find some very basic guides on ComfyUI's examples page, and ComfyUI itself comes with the most basic of workflows, this can still leave those not accustomed to the environment a bit clueless. That is why I created some basic workflows for image generation and hi-res fix which more or less reflect what can be done via Civit's own web gen, minus face fix.
You can grab those workflows from my model page. No need for dozens of extra custom packages, nodes, or what have you (except for the upscaler). Just the basics so you can get started. Should you feel so inclined there is a plethora of advanced techniques, workflows, and custom nodes to be used, but that is very much the deep end of the pool.
You will have to drop these workflows into the folder shown here, though you may have to start ComfyUI at least once before that folder is created.
You will also need to bring your own SD1.5/SDXL/Pony checkpoints and LoRA models, which should be very easy to acquire seeing as we're on Civitai already. Make sure to place checkpoints, LoRAs, and the needed upscaler in their already provided folders within the ComfyUI folder structure, as seen here. I recommend using subfolders within these for organization purposes; ComfyUI will still read them without issue.
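If it helps to see the layout as plain paths, here's a small sketch of the portable build's default model folders. The SDXL/Pony subfolder names are just my own organizational examples, not requirements, and the sketch builds the tree in a throwaway temp directory purely for illustration:

```python
import os
import tempfile

# Default model folders inside the extracted portable package;
# the SDXL/Pony subfolders are optional organizational examples.
root = os.path.join(tempfile.mkdtemp(),
                    "ComfyUI_windows_portable", "ComfyUI", "models")
for sub in ("checkpoints/SDXL", "loras/Pony", "upscale_models"):
    os.makedirs(os.path.join(root, sub))

# ComfyUI scans these folders recursively, so files in subfolders
# still show up in the loader dropdowns.
print(sorted(os.listdir(root)))  # -> ['checkpoints', 'loras', 'upscale_models']
```

Checkpoints go under `checkpoints`, LoRAs under `loras`, and the upscaler (covered next) under `upscale_models`.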
Lastly, you will need an actual upscaler for the hi-res fix workflow to work. You can grab one from OpenModelDB, go with the Remacri one Civit uses, or nab this quick and dirty OmniSR one I like because it processes quickly. Place your upscaler in the 'upscale_models' folder displayed above.
Starting ComfyUI is as simple as double-clicking run_nvidia_gpu.bat. Once inside, I suggest switching to the new menu type for ComfyUI - click the cog icon in the top right corner of the menu stack for settings - as I find the original stack design limiting. The "beta" top/bottom bar GUI is more user-friendly, especially as it provides easier and clearer options, like being able to jump between workflows effortlessly via a popup menu tab, among other things.
While you're in settings, I would also suggest increasing the widget text font size, as the default will have you squinting at the prompt boxes in short order.
As said, my workflows are fairly basic because A) I like it that way, and B) it helps show you the generation process on a single screen - I try to avoid excess scrolling whenever I can. I've changed most of the node titles (every one of the gray boxes is a "node") to help you better understand what they do, yet I left their original titles after the dash so you still know what each node is actually called. Pretty much everything comes with tooltips as you hover your mouse over it, though some are more helpful than others.
Most of the parts here should be self-explanatory if you've done any generation at all with Civit's own web gen. Pick the checkpoint and (base model compatible) LoRAs of your choice. You can remove/add models by right-clicking on a LoRA box and clicking remove/clone to adjust the daisy chain. While ComfyUI does let you fine-tune a LoRA with two different weights (model and CLIP strength), I'd suggest keeping both numbers the same for any given model for now. Unfortunately, ComfyUI doesn't provide a "none" option for unused loaders, so I zero out the weights so the generator ignores them. Better that than having to constantly remove/add load nodes.
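The zero-weight trick is easier to picture as a toy sketch. None of these names are ComfyUI's actual API; it's just the daisy-chain idea in miniature, with each loader wrapping the output of the previous one and zeroed entries dropping out:

```python
# Toy model of a LoRA daisy chain (illustrative names, not ComfyUI's API).
def apply_lora_chain(base: str, loras: list[tuple[str, float]]) -> str:
    model = base
    for name, weight in loras:
        if weight == 0:
            continue  # a zeroed loader is effectively a no-op in the chain
        model = f"{model}+{name}@{weight}"
    return model

# The 'unused' slot stays loaded but contributes nothing at weight 0.
print(apply_lora_chain("sdxl_base", [("style", 0.8), ("unused", 0.0)]))
# -> sdxl_base+style@0.8
```

That's why zeroing the weights is a workable stand-in for a "none" option.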
The prompts work as in any other generator. I've already set the height and width to Civit-like resolutions, partially so that those among us who don't have fast PCs (me included) can get a decent gen experience. The generator controls are almost identical to those of Civit's web gen. I would suggest sticking with 'euler_ancestral/normal' or 'dpmpp_2m/karras' for the sampler/scheduler, or using recommended settings from checkpoint creators; most others will take longer to process. Make sure to keep the denoise level at 1 for all original gens... or funky stuff is going to happen.
Once you've got all your settings dialed in, hit that 'Queue' button and ComfyUI will do its thing. You will see a progress bar for a batch within the KSampler node as well as in the terminal window. Any generated image will automatically be saved to the 'output' folder within ComfyUI's main folder. You can change the prefix for saved images by clicking on it.
If you have an image you like, switch to the provided img2img hi-res fix workflow to make it look even better. Once again, the set resolution is very Civit-like. You will have to copy your prompts from the original gen workflow by hopping between the two. Make sure your checkpoint base model and LoRAs/weights are the same as in your original gen... unless you want to alter the outcome of the hi-res fix. Steps and sampler/scheduler can differ. Lastly, I like to keep denoise below 0.5, typically within the 0.3-0.4 range, as it tends to change too much otherwise. Make sure to load the image you want to hi-res in the bottom left box.
You may ask why we upscale so much before downscaling again: the answer is much improved image quality in the final output, especially at lower denoise levels. Once the gen is done, it will save the image in the same 'output' folder as before; this is why you want it to have a different prefix from your original gens, for easier sorting. Now that the gen is all done, you're ready to upload to Civit or elsewhere.
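To make the upscale-then-downscale idea concrete, here's a rough sketch of the resolution math: the upscale model blows the image up by a fixed factor (often 4x), and the workflow then resizes down to the size you actually want before the low-denoise second pass. The 1.5x target and the 832x1216 starting size below are just illustrative numbers, not settings pulled from my workflow:

```python
# Illustrative hi-res fix sizing: snap the target to a multiple of 8,
# since latent dimensions need to stay divisible by 8.
def hires_target(width: int, height: int,
                 scale: float = 1.5, multiple: int = 8) -> tuple[int, int]:
    snap = lambda v: round(v * scale / multiple) * multiple
    return snap(width), snap(height)

# e.g. a 4x model upscale of 832x1216 gives 3328x4864, which the
# workflow then downsizes to this more modest final target.
print(hires_target(832, 1216))  # -> (1248, 1824)
```

The big intermediate image is what gives the downscaled result its extra detail.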
Be aware:
Only unedited images will carry over their creation metadata... except for the resources used, which Civit cannot pick up on at all unless more custom node workflows are used. Just pick your preferred model among the resources used for your image and 'Add Post' to its creator's feed. I'm pretty sure they'll appreciate the buzz they'll be getting.
Some additional tips:
- Any unedited image created with ComfyUI can be dragged onto an open space within the ComfyUI window, and it will automatically load its original generation settings and workflow.
- I'd recommend saving separate workflows for different checkpoint/LoRA combos. That way you won't constantly have to switch out base settings. Also, a creator's recommended checkpoint base prompt setups - or your own preference for that matter - can be saved that way.
- Once you're more familiar you can easily call up new nodes in your workflow build by double-clicking on any empty space within ComfyUI to do an easy and quick node search.
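The drag-and-drop tip above works because, to my understanding, ComfyUI embeds the workflow JSON in the PNG's metadata (a tEXt chunk) when it saves an image. This stdlib-only sketch builds a minimal PNG with such a chunk and reads it back out; the toy workflow contents are made up:

```python
import json
import struct
import zlib

def png_chunk(ctype: bytes, data: bytes) -> bytes:
    # length + type + data + CRC32 over type and data, per the PNG spec
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

# Build a minimal 1x1 grayscale PNG carrying a "workflow" tEXt chunk,
# mimicking how ComfyUI stores the workflow inside saved images.
workflow = {"nodes": [{"type": "KSampler"}]}
png = (b"\x89PNG\r\n\x1a\n"
       + png_chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
       + png_chunk(b"tEXt", b"workflow\x00" + json.dumps(workflow).encode("latin-1"))
       + png_chunk(b"IDAT", zlib.compress(b"\x00\x00"))  # filter byte + one pixel
       + png_chunk(b"IEND", b""))

# Reading it back: walk the chunks and pull the "workflow" tEXt entry,
# which is roughly what happens when you drop an image onto the canvas.
def read_workflow(data: bytes):
    pos = 8  # skip the PNG signature
    while pos < len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        if ctype == b"tEXt":
            key, _, value = data[pos + 8:pos + 8 + length].partition(b"\x00")
            if key == b"workflow":
                return json.loads(value)
        pos += 12 + length  # length field + type + data + CRC
    return None

print(read_workflow(png)["nodes"][0]["type"])  # -> KSampler
```

This is also why edited or re-encoded images lose the trick: most editors strip those metadata chunks on save.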
I very much hope this guide can be of use to some.
(If you're wondering why I didn't even broach the subject of inpainting or adetailer: A) inpainting with ComfyUI is awkward and a bit backward, and B) using a detailer can be a pretty involved process that is beyond the scope of keeping it simple.)