
How to use my workflows: A Comprehensive Breakdown & Troubleshooting Guide


Jan 9, 2026



Updated April 9th, 2026

Introduction:

I got into ComfyUI about a year ago. I'm an engineer and a carpenter, so I get a lot of enjoyment out of building really cool stuff. Hopefully you can appreciate my efforts.

In this guide, I'll cover the following:

  1. A brief overview of each section and its capabilities.

  2. Setup & Installation of custom nodes, including errors or issues.

  3. Initial setup and run, including a breakdown.

  4. Controlnet and I2I, Regional prompting, Face Swap, and IP Adapters

  5. Hi Rez fix, Upscalers, Detailers, & Post Production

  6. Troubleshooting and common issues.

  7. Advanced Options.

1. Overview

I've used my Illustrious workflow as the example here. They are all similar; I'll break down the differences in the detailed sections.

  • Prompt assist and controlnet module:

    Screenshot 2026-03-29 095439.png
  • Main user input area:

    Screenshot 2026-01-09 045341.png
  • Controlnet:

    Screenshot 2026-03-29 095651.png
  • Draft Area (applies only to certain workflows):

    Screenshot 2026-01-09 050428.png
  • Regional Prompting area (note: I have another article about this)

    Screenshot 2026-03-29 100535.png
  • Face Swap

    Screenshot 2026-03-29 105048.png
  • Hi Rez Fix/ Ultimate Upscaler:

    Screenshot 2026-03-29 095754.png
  • Detailer Group

    Screenshot 2026-03-29 095942.png
  • Seed VR2 Upscaler:

    Screenshot 2026-03-29 100032.png
  • Post Production Suite:

    Screenshot 2026-03-29 100136.png
  • Save Group:

    Screenshot 2026-03-29 100236.png

2. Setup & Installation:

Notes:

  • Start with this section when initially loading. I have the notes in both English and Chinese. They are broken out as follows:

    • Basic instructions

    • Directions for each section

    • Useful nodes and downloads

    • Folder location

  • This area should answer the majority of your questions on how to use the workflows. You will also find detailed instructions in each module.

Screenshot 2026-01-09 042137.png

Custom Nodes:

  • This is a very node-heavy workflow. The initial "You are missing the following nodes" message can be intimidating. This is because I basically have 8 workflows in one, so instead of several small "you are missing" messages, you get one very large one. Take the time to download and install them. You'll need most of them for other people's workflows as well.

  • Custom Node Manager:

    • Almost all of the missing nodes can be found here. Just click on the "Install missing custom nodes" button in the ComfyUI manager.

      fgtr.png
  • Missing Nodes:

    • I have tried to go through and remove all the specialty nodes that are not in the current manager.

    • Go to the "Notes" section. Here you will find a list of the repositories for all of the nodes:

      Screenshot 2026-01-09 053033.png
    • Click this button to download it with Git:

      Screenshot 2026-01-09 053315.png
    • Once downloaded, go to " ComfyUI_windows_portable\ComfyUI\custom_nodes\(name of custom node folder) " and open it.

    • Open the Command Prompt (Windows 11: right-click and "Open in Terminal") and type ' python.exe -m pip install -r requirements.txt '

      Screenshot 2026-01-09 042344.png
    • The requirements should all download and install:

      • Note: The desktop version (which you should abandon in favor of portable, due to the ComfyUI upgrades) will sometimes give you a "NumPy" error. This means you are running a newer version of Python than ComfyUI supports. You will get this error globally.

        • Option 1: Create a venv (virtual environment) based on the version of Python ComfyUI is running

        • Option 2 (overwhelmingly the best option): Switch to Windows Portable.

      • Some node folders do not require this. Look for the "requirements.txt" file.
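If you prefer to script the install step, the terminal command above boils down to this small Python sketch. The node-pack folder name here is hypothetical; substitute whichever pack you just cloned:

```python
import subprocess
import sys
from pathlib import Path

# Hypothetical node-pack folder -- substitute the one you just cloned
node_dir = Path("ComfyUI/custom_nodes/SomeNodePack")
req = node_dir / "requirements.txt"

# Some node packs ship without a requirements.txt; only install if present
if req.exists():
    subprocess.check_call([sys.executable, "-m", "pip", "install", "-r", str(req)])
else:
    print(f"No requirements.txt in {node_dir}; nothing to install")
```

Using `sys.executable` guarantees the packages land in the same Python that runs ComfyUI (the portable install's embedded interpreter), which is the whole point of the venv/portable advice above.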

  • Detailers:

    • YoloV8 models: What's the difference? Why does it say some are dangerous?

    • The "n" in YoloV8N stands for "nano". It's a pruned version of the "s" model. It runs faster. You'll find the 'n' model particularly helpful in the face detailer. I recommend trying the "s" models first.

    • The ".pt" extension means the file is a pickled PyTorch checkpoint. Pickles can be tampered with to run malicious code when loaded. Hugging Face scans and labels these. Do not use any from another site!
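To see why a tampered ".pt" file is dangerous, here is a minimal sketch of how a pickle can smuggle in a callable, and how scanners (like Hugging Face's) spot it by disassembling opcodes without ever loading the file. The `Evil` class is a deliberately harmless stand-in:

```python
import io
import pickle
import pickletools

class Evil:
    def __reduce__(self):
        # On unpickling, this tells pickle to CALL print(...) --
        # a harmless stand-in for arbitrary code execution
        return (print, ("arbitrary code ran during load!",))

payload = pickle.dumps(Evil())

# Scanners disassemble the opcode stream WITHOUT executing it,
# flagging suspicious GLOBAL/STACK_GLOBAL imports followed by REDUCE calls
buf = io.StringIO()
pickletools.dis(payload, out=buf)
ops = buf.getvalue()
print("GLOBAL" in ops and "REDUCE" in ops)
```

This is why .safetensors files (which store only raw tensors, no code) are generally preferred when available.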

  • Folder Locations:

    • Almost everything I use can be downloaded directly from ComfyUI except models. I try not to use custom nodes other than what they provide in their manager.

    • At the bottom of the main notes section is a folder location area:

      Screenshot 2026-03-29 101526.png

3. Initial Setup and Run:

  • 🛑🚫✋ Do not manually bypass anything! There are switches for everything.

  • In the center of every workflow (in whatever color I was feeling that day), you'll find the Master Bypasser. I'll go through each grouping. Note that they may be slightly different, depending on the workflow and modules:

    Screenshot 2026-01-09 055312.png
    • Initial usage. I'll start out with the bypasser looking like this:

      Screenshot 2026-03-29 101035.png
    • If you are using prompt assist, then the Image (or I2I) and Prompt Assist switches need to be on. Otherwise, leave them off.

    • 💾 Nothing saves unless it is turned on!

      • I almost always leave the "Save Draft" switch on. This way if I mess up, I can just reload the workflow in that image.

  • Models: Each platform has its own unique section, but generally they are all the same. This is where you make your initial selections of what you want to bake with.

    fdsreg.jpg
    • A lot of my workflows have the option for checkpoint swaps using this switch (or one similar). Note: the other checkpoints do not necessarily have to be bypassed for it to work.

      Screenshot 2026-03-29 101818.png
  • Prompt area:

    • Enter your prompts here.

      Screenshot 2026-03-29 102132.png
    • โš ๏ธ IF you are using Prompt assist, leave blank or put any changes only.

    • Note: If you are using the regional prompter, this is a global prompt only!

  • Qwen Prompt Enhance. Note: if this node does not work for you, you need to update ComfyUI (11-2-2026)

    Screenshot 2026-01-24 111024.png
    • The 2B Instruct model works best

    • The quant (in yellow) determines the speed of the model and will also shorten your prompt. Even with high VRAM, just leave it at 4-bit. You gain nothing.

    • The prompt instructions (in red): Determines how the LLM operates. I have options in the notes if you want to change it. It operates differently for each model.

    • Note: See the troubleshooting guide if you are having problems installing this.

  • Lora Loader:

    Screenshot 2026-01-09 064250.png
    • This loader does NOT need trigger words (i.e. JEDDTTL2). It does it automatically.

    • You can have as many as you want loaded. Just turn them on and off. Saves a lot of time

    • Weight is distributed evenly across the model and clip

      • Note: In some special cases (such as lightning LoRAs and control LoRAs), you want them to only cross the model. If that's the case (such as my WAN models), you'll see a separate LoRA loader for those.

    • ZIT: Z-Image hates a lot of added crap. When experimenting, use one at a time, then add the next. Don't set them at 1.0 or below 0.4.

    • โš ๏ธ Loras are noise. The more weight you put on a model, the dirtier it gets. If you are trying to get detail, use a workflow that has detailers or Sigma schedulers in it.

      • I have several options for this.
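The model/clip weight split described above can be pictured with this toy function. It is just the idea, not the loader's actual code:

```python
def lora_strengths(weight: float, model_only: bool = False):
    """Map one slider value to (strength_model, strength_clip).

    Normal LoRAs apply the same weight to both the model (UNet) and
    the CLIP text encoder; lightning/control LoRAs touch only the model.
    """
    return (weight, 0.0) if model_only else (weight, weight)

print(lora_strengths(0.8))                   # regular LoRA
print(lora_strengths(0.8, model_only=True))  # lightning / control LoRA
```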

  • Draft Mode (only in certain workflows)

    • โš ๏ธ THIS DOES NOT WORK UNLESS YOU HAVE THE SEED FIXED!!!!!!!! It will just stop here and you cannot get past it!

      Screenshot 2026-01-18 110158.png
      Screenshot 2026-01-09 065938.png
      • At the top of the draft window (or in the Sampler area) is this set of controls. The play sound will "beep" when the image is baked to this point, and the run will stop here until you hit the Stop/Pause button.

      • The draft image gets saved.

      • Make any corrections before going forward. This includes adding (or turning off) any detailers, Hi Rez fix, Seed VR2, etc.

  • This should cover everything. You will find additional information in the "Notes" section as well as in each area.

4. Controlnet, Regional Prompting, Face Swap, IP Adapters:

Controlnet

  • Basic Controls:

    Screenshot 2026-01-24 112345.png
    • In the main bypass relay area, the top set of switches controls this (they differ by model and version):

    • Load Image:

      Screenshot 2026-01-24 113153.png
      • This image controls the Florence prompt assist as well as the Controlnet

      • 💡 The (down)load Florence model should have a dropdown menu if it is blank. Select the "Large Prompt Gen" option. It should automatically download it for you.

      • Remove background:

        • The base model seems to do well, but there are breakdowns in the notes in that area, especially regarding Anime and humans.

    • Image Prompt (or Florence Prompt, depending on the model):

      Screenshot 2026-03-29 102615.png
      • This area generates a prompt (or tags, depending on the model and how I have it set) from the image.

      • Screenshot 2026-03-29 102824.png

        When choosing which model to download, use a "PromptGen" one.

      • The selection on the Florence2Run areas should be "prompt_gen_mixed_caption" for Illu/Pony/SDXL, and either "detailed caption" or "more detailed caption" for Flux, ZIT, or other models that use natural language.

  • Controlnet:

    Screenshot 2026-01-24 112321.png
    • You have the option to create a mask or load your own. Instructions are in the notes in that area.

  • Using Controlnet:

    Screenshot 2026-04-03 141952.png

    You have several different options for controlnet. Here are the most frequently used and why:

  • Depth: Very good for mimicking the character and clothing, but not as strict as Canny.

  • Canny: Locks in fine details, but sometimes that can be problematic.

  • Open Pose: When all you want is the pose of a character, but it can get confused when you don't have a straight-on view.

  • There are many different options. Play around with them in the drop down menu:

    Screenshot 2026-04-03 142112.png

Regional Prompting

  • RGB Prompting:

    Screenshot 2026-03-29 110343.png
    • Really good for multiple characters when you want each one to be different.

    • Illu 8 regional prompt run With Metadata 2026-03-14-210509.png

      💡 Use this in conjunction with the Multi-face detailer

  • Regional Prompt by mask:

    Screenshot 2026-03-29 110751.png
    • Really good when you want to segment certain things in an image.

      2a01e531-755d-4608-a1f3-4dd6e7568972.png
    • Right-click and use "Open in MaskEditor" to create the mask

      Screenshot 2026-03-29 111103.png
    • If you use the "Use 🖼️ for 🎭" (image for mask) switch, it grabs the image from the upper area. If you do not use it, it generates a white canvas already properly sized for the image.

      Screenshot 2026-03-29 111355.png

IP Adapters

  • Screenshot 2026-03-29 112003.png

    Style and composition:

    • Style determines the style of the image (mainly for Anime)

    • Composition will help as a guide to arrange the scene

  • Face Enable:

    • This is different from Face Swap. This takes key elements from the face and uses them as an embedding. It helps LoRAs as well as Face Swap, and it will also pick up on hair style and color when not prompted above.

Face Swap

  • Screenshot 2026-03-29 105048.png

    This does what it says.

    • Interpolation seems to make a difference in Flux. I didn't notice much change in SDXL-based models.

    • 💡 Use the eye detailer (carefully prompted) after this; it leaves a weird ring around them.

    • Will not handle occlusions (i.e. a big fat candy bar in her mouth)

    • See the note on NSFW

5. Hi Rez Fix, Detailers, Seed VR2, Post Processing:

  • Hi Rez Fix:

    Screenshot 2026-01-09 071135.png
    • There are several different versions of this, as each model works slightly differently:

    • Upscale Model:

      • which upscaler you use matters.

      • I prefer Anime sharp for Anime

      • Remacri or ESRGAN for realistic people

      • Nickelback for a good low-VRAM all-arounder

      • 4x UltraSharp for cyber/architectural/non-human.

      • NMKD if I want to increase the size of the image.

    • Image size slider:

      • This determines your tiling sizes (how big or small it breaks the image up to work with)

      • 1 is none (Heavy VRAM)

      • 1.5 is still okay (8gb)

      • 2.0 for 6gb or smaller machines (still kinda okay)
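As an illustration of what the slider does (the idea only, not the node's actual implementation): a divisor of 1 keeps the whole image in VRAM at once, while larger divisors trade quality headroom for more, smaller tiles:

```python
def tile_size(width: int, height: int, divisor: float):
    """Approximate working tile dimensions for a given divisor.

    divisor 1.0 -> whole image at once (heaviest VRAM use)
    divisor 2.0 -> tiles a quarter of the image area (lightest)
    """
    return int(width // divisor), int(height // divisor)

for d in (1.0, 1.5, 2.0):
    print(d, tile_size(2048, 2048, d))
```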

    • Upscaling factor (Hi Rez Fix):

      Screenshot 2026-01-09 072017.png
      • Under every aspect ratio node in the main area should be a slider that says either Prescale or Hi Rez fix. This determines how much upscaling happens in the Hi Rez fix model. I've found that moving it up too high causes issues down the line; 1-1.5 seems to be best.

      • Let SEED VR2 do most of the heavy upscaling work.

Detailers: (SFW version shown)

  • Screenshot 2026-01-09 050031.png
    • ZIT & Flux2 workflows:

      Screenshot 2026-01-09 072722.png
      • As the base model does not work with the detailers, you will have some version of the models here:

      • Checkpoints: Make sure to adjust the checkpoint to match the style you are using

        • i.e., if you are doing Anime, it will look weird if she has a realistic face, and vice versa

    • LoRa loader (area specific)

      • Screenshot 2026-03-29 103747.png

        These are area specific and specific to the detailer model. You do not necessarily need them if they are up top.

      • 💡 It is sometimes helpful on models like ZIT to use a crossover character LoRA, depending on what it is

    • Detailer (hand, eyes, face):

      Screenshot 2026-01-09 073219.png
      • The Ultralytics detector determines what is being captured

      • Denoise slider: This is the amount of noise added to each Image. The more noise, the more it changes the actual image.

        • I typically set it at 40-50%

      • Prompt (e.g. eyes): This is really good for adding detail to eyes (e.g. piercing green eyes, looking at viewer, half closed). Leave blank in normal circumstances.

    • Expression Detailer (deprecated in most models)

      Screenshot 2026-01-09 080501.png
      • There are two options:

        • Use the image and it will match it (turn the switch on).

        • Adjust the settings.

        • โš ๏ธ Will give you an error if there is no image (when not bypassed).

        • ⛔✋🛑 This is for human faces only! It will cause an OOM error (right after the eye detailer, before it even starts) if you push something else through it. Make sure to bypass it.

SEED VR2 (Upscaler):

  • Screenshot 2026-03-29 100032.png
    • The first time this runs, it will download the models, so it takes a few minutes:

    • The Upscale factor is controlled in the Aspect Ratio node in the main area:

      Screenshot 2026-01-09 072017.png

    • VAE settings: Encode and decode tiles determine the size of the tile it works with. The smaller the number, the lower the VRAM it requires, but the more overlap and integrity it loses.

    • See instructions in "Notes"

    • High VRAM users: Turn off Tile encode and Decode.

Post Processing (my favorite):

  • Screenshot 2026-01-09 050618.png
    • This takes an image from okay to Epic. I'll touch on the high points, but read the notes in the lower right corner for more detail.

    • Smart Effects (Denoise):

      Screenshot 2026-01-09 075711.png
      • Either in the upper left-hand corner or as the last node (later versions have it in the upper left-hand corner). This node removes excess noise from an image.

    • Color Match to Image:

      Screenshot 2026-01-09 080029.png
      • This will match to whatever image you have loaded.

      • โš ๏ธ Will give you an error if there is no image (when not bypassed)

    • Read the notes in that section for more information.

    • 📷 Optical Realism

      Screenshot 2026-03-29 104153.png
      • These are the basic settings for it. I HIGHLY suggest visiting the website to understand it.

Save Nodes:

  • Screenshot 2026-03-29 100236.png
    • Nothing saves unless it is turned on!

    • The "Create Subfolder" option names a subfolder within the Metadata folder after the prefix you named your file.

    • You can change the other folder locations by just renaming them (the tan nodes in the pic above)

    • The Metadata node automatically saves the information to post on Civitai

      • Note: ComfyUI broke the node I normally use, which has caused problems across old workflows. I am slowly phasing them out.

      • If you run into an issue with the "save with metadata" node in the older workflows, you can just replace it with a normal "save image" node.

6. Troubleshooting errors & Common issues:

  • ๐Ÿ›‘โ›”๐Ÿšซโš ๏ธ I have found that almost all issues are related to the following:

    • You need to update.

      • Open your manager and click "Update all"

    • You are on the desktop version

      • Copy your log and paste it into Grok or Claude (Grok does a better job) and follow the instructions.

  • How to deal with "Import failed" errors

    • Your Windows Manager shows you something like this:

      Screenshot 2026-02-07 204256.png
    • Go to the subfolder in your custom_nodes folder that matches the name listed in blue under the "Import failed" tag

    • Look for this file

      Screenshot 2026-02-07 205129.png
    • Right-click and choose "Open in Terminal" from the dropdown menu:

      Screenshot 2026-02-07 205420.png
    • Type "python.exe -m pip install -r requirements.txt", then hit Enter

      Screenshot 2026-02-07 205713.png
    • It will launch and fulfill the requirements. Pay attention to what the colored text says at the bottom. If you restart ComfyUI and the module does not launch, then feed your log to Grok or Claude to decipher.

      Screenshot 2026-02-07 205744.png


Basic Troubleshooting issues are listed below.

  • โš ๏ธ Florence model precision and attention matters, even on large cards. It shoudl push out a prompt in under 5 seconds even on low VRAM. If you are having errors, mess with those settings.

  • Black or Noise only images

    • If you are using Flux2, Z-Image Base, or Klein, turn off Sage Attention. This includes if you have it built into your startup (portable).

    • Your steps are too low

    • Your set clip last layer is at -1 when it should be at -2 (or the other way around)

  • Image empty (4 locations)

    • Load Image (or I2I)

    • Load Mask

    • Expression

    • Color Match (post processing)

  • "ModelPatchLoader node: Cannot access local variable 'model' where it is not associated with a value"

    • Update ComfyUI to the nightly version, which includes the necessary fixes:

      1. Open ComfyUI Manager (if installed).

      2. In the left column under "Update", switch from "ComfyUI Stable Version" to "ComfyUI Nightly Version".

      3. Click Update/Restart ComfyUI.

      4. Fully restart ComfyUI after the update.

      Alternative methods if using portable/manual install:

      • Run "git pull" in your ComfyUI directory to fetch the latest changes.

      • Or run the update_comfyui.bat (Windows) or equivalent update script.

  • Impact Pack Issues:

    • 'DifferentialDiffusion' object has no attribute 'apply'

      • This happens before the detailers.

      • You need to update the Impact Pack

      • If this does not work for you, place a Differential Diffusion node between the model and the model input.

        Screenshot 2026-02-04 204537.png
  • AILab_QwenVL_PromptEnhancer:

    • The model should be 4B Instruct; use the RAM-friendly version.

  • AttributeError: 'coroutine' object has no attribute 'outputs'.

    • This is the "Save with Metadata" node in the save section. ComfyUI broke it in March of 2026. I will be replacing it in the next update. You can swap it for a regular Save Image node.

  • [GetNode] ✗ Variable 'Upscale Image' not found! Available: Height, Width, Upscale factor. Tip: Make sure SetNode runs BEFORE GetNode in the graph.

    • You need to update your nodes. This error is specific to KJ nodes, but they automatically update, so most likely you have more than that which needs to be updated. Open the manager and click "Update all".

  • !!! Exception during processing !!! PytorchStreamReader failed reading zip archive: failed finding central directory

    Traceback (most recent call last):

    File "G:\comfy\comfy_v0.10.0-default_cu\ComfyUI\execution.py", line 518, in execute

    output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)

    • The SAM model is outdated for the detailers. This is a new issue as of Jan 31st, 2026.

  • Anything Everywhere nodes:

    Screenshot 2026-01-24 110420.png
    • These broadcast several items throughout the workflow. You can tell they are working when what they are sending is highlighted like so:

      Screenshot 2026-01-24 110646.png
    • If they are not highlighted, make sure you have updated.

  • Florence will give you an error if you do not have "Load Image" turned on

    • 💡 The (down)load Florence model should have a dropdown menu if it is blank. Select the "Large Prompt Gen" option. It should automatically download it for you.

  • The "Save Mask" node will turn red and give you an error if Controlnet is off and you have "Save Mask" switched on.

  • OOM errors

    • Hi res fix:

      • turn the upscale down or increase the tiling

    • Detailers:

      • It happens occasionally, especially with the face one. Just hit run again.

    • Seed VR2

      • Read the notes in that area.

      • Decrease upscale resolution

  • Flux 2 workflow:

    • You will get Mat errors if you do not have the correct text encoder and model combination

      • Flux 2 Dev goes with Mistral

      • 2 Klein 4b goes with Qwen_4b

      • 2 Klein 9b goes with Qwen_8b

    • Loras not working:

      • At the time of this publication, Flux 2 Dev LoRAs do not work with the Klein models. You need Klein LoRAs.

  • ZIT all in one:

    • When using the checkpoint: AttributeError: 'Linear' object has no attribute 'weight'

      • This stems from an issue with ComfyUI. It is a mismatch between CLIP encoders, and it happens at the detailers. Just restart.

7. Advanced Options:

  • This workflow is modular.

    • You can take any part of it and move it to another workflow.

    • You can change the order in which groups are processed

    • You can add or delete sections entirely.

  • Getset Nodes:

    Screenshot 2026-01-09 093020.png
    • Each section has a set of nodes that are reliant on each other. Common nodes include:

      • Seed

      • Scheduler

      • Prefix

      • Height

      • Width

      • etc.

    • Module swap:

      Screenshot 2026-01-09 093350.png
      • In each section is a Get Image and a Set Image node. You can change the order or completely delete a section simply by adjusting these. For example:

        • The image above is the detailer. It gets its Image from the Hi Res (Upscale Image) then labels it as Detail Image.

        • If I were to change the GetNode to another module (say Seed VR2), it would process it after that. Bear in mind you would have to change the other module as well.

      • You can also completely remove modules in the same fashion.
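The Get/Set mechanic can be sketched as a simple named-slot registry. This is a toy model of the pattern, not the KJ nodes' real code, and it also shows why a "Variable not found" error appears whenever a Get runs before its Set:

```python
slots = {}

def set_node(name, value):
    # SetNode: publish a value under a name for the rest of the graph
    slots[name] = value

def get_node(name):
    # GetNode: read a published value; fails if the Set hasn't run yet
    if name not in slots:
        raise KeyError(f"Variable '{name}' not found! Available: {sorted(slots)}")
    return slots[name]

# Hi Rez fix publishes its output, the detailer picks it up...
set_node("Upscale Image", "<hi-res image>")
print(get_node("Upscale Image"))

# ...and re-pointing the detailer's Get at a different slot name
# is all it takes to reorder or remove a module.
```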

Summary

I hope that this was helpful and covered everything. If you have any questions, please message me.

Also, if there are errors in this or you have an idea on how to improve it, I always welcome feedback.

Instagram: https://www.instagram.com/synth.studio.models/

This represents many hours of work. If you enjoy it, please 👍 like, 💬 comment, and feel free to ⚡ tip 😉

Thanks,

ddeaa3a7-74e1-4be6-bec0-adbcbb4bfea4.png

"True Nothing is. Permitted Everything is"- Yoda Ezio, Assassin's Wars

Jay (Lonecat)



