ComfyUI Workflow
Requires: ComfyUI_SmartLML v3, ComfyUI_Eclipse
It's a very basic workflow that uses 2 nodes from SmartLML, which now supports 8 backends:
transformers
gguf (llama-cpp-python)
ollama (docker)
vllm (docker)
sglang (docker)
llama.cpp (docker)
wd14 (onnx)
yolo
One for an image description (VLM / WD14) or text input (LLM)
One for detection (face, eyes, hands, and other areas ;) using Florence2 and Yolo (Qwen detections are underwhelming so far, but that part is still under construction)
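To give an idea of what the detection node produces, here is a minimal sketch of filtering labeled boxes by class and confidence, the kind of step a face/eyes/hands detector applies before building masks. All names and the tuple layout are hypothetical illustrations, not the actual SmartLML code:

```python
# Hypothetical detector output: (label, confidence, x1, y1, x2, y2) tuples.
# This is an illustrative sketch, not the SmartLML node implementation.

def filter_detections(dets, wanted_labels, min_conf=0.5):
    """Keep only boxes whose label is wanted and whose confidence clears the threshold."""
    return [d for d in dets if d[0] in wanted_labels and d[1] >= min_conf]

dets = [
    ("face", 0.91, 10, 10, 50, 50),
    ("hand", 0.42, 60, 60, 90, 90),  # below threshold, dropped
    ("eye",  0.77, 20, 18, 30, 26),
]
kept = filter_detections(dets, {"face", "eye"})
print(kept)
```

In practice the kept boxes would then be turned into masks or crops for inpainting; the threshold and label set would come from the node's widgets.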
Try "Claude" for image-to-prompt descriptions:
In most cases, the model doesn't mince words and describes exactly what it "sees".
transformers: https://huggingface.co/huihui-ai/Huihui-Qwen3.5-9B-Claude-4.6-Opus-abliterated
ollama: huihui_ai/qwen3.5-abliterated:9b-Claude
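If you want to test the ollama variant outside ComfyUI, a minimal sketch of the request you would send to ollama's standard `/api/generate` endpoint looks like this (the prompt text is just an example; the endpoint URL is ollama's default):

```python
import base64
import json

OLLAMA_URL = "http://localhost:11434/api/generate"  # ollama's default endpoint

def build_describe_request(image_bytes: bytes,
                           model: str = "huihui_ai/qwen3.5-abliterated:9b-Claude",
                           prompt: str = "Describe this image as a detailed prompt.") -> dict:
    """Build the JSON payload for an ollama vision request: the image goes
    in as a base64 string in the "images" list."""
    return {
        "model": model,
        "prompt": prompt,
        "images": [base64.b64encode(image_bytes).decode("ascii")],
        "stream": False,  # get one complete response instead of a stream
    }

payload = build_describe_request(b"\x89PNG fake image bytes")
print(json.dumps(payload)[:80])
```

POST that payload as JSON to `OLLAMA_URL` (e.g. with `urllib.request` or `requests`) and the description comes back in the `response` field.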


