
Kobold LLM Prompter Installation & Setup Guide


This guide will help you get my node working and show you how to transform simple ideas into image prompts using local Large Language Models (LLMs).

With Version 1.4, you now have absolute control with the new User Custom Mode and Dynamic Multi-Phrase Filtering.

🛠️ 1. Quick Installation & Setup

A. The Backend: KoboldCpp

  1. Download KoboldCpp (GitHub Link).

  2. The Model: Download a model in GGUF format.

  3. Run KoboldCpp, load your GGUF file, and ensure the API is active (default: http://127.0.0.1:5001).
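To check that the backend is reachable before wiring up the node, you can hit KoboldCpp's KoboldAI-compatible HTTP API directly. This is a minimal sketch (the URL and parameters mirror the defaults above; `max_length` is just an example value):

```python
import json
import urllib.request

KOBOLD_URL = "http://127.0.0.1:5001"  # default API address from step 3

def build_payload(prompt, max_length=120):
    # Minimal request body for the KoboldAI-style generate endpoint.
    return {"prompt": prompt, "max_length": max_length}

def generate(prompt, max_length=120):
    # POST to /api/v1/generate and return the generated text.
    req = urllib.request.Request(
        KOBOLD_URL + "/api/v1/generate",
        data=json.dumps(build_payload(prompt, max_length)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"][0]["text"]
```

If the call returns text, the API is live and the node will be able to talk to it.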

B. The ComfyUI Node (Simple Setup)

  1. Navigate to your ComfyUI directory: ComfyUI/custom_nodes/.

  2. Just drop the KoboldLLMPrompter.py file directly into the custom_nodes folder.

  3. Restart ComfyUI.

C. The Wildcards Folder

To use the built-in wildcard system, the node needs a place to look for your text files:

  1. Go to your main ComfyUI root folder.

  2. Create a new folder named wildcards.

  3. Drop your .txt files (e.g., colors.txt, styles.txt) inside this folder.

    • Note: Version 1.4 supports subfolders! You can organize them like /wildcards/landscapes/mountains.txt.
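The steps above boil down to a few commands, run from your ComfyUI root (file names are just examples):

```shell
# Create the wildcards folder, plus an optional subfolder (v1.4+)
mkdir -p wildcards/landscapes

# One entry per line, like any wildcard file
printf 'red\nblue\ngreen\n' > wildcards/colors.txt
printf 'snowy peak\nrolling hills\n' > wildcards/landscapes/mountains.txt
```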


🧠 2. Generation Modes

Choose the right "brain" for your model:

  • SDXL (Tags): Perfect for Pony/XL models. Generates descriptive Danbooru-style tags.

  • Natural Sentence: Optimized for Flux.1, SD3, and Midjourney-style prompting.

  • Z-Engineer: High-detail technical staging (200+ words) focusing on optics and textures.

  • Thinking: Designed for "Reasoning" models. It automatically hides the <think> process.

  • User Custom (New!): Use the sys_msg input to provide your own instructions. Total freedom.


⚡ 3. Integrated Smart Wildcards

No extra nodes required. Just type directly into the user_input:

  • Basic: __colors__ picks one random line from colors.txt.

  • Multi-Pick: __styles__+3 picks three unique items from the list.

  • Brackets: {cat|dog|bird} picks one randomly using MD5-hashing.

  • Stable Seed Logic: If you use a Fixed Seed, your wildcard choices stay the same every time—perfect for consistent testing!

  • Auto-Pooling: The engine cycles through your entire text file before repeating a word. No more "blue eyes" five times in a row.
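To make the syntax concrete, here is an illustrative sketch (not the node's actual code) of how `__name__`, `__name__+N`, and `{a|b}` expansion can be made seed-stable: brackets resolve via an MD5 hash of the seed plus the options, and list picks come from a seeded RNG, so a fixed seed always reproduces the same choices. Auto-pooling is omitted for brevity.

```python
import hashlib
import random
import re

def pick_bracket(options, seed):
    # Deterministic {a|b|c} choice via MD5 of seed + options.
    h = hashlib.md5(f"{seed}:{'|'.join(options)}".encode()).hexdigest()
    return options[int(h, 16) % len(options)]

def expand(text, wildcards, seed):
    rng = random.Random(seed)  # fixed seed -> identical picks every run
    # {cat|dog|bird} -> one option
    text = re.sub(r"\{([^{}]+)\}",
                  lambda m: pick_bracket(m.group(1).split("|"), seed),
                  text)
    # __name__ -> one line; __name__+N -> N unique lines
    def sub_wild(m):
        name, count = m.group(1), int(m.group(2) or 1)
        return ", ".join(rng.sample(wildcards[name], count))
    return re.sub(r"__(\w+)__(?:\+(\d+))?", sub_wild, text)
```

Calling `expand("a __colors__ {cat|dog}", {"colors": ["red", "blue"]}, seed=42)` twice returns the same string both times.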


🧹 4. Advanced Filtering (v1.4)

AI models sometimes "chatter" (e.g., "Here is a prompt for you...").

  • filter_plus: Manually ban specific words or phrases.

  • How to use: Enter terms separated by commas (e.g., Assistant, Prompt:, my creative result). The engine will dynamically strip these from the final text.
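A minimal sketch of what such phrase stripping might look like (this is illustrative, not the node's actual implementation):

```python
import re

def strip_chatter(text, filter_plus):
    # filter_plus: comma-separated phrases, e.g. "Assistant, Prompt:, my creative result"
    for phrase in (p.strip() for p in filter_plus.split(",")):
        if phrase:
            # Remove every case-insensitive occurrence of the phrase.
            text = re.sub(re.escape(phrase), "", text, flags=re.IGNORECASE)
    # Collapse leftover double spaces and trim.
    return re.sub(r"\s{2,}", " ", text).strip()
```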


🚀 5. Pro Workflow: The "Prompt Factory"

Don't just generate one image—build a library!

  1. Connect the Prompter to my Wildcard Saver node.

  2. Set your ComfyUI batch count to whatever you like (I usually go with 32).

  3. The LLM will write 32 unique, high-quality prompts and save them to a file.

  4. Later, use a simple Wildcard Loader to feed those "curated" prompts into your generator.

Tip: Since the text doesn't show up inside the 'Clip Text Encode' node box, use a 'Show Text' node or check the Console Log to see your results!
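The save/load halves of this loop can be sketched in a few lines. This mimics the plain one-prompt-per-line layout that wildcard files use (the file name is just an example, not what the Wildcard Saver actually writes):

```python
import random
from pathlib import Path

def save_prompts(prompts, path="wildcards/curated.txt"):
    # Append one prompt per line, same layout as any wildcard file.
    with open(path, "a", encoding="utf-8") as f:
        for p in prompts:
            f.write(p.strip().replace("\n", " ") + "\n")

def load_random_prompt(path="wildcards/curated.txt", seed=None):
    # Pick one curated prompt; pass a seed for a repeatable choice.
    lines = [l for l in Path(path).read_text(encoding="utf-8").splitlines() if l.strip()]
    return random.Random(seed).choice(lines)
```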
