My version of a Telegram bot for ComfyUI
text2image and image2image work; LoRA support is available.
Test bot: @stablecats_bot (not always online)
Github repo: https://github.com/zlsl/comfyui_telegram_bot
Bot usage
By default, when a plain prompt is received, an image of DEFAULT_WIDTH x DEFAULT_HEIGHT pixels is generated; a custom size can be specified in the prompt in the format WIDTHxHEIGHT.
The bot replies with two messages: a picture (with quality loss due to Telegram compression) and a PNG file in original quality.
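As an illustration only (not the bot's actual code), extracting an optional WIDTHxHEIGHT override from a prompt could look roughly like this, assuming the DEFAULT_* and MAX_* values from config.yaml:

```python
import re

DEFAULT_WIDTH, DEFAULT_HEIGHT = 512, 512   # DEFAULT_WIDTH / DEFAULT_HEIGHT from config.yaml
MAX_WIDTH, MAX_HEIGHT = 2048, 2048         # MAX_WIDTH / MAX_HEIGHT from config.yaml

def extract_size(prompt: str):
    """Return (clean_prompt, width, height); size may appear anywhere as e.g. '1024x512'."""
    match = re.search(r'\b(\d{2,4})x(\d{2,4})\b', prompt)
    if not match:
        return prompt, DEFAULT_WIDTH, DEFAULT_HEIGHT
    width = min(int(match.group(1)), MAX_WIDTH)
    height = min(int(match.group(2)), MAX_HEIGHT)
    clean = (prompt[:match.start()] + prompt[match.end():]).strip()
    return clean, width, height

print(extract_size("a cat in a hat 1024x512"))  # ('a cat in a hat', 1024, 512)
```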
If you send a picture to the bot, it is transformed according to the prompt (image2image); the /face and /upscale commands also work in this mode. Important! For img2img, ControlNet is used rather than classic denoise, which keeps the result as close as possible to the original.
To use your own negative prompt instead of the built-in one, append it to the message after the '|' separator.
A LoRA is applied by adding #LoRA_name to the prompt. For example, #vlozhkin
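A minimal sketch (illustrative, not taken from the bot's source) of how a message could be split into positive prompt, negative prompt, and LoRA keywords using the '|' and '#' conventions described above; the keywords would then be matched against the loras whitelist in config.yaml:

```python
import re

def parse_message(text: str):
    """Split a Telegram message into positive prompt, negative prompt, and LoRA keywords."""
    # Everything after the first '|' overrides the built-in negative prompt.
    positive, _, negative = text.partition('|')
    # '#name' tokens select LoRAs; names must exist in the config whitelist.
    loras = re.findall(r'#(\w+)', positive)
    positive = re.sub(r'\s*#\w+', '', positive).strip()
    return positive, negative.strip(), loras

print(parse_message("a portrait #vlozhkin in a garden | blurry, watermark"))
# ('a portrait in a garden', 'blurry, watermark', ['vlozhkin'])
```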
Installation
pip install -r requirements.txt
create config.yaml from the sample and set your Telegram bot TOKEN
run the bot with:
python comfyui_tgbot.py
How to get TOKEN:
Token is a string that authenticates your bot (not your account) on the bot API. Each bot has a unique token which can also be revoked at any time via @BotFather.
Obtaining a token is as simple as contacting @BotFather, issuing the /newbot command and following the steps until you're given a new token. You can find a step-by-step guide here.
Your token will look something like this:
4839574812:AAFD39kkdpWt3ywyRZergyOLMaJhac60qc
A working installation of ComfyUI with additional modules is required:
ComfyUI-Impact-Pack
ComfyUI_UltimateSDUpscale
You also need to install the segmentation model: face_yolov8m.pt
Upscaler: 4xNMKDSuperscale_4xNMKDSuperscale.pt
ControlNet model: control_v11f1e_sd15_tile.pth
Rename the config.yaml.sample file to config.yaml and customize it for yourself:
network:
  BOT_TOKEN: 'xxx:xxxxxx'
  SERVER_ADDRESS: "127.0.0.1:8188"
bot:
  TRANSLATE: True
  DENY_TEXT: "Access denied"
  HELP_TEXT: "You can use text in any language to generate
    By default, the image is created in a resolution of 512x512 pixels
    In the prompt you can specify the size WIDTHxHEIGHT. For example - 1024x512
    To add a negative prompt, add it to the end of the message using the '|' separator.
    Commands:
    /upscale .... - creates a high-resolution image
    /face .... - corrects facial defects"
comfyui:
  DEFAULT_MODEL: 'revAnimatedFp16_122.safetensors'
  DEFAULT_CONTROLNET: 'control_v11f1e_sd15_tile.pth'
  DEFAULT_VAE: 'vaeFtMse840000Ema_v10.safetensors'
  DEFAULT_UPSCALER: '4xNMKDSuperscale_4xNMKDSuperscale.pt'
  SCHEDULER: 'karras'
  SAMPLER: 'uni_pc'
  SAMPLER_STEPS: 30
  TOKEN_MERGE_RATIO: '0.6'
  CLIP_SKIP: '-1'
  CONTROLNET_STRENGTH: '1.0'
  DEFAULT_WIDTH: 512
  DEFAULT_HEIGHT: 512
  MAX_WIDTH: 2048
  MAX_HEIGHT: 2048
  BEAUTIFY_PROMPT: ',masterpiece, perfect, small details, highly detailed, best, high quality, professional photo'
  NEGATIVE_PROMPT: 'low quality, worst quality, embedding:badhandv4, blurred, deformed, embedding:EasyNegative, embedding:badquality, watermark, text, font, signage, artist name, text, caption, jpeg artifacts'
whitelist:
loras:
  - 'vlozhkin|vlozhkin3.safetensors|1|vlozhkin style illustration'
  - 'jh|jamie_hewlett_style.safetensors|1|jamie hewlett style'
  - 'minecraft|minecraft_square_style_v2-10.safetensors|1|minecraft square style'
  - 'giardino|Giardino_Style-13.safetensors|1|giardino style illustration'
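Each loras entry appears to follow the pattern 'keyword|filename|weight|prompt text', where keyword is what you reference as #keyword in a message, weight is presumably the LoRA strength, and the trailing text is appended to the prompt. A minimal illustrative parser (not taken from the bot's source):

```python
def parse_lora_entry(entry: str) -> dict:
    """Parse a whitelist line of the form 'keyword|filename|weight|prompt text'."""
    keyword, filename, weight, prompt = entry.split('|', 3)
    return {
        'keyword': keyword,       # referenced in messages as '#keyword'
        'filename': filename,     # LoRA file known to ComfyUI
        'weight': float(weight),  # assumed to be the LoRA strength
        'prompt': prompt,         # trigger text appended to the prompt
    }

print(parse_lora_entry('vlozhkin|vlozhkin3.safetensors|1|vlozhkin style illustration'))
```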
How to add your workflow
The following files are used:
t2i.json - basic text2image
t2i_upscale.json - text2image with upscale
t2i_facefix_upscale.json - text2image with upscale and face fix
i2i.json - basic image2image
i2i_upscale.json - image2image with upscale
i2i_facefix_upscale.json - image2image with upscale and face fix
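These are API-format workflow files. For reference, a minimal sketch of how such a file could be queued on ComfyUI over its standard /prompt HTTP endpoint (SERVER_ADDRESS as in config.yaml; this is an illustration, not the bot's exact code):

```python
import json
import urllib.request

SERVER_ADDRESS = "127.0.0.1:8188"  # SERVER_ADDRESS from config.yaml

def queue_workflow(path: str) -> dict:
    """Load an API-format workflow JSON and queue it on the ComfyUI server."""
    with open(path, encoding="utf-8") as f:
        workflow = json.load(f)
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    request = urllib.request.Request(f"http://{SERVER_ADDRESS}/prompt", data=payload)
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())  # contains the prompt_id of the queued job

# queue_workflow("t2i.json")
```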
In ComfyUI you need to enable dev mode (in the settings); the Save (API Format) menu item will then appear.
In the workflow you need to:
set the text of the ClipTextEncode node used for the positive prompt to the placeholder value 'positive prompt'
set the text of the ClipTextEncode node used for the negative prompt to the placeholder value 'negative prompt'
for image2image, set the "inputs" - "image" value of the LoadImage node (the source image block) in the JSON file
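A sketch of how such placeholder substitution could be done on an API-format workflow. Assumptions: the placeholder texts are exactly 'positive prompt' / 'negative prompt' as described above, and the node class names are the standard ComfyUI ones (CLIPTextEncode, LoadImage); this is an illustration, not the bot's actual code:

```python
import json

def prepare_workflow(path, positive, negative, image_name=None):
    """Replace placeholder prompts and the LoadImage source in an API-format workflow."""
    with open(path, encoding="utf-8") as f:
        workflow = json.load(f)
    for node in workflow.values():
        if node.get("class_type") == "CLIPTextEncode":
            if node["inputs"].get("text") == "positive prompt":
                node["inputs"]["text"] = positive
            elif node["inputs"].get("text") == "negative prompt":
                node["inputs"]["text"] = negative
        elif node.get("class_type") == "LoadImage" and image_name:
            node["inputs"]["image"] = image_name  # file previously uploaded to ComfyUI
    return workflow

# workflow = prepare_workflow("i2i.json", "a cat in a hat", "blurry, watermark", "input.png")
```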