
SPYBG's Toolkit for Digital Artists

Checkpoint Trained
Mar 1, 2023
Base Model
SD 1.5
Steps: 380,000
Epochs: 100
Trigger Words
tk-char, tk-env

SPYBG's ToolKit for Digital Artists

Official YouTube Channel: [CLICK HERE]

Patreon: [CLICK HERE]


Hello everyone, my name is Valentin, from the Bulgarian AI Art Community, but people know me as SPYBG. I'm a 3D Character Artist by profession and have been working in this field for many years. If you're curious about what I do professionally, you can find my ArtStation here:

Like many of you, I started experimenting with AI when it first came out. I wanted to create something that would help with my creativity for my personal projects, and eventually I saw the potential for artists to use what I was making in a professional environment. So for the last two months I've been creating custom datasets for characters, and after a request from a close friend of mine, a Technical Lead at a studio that makes environments, I decided to build an environment dataset for my custom model as well.

Since I know a lot of artists who got upset about "people using their art", I went in a different direction: all of the datasets (training images) for this model were made by me, and it took a lot of time to make them. I used AI tools to create what I needed, so all of my datasets (for characters and environments) are AI-generated; no other artist's work was used in the making of this model, only my own input.

I trained my model for 100 epochs with 1,926 images, 194,000 steps in total. (Yes, I know it's a lot, but the results speak for themselves.)

Character Dataset: 766 custom images made by me.

Environment Dataset: 1,160 custom images made by me.

Special thanks to Suspirior! He helped me with tips, tricks, and ideas, and he was the first to beta-test my model. Big thanks, buddy! I'll include some of his tests here as well.

Tips for using my model:

I recommend using these settings; they give the best results, at least for me. But feel free to experiment.

Sampler: DPM++2M Karras

Steps: 150 (lower step counts also work, but for this training data 150 works best based on my testing)

Recommended Resolution: 768x768. (The model I used as a base for training is a custom-modified Protogen 3.4 merged with older versions of my toolkit (v2.0), and I trained it with 768x768 datasets, so I recommend 768x768, 768x1280, or higher resolutions.)

Note: with version 4.0 and above I used the basic v1-5-pruned model and fine-tuned it properly.

CFG Scale: 5 ~ 7 works best

Trigger words: tk-char (for characters), tk-env (for environments). Why "tk"? It stands for Toolkit.
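For anyone who'd rather script generation than use a UI, here is a rough sketch of the settings above using the Hugging Face diffusers library. This is my own illustration, not the author's workflow (the card assumes Automatic1111), and the checkpoint path is hypothetical:

```python
# Recommended generation settings from the tips above, as diffusers kwargs.
SETTINGS = {
    "num_inference_steps": 150,   # Steps: 150
    "guidance_scale": 6.0,        # CFG Scale 5~7; midpoint chosen here
    "width": 768,                 # recommended resolution
    "height": 768,
}

def generate(prompt, model_path="./spybg_toolkit.safetensors"):
    # Imports kept local so the sketch reads without diffusers installed.
    # model_path is a hypothetical local copy of the checkpoint.
    import torch
    from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

    pipe = StableDiffusionPipeline.from_single_file(model_path, torch_dtype=torch.float16)
    # "DPM++ 2M Karras" = multistep DPM-Solver with the Karras sigma schedule.
    pipe.scheduler = DPMSolverMultistepScheduler.from_config(
        pipe.scheduler.config, use_karras_sigmas=True
    )
    pipe.to("cuda")
    return pipe(prompt, **SETTINGS).images[0]

# Usage sketch:
# generate("photograph of (((male))) tk-char warrior, highly detailed").save("out.png")
```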

IMPORTANT: For the best results when creating characters, use my model in img2img with the images provided in the templates directory; you'll get much cleaner, more professional-looking images. txt2img is great for environments, but for characters it can be heavily unpredictable, and when making character concept art we want consistency. So I personally recommend using my template images, or any of your own. That's why I've provided several character sheets made by me, to get more consistent results.

Example prompts:

CHARACTER examples:

"photograph of (((male))) tk-char warrior, highly detailed, award winning image, 16k"


"photograph of (((male))) tk-char style warrior, highly detailed, award winning image, 16k"

"photograph of (((female))) tk-char warrior, highly detailed, award winning image, 16k"


"photograph of (((female))) tk-char style warrior, highly detailed, award winning image, 16k"

While you can use tk-char by itself as a trigger, you can also use tk-char style. Try them both and see what results you get.

Note: Include (((male))) or (((female))) in front of tk-char to specify what kind of character you want, then use whatever you like to refine the prompt. Also, keep your prompts short; longer prompts can be cool, but check out some of the templates from my images and you'll see how much you can get with very little.
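The prompt pattern in the examples above can be sketched as two tiny helpers (plain Python, names are my own invention, not part of any tool):

```python
def character_prompt(gender, subject, style_suffix=False):
    """Build a tk-char prompt: gender emphasis first, then the trigger word."""
    trigger = "tk-char style" if style_suffix else "tk-char"
    # Triple parentheses are Automatic1111 emphasis syntax (stronger weighting).
    return (f"photograph of ((({gender}))) {trigger} {subject}, "
            f"highly detailed, award winning image, 16k")

def environment_prompt(description):
    """Build a tk-env prompt; the trigger word activates the environment data."""
    return f"photograph of tk-env {description}, award winning image, highly detailed"

print(character_prompt("male", "warrior"))
# -> photograph of (((male))) tk-char warrior, highly detailed, award winning image, 16k
```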

Also, here is a link to some of my "demo" images. Use those as templates in img2img, or use any of your own images; mine will give you good results if you're making character concept art. (There are two versions available: a basic full-body sheet with different proportions and silhouettes at a 1:1 aspect ratio, and a closeup sheet with head variations at a 2:1 aspect ratio.)

Link to template images: [DOWNLOAD]
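As a concrete reading of the two template formats above, here is a tiny sketch (helper names are my own) that maps each sheet to an img2img output size built on the model's preferred 768px base dimension:

```python
# The two template sheets described above, with img2img sizes derived
# from the recommended 768px base resolution.
TEMPLATES = {
    "full_body": {"aspect": (1, 1), "size": (768, 768)},    # 1:1 full-body sheet
    "closeup":   {"aspect": (2, 1), "size": (1536, 768)},   # 2:1 head-variation sheet
}

def template_size(kind):
    """Return (width, height) to use for an img2img pass over the chosen sheet."""
    return TEMPLATES[kind]["size"]

print(template_size("closeup"))  # -> (1536, 768)
```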

Environment examples:

"photograph of tk-env ancient environment style, Persian city, with people walking in it, in ancient Persia, with palm trees in the city, and flowers everywhere, award winning image, highly detailed"

Just include tk-env in your prompt to activate the trained data.

I recommend adding negative prompts for best results. Any will work, but here is the one I use.

NEGATIVE PROMPT: (((signature))), (((text))), (((watermarks))), deformed eyes, close up, ((disfigured)), ((bad art)), ((deformed)), ((extra limbs)), (((duplicate))), ((morbid)), ((mutilated)), out of frame, extra fingers, mutated hands, poorly drawn eyes, ((poorly drawn hands)), ((poorly drawn face)), (((mutation))), ((ugly)), blurry, ((bad anatomy)), (((bad proportions))), cloned face, body out of frame, out of frame, bad anatomy, gross proportions, (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), (fused fingers), (too many fingers), (((long neck))), tiling, poorly drawn, mutated, cross-eye, canvas frame, frame, cartoon, 3d, weird colors, blurry

Note: With the latest release of my model (v4.5) you don't need to use any negative prompts (yes, you heard me correctly), but if you still want to use some, these are a good starting point.
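The nested parentheses in the negative prompt above are Automatic1111 emphasis syntax: each enclosing pair multiplies a token's attention weight by roughly 1.1. A minimal sketch of that rule (not A1111's actual parser, just an illustration):

```python
def emphasis_weight(token):
    """Strip enclosing '( )' pairs and return 1.1 ** depth as the approximate
    attention weight A1111 applies, e.g. '(((text)))' -> ~1.331."""
    depth = 0
    while token.startswith("(") and token.endswith(")"):
        token = token[1:-1]
        depth += 1
    return round(1.1 ** depth, 3)

print(emphasis_weight("(((watermarks)))"))  # -> 1.331
```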


VAE: I recommend using the base SD 1.5 VAE from Stable Diffusion for better results.


SD Upscale & Ultimate SD Upscale: If you want to upscale a generated image, I recommend the Automatic1111 SD Upscale script with a denoising strength of 0.35, a scale of 2, and the R-ESRGAN General 4xV3 upscaler.

For me this gives the best results.
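Dimensionally, what that pass does is upscale 2x and then re-diffuse the result tile by tile at the given denoising strength. A small sketch of the arithmetic (my own illustration, not the script's code; the 768px tile size is an assumption to match the model's native resolution, and is configurable in the script):

```python
from math import ceil

def upscale_plan(width, height, scale=2, tile=768):
    """Return the upscaled output size and how many tiles get re-diffused."""
    out_w, out_h = width * scale, height * scale
    tiles = ceil(out_w / tile) * ceil(out_h / tile)
    return (out_w, out_h), tiles

print(upscale_plan(768, 768))  # -> ((1536, 1536), 4)
```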


Since my model is based on SD 1.5, all embeddings made with the 1.5 model will work fine with my custom model. I'll include some of the great ones with links below and update the list as I go.


[SPYBGTK-C-Enh] - My own tool, designed to further improve your character creations when used in combination with my model.

Note: Lower the strength of the LoRA embedding so it fixes some things on your characters without overtaking the design you're going after.

[CharTurner] - Great for generating character concepts from front, side, and back views. (Use it in combination with the (Front_Side_Back) sheet from my template images for even better results!)

Note: My model now supports multiple views of the same character when creating an image in txt2img, but still check out this addon, it's great!
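In Automatic1111 a LoRA is activated inline in the prompt with a `<lora:name:weight>` tag, and lowering the weight is how you keep it from overpowering your design, as noted above. A tiny sketch (the helper is my own; the tag name is just the LoRA's file name):

```python
def lora_tag(name, weight=0.6):
    """Format an A1111 inline LoRA activation tag at a reduced strength."""
    return f"<lora:{name}:{weight}>"

# Usage sketch: append the tag to a normal tk-char prompt at lowered strength.
prompt = "photograph of (((male))) tk-char warrior, highly detailed " + lora_tag("SPYBGTK-C-Enh", 0.5)
print(lora_tag("SPYBGTK-C-Enh", 0.5))  # -> <lora:SPYBGTK-C-Enh:0.5>
```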


Feel free to use/merge and experiment with my model for anything you want.

If you want to credit me for using it, feel free, but it's not required. All I want is for people and artists to have something they can use in a production pipeline, or just experiment with for fun.

This is the closest I got to making it a possibility.

And yes, you can train this model further on your own images of yourself or anything you want, though I would recommend making TI embeddings of your own images for optimal results.

P.S. Share your results, I would love to see what you guys make!


Your friendly neighborhood 3D Character Artist