SPYBG's Toolkit for Digital Artists
Type | |
Downloads | 6,918 |
Last Update | Mar 19, 2023 |
Versions | 5 |
Base Model | SD 1.5 |
Tags | |
Trigger Words | tk-char tk-env |
Official YouTube Channel: [CLICK HERE]
Patreon: [CLICK HERE]
Hello everyone, my name is Valentin, from the Bulgarian AI Art Community, but people know me as SPYBG. I'm a 3D Character Artist by profession and have been working in this field for many years now. If you're curious about what I do professionally, you can find my ArtStation here: https://www.artstation.com/spybg
I was experimenting with AI like many of you when it first came out, and I wanted to create something that would help with my creativity for my personal projects. Eventually I saw the potential for artists to use what I was making in a professional environment, so for the last two months I've been creating custom datasets for characters. After a request from a close friend of mine, a Technical Lead at a studio that makes environments, I decided to make an environment dataset for my custom model as well.
Since I know a lot of artists got upset about "people using their art," I went in a different direction. All of my datasets (the training images I created for this) were made by me, and it took a lot of time to make them. But I was smart and used AI tools to create what I needed, so all of my datasets (for characters and environments) are AI-generated; no other artist's input was used in the making of this model, only my own.
I trained my model for 100 epochs with 1,926 images, for 194,000 steps in total. (Yes, I know it's a lot, but the results speak for themselves.)
Character Dataset: 766 custom made by me images.
Environment Dataset: 1160 custom made by me images.
Special thanks to Suspirior! He helped me with some tips, tricks, and ideas, and he was the first to beta-test my model, so big thanks, buddy! I'll include some of his tests here as well.
Tips for using my model:
I would recommend using these settings; they give the best results, at least for me. But feel free to experiment.
Sampler: DPM++2M Karras
Steps: 150 (lower step counts also work, but for this training data 150 works best based on my testing)
Recommended Resolution: 768x768. (The model I used as a base for training is a custom-modified version of Protogen 3.4 merged with an older version of my toolkit (v2.0), and I trained it on 768x768 datasets, so I recommend using 768x768, 768x1280, or higher resolutions.)
Note: with version 4.0 and above I've used the basic 1-5-pruned model and finetuned it properly.
CFG Scale: 5 ~ 7 works best
Trigger words: tk-char (for characters) and tk-env (for environments). Why "tk"? It stands for Toolkit.
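As a quick reference, the recommended settings above can be collected into a plain sketch. The key names here are illustrative only, not tied to any specific UI or API:

```python
# Hedged sketch: the recommended generation settings from above,
# gathered into a plain dict. Key names are my own, illustrative only.
recommended_settings = {
    "sampler": "DPM++ 2M Karras",
    "steps": 150,               # lower also works; 150 tested best
    "width": 768,
    "height": 768,              # 768x1280 or higher also works
    "cfg_scale": 6,             # 5-7 works best
    "trigger_words": {"character": "tk-char", "environment": "tk-env"},
}
```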
IMPORTANT: If you want the best results when creating characters, use my model in img2img with the images I provided in the templates directory, in order to get much cleaner, more professional-looking images. While txt2img is great for environments, for characters it can sometimes be heavily unpredictable, and when making character concept art we want consistency. So I personally recommend using my template images or any of your own; that's why I've provided different character sheets made by me, to give you more consistent results.
Example prompts:
CHARACTER examples:
"photograph of (((male))) tk-char warrior, highly detailed, award winning image, 16k"
or
"photograph of (((male))) tk-char style warrior, highly detailed, award winning image, 16k"
"photograph of (((female))) tk-char warrior, highly detailed, award winning image, 16k"
or
"photograph of (((female))) tk-char style warrior, highly detailed, award winning image, 16k"
While you can use tk-char by itself as a trigger, you can also use tk-char style. Try them both and see what results you get.
Note: Include (((male))) or (((female))) in front of tk-char to specify what kind of character you want when writing the prompt. After that, use whatever you want to refine the prompt. Also keep your prompts short; while longer prompts can be cool, check out some of the templates from my images and you'll see how little it takes to get decent results.
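As a minimal sketch, the character prompt pattern above could be wrapped in a small helper. The function name and structure are my own for illustration, not part of the toolkit:

```python
def build_character_prompt(gender, subject, use_style=False):
    """Build a tk-char prompt following the pattern above.

    gender: "male" or "female" -- wrapped in triple parentheses
    for emphasis, as the note recommends.
    use_style: switch between the "tk-char" and "tk-char style" triggers.
    """
    trigger = "tk-char style" if use_style else "tk-char"
    return (f"photograph of ((({gender}))) {trigger} {subject}, "
            f"highly detailed, award winning image, 16k")

print(build_character_prompt("male", "warrior"))
# -> photograph of (((male))) tk-char warrior, highly detailed, award winning image, 16k
```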
Also, here is a link to some of my "demo" images. Use those as templates in img2img, or use any of your own images, but mine will give you good results if you're making character concept art. (There are two versions available: a basic full body with different proportions and silhouettes at a 1:1 aspect ratio, and a closeup with head variations at a 2:1 aspect ratio.)
Link to template images: [DOWNLOAD]
Environment examples:
"photograph of tk-env ancient environment style, Persian city, with people walking in it, in ancient Persia, with palm trees in the city, and flowers everywhere, award winning image, highly detailed"
Just include tk-env in your prompt to activate the trained data.
I recommend adding negative prompts for best results. Any will work, but here is the one I use:
NEGATIVE PROMPT: (((signature))), (((text))), (((watermarks))), deformed eyes, close up, ((disfigured)), ((bad art)), ((deformed)), ((extra limbs)), (((duplicate))), ((morbid)), ((mutilated)), out of frame, extra fingers, mutated hands, poorly drawn eyes, ((poorly drawn hands)), ((poorly drawn face)), (((mutation))), ((ugly)), blurry, ((bad anatomy)), (((bad proportions))), cloned face, body out of frame, out of frame, bad anatomy, gross proportions, (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), (fused fingers), (too many fingers), (((long neck))), tiling, poorly drawn, mutated, cross-eye, canvas frame, frame, cartoon, 3d, weird colors, blurry
Note: With the latest release of my model (v4.5) you don't need to use any negative prompts (yes, you heard me correctly), but if you still want to use some, these are a good starting point.
____________________________________________________________________________
VAE: I would recommend using the base SD 1.5 VAE from Stable Diffusion for better results.
____________________________________________________________________________
SD UPSCALE & Ultimate SD Upscale: If you want to upscale a generated image, I recommend using the Automatic1111 SD Upscale script with a denoising strength of 0.35, a scale of 2, and the R-ESRGAN General 4xV3 upscaler.
For me this gives the best results.
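As a rough sketch, those upscale settings correspond to something like the following. The key names only loosely mirror the Automatic1111 SD Upscale script's options and are my own labels:

```python
# Illustrative sketch of the recommended upscale configuration above.
# Key names are my own; map them to the matching fields in your UI.
upscale_settings = {
    "script": "SD Upscale",
    "denoising_strength": 0.35,   # called "noise strength" above
    "scale_factor": 2,
    "upscaler": "R-ESRGAN General 4xV3",
}
```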
____________________________________________________________________________
Since my model is based on SD 1.5, all embeddings made with the 1.5 model will work fine with my custom model. I'll include some of the great ones with links below and update the list as I go.
EMBEDDINGS:
[SPYBGTK-C-Enh] - My own tool, designed to improve your character creations even more when used in combination with my model.
Note: Lower the strength of the LORA embedding so it fixes some things on your characters without overtaking the design you're going for.
[CharTurner] - Great for generating character concepts from front, side, and back views. (Use it in combination with the (Front_Side_Back) templates from my template images for even better results!)
Note: My model now supports multiple views of the same character when creating an image in txt2img, but still check out this addon; it's great!
___________________________________________________________________________
Feel free to use/merge and experiment with my model for anything you want.
If you want to credit me for using it, feel free, but it's all right either way. All I want is for people and artists to have something they can use in a production pipeline, or just to experiment with for fun.
This is the closest I got to making it a possibility.
And yes, you can train this model on your own images of yourself or anything you want, but I would recommend making TI embeddings from your own images for optimal results.
P.S. Share your results; I'd love to see what you guys make!
Cheers!
Your friendly neighborhood 3D Character Artist
Valentin
Versions
Rating | (1) |
Downloads | 2,292 |
Uploaded | Mar 01, 2023 |
Base Model | SD 1.5 |
Steps | 380,000 |
Epoch | 100 |
Trigger Words | tk-char tk-env |
Hash | AutoV2 690CB24A47 |
Version 5.0 Patch notes:
Added float16 safetensor model.
Model was re-trained with larger dataset (made by me again) with 3800 images.
V5 is trained with Noise offset (now we have better contrast).
Added ability to create stylized/realistic video game props
Added ability to create stylized game icons
Added ability to create stylized/realistic monster creatures
Baked the VAE into the model in order to train it with noise offset for better results. (Highly recommended to turn off your default VAE in settings.)
Improved the diversity of training data for more interesting results
Rating | (4) |
Downloads | 1,414 |
Uploaded | Feb 11, 2023 |
Base Model | SD 1.5 |
Steps | 44,100 |
Epoch | 100 |
Trigger Words | tk-char tk-env |
Hash | AutoV2 150B8B8F48 |
Everything you see was done in txt2img.
Say wha...? Yes, now you can get 3 views of a character in txt2img. I've finetuned my data so you can achieve that; all you need to add is tk-char to trigger the effect.
Also, no negative prompts were used in the creation of any of these images (check their descriptions to see). The model is finetuned well enough that you don't have to use negative prompts; you can still use them if you want. It's up to you.
Version 4.5 Patch notes:
Added a functioning .safetensors file. (It took me too long to fix the previous version while I was working on this one, so I gave up making one for v4.0 and focused on fixing the issues and having a functioning one for v4.5 first, so everybody will be happy.)
tk-char now works as a trigger for "production quality" views for your characters in txt2img. You can get closeups, full body views, and 3 views of the same character.
To get closeup shots of your characters, use "portrait" at the beginning of your prompts for optimal results in txt2img, or use one of the templates in the templates directory for img2img as before.
The quality of environments is also improved in this release. I'll be releasing a new LORA embedding trained at a 16:9 aspect ratio for more cinematic results if you want to use it in img2img (coming soon).
Added 10 new templates to use in img2img if you prefer to work that way. Also, don't forget I've uploaded a ton of templates that may fit many of your needs when making concept art.
Rating | (4) |
Downloads | 1,379 |
Uploaded | Jan 28, 2023 |
Base Model | SD 1.5 |
Steps | 400,000 |
Epoch | 100 |
Trigger Words | tk-char tk-env |
Hash | AutoV2 F277C5BBA1 |
To show my gratitude to everybody who is using my model, this is for you: https://www.youtube.com/watch?v=HM5aWE-KuR8
I'll be making tutorials soon discussing my workflows and sharing some tips and tricks from an artist's point of view. Hopefully they'll help you with your creations.
Version 4.0 Patch notes:
changed trigger word from tk_char to tk-char
changed trigger word from tk_env to tk-env
Improved overall image generation even without trigger words (by finetuning the crap out of this model... but for best results, depending on whether you're making a character or an environment, I highly recommend using the proper tag).
Better character generation in the txt2img tab. (I've finetuned the model from scratch to produce more concept-art-like character poses, but for consistency I would still recommend using img2img with my templates. Speaking of...)
New templates to use in img2img. Also updated the link on the Model Information page.
Improved the environment training dataset. (I've created an additional 200 highly detailed images for finetuning, which improved overall image quality.)
Improved the character training dataset. (Created 200 additional highly detailed images to teach the model to create front and back views of the same character more effectively, which also improved character creation in the txt2img tab.)
Temporarily removed the .safetensors file. Reason: some users experienced difficulties using it, so I've removed it until I find a solution to the problem.
Fixed an issue that caused embeddings not to work properly with the model. (I've shifted some weights back from the original 1-5-pruned.ckpt, so now you should get proper results when using embeddings from Civitai or your own, if they were trained on the base 1-5-pruned model.)
Transitioned from the 1-5vae VAE file to vae-ft-mse-840000-ema-pruned. (It simply gives better results, so I highly recommend downloading that one.)
Created a YouTube channel. After many requests asking how I get so much detail in my images, I've decided to start making videos explaining my processes and showing some tips and tricks while we're at it, so feel free to subscribe if you're interested in that kind of content.
Started plotting world domination for our A.I. overlords (just kidding).
Rating | (12) |
Downloads | 1,580 |
Uploaded | Jan 10, 2023 |
Base Model | SD 1.5 |
Steps | 194,000 |
Epoch | 100 |
Trigger Words | tk_env tk_char |
Hash | AutoV2 19A0B2A815 |
Update: Added SAFETENSORS file format for everybody who wanted it.
Characters: Improved fidelity in character generation; now you'll get even cleaner details. Very useful when inpainting small areas in img2img.
Environments: Added a little bit of realism to the painterly look and feel of my model. Details of trees, buildings, and other objects will now look better.
Updated the templates folder for img2img. I've added a lot more images to the front_side_back folder that you can use with my model to get more professional-looking and cleaner results.
Rating | (4) |
Downloads | 258 |
Uploaded | Jan 08, 2023 |
Base Model | SD 1.5 |
Steps | 194,000 |
Epoch | 100 |
Trigger Words | tk_char tk_env |
Hash | AutoV2 750D41EDD7 |
Improvements over the previous versions of my toolkit, which were in small-scale beta testing by some friends.
Since the model is a modified version of ProtoGen 3.4 used as the base training model, prompts for cameras should work here as well.