How to get the most out of Alpha Tango:
In this guide, I'll go over small tips and rules for using the Alpha Tango model on most commonly used onsite image generation services. Following them will:
Improve character gen accuracy.
Improve detail generation.
Get the most out of low resolution generation.
Reduce resource bloat.
I'll do my best to keep it short, so let's begin.
Usage Tips for Specific Characters and Accurate, High Quality Output at Low Resolution:
Choose character for prompt (Renamon, Eevee, Lucario, etc.).
Visualize what details you want in the image, what pose, background, supporting details, and what perspective you desire.
Understand the amount of space (resolution) you're working with; use a resolution that conforms to the pose, and try to match the perspective so the main subject of the image is the center of focus. Full-body images at low resolution will come out at lower quality, since there are fewer pixels to work with. Choosing an incorrect perspective for the image's focus (e.g. a rear view when you want a view of the front, or of details that predominantly generate on the front of a character) will heavily alter image and gen accuracy, and runs a high chance of ruining the image.
For high resolution generation (exceeding 1536x1536), you can avoid deformation of the body by prompting for more details pertaining to the character and to what is in the image (full-body, lying on side on bed in bedroom, digitigrade legs, claws, paw on hip, feet, smiling, <background details>, etc.). The fewer details you leave open-ended at higher resolution, the less likely deformation is to occur.
To avoid unwanted multiple-character generation, adding "solo" or "solo focus" to the prompt will fix the issue the majority of the time.
Avoid prompting for details that are not visible at the angle or perspective being used.
Avoid Lora Usage.
Loras will heavily alter gen output to be more in line with the concept knowledge in the Lora's dataset. If the Lora has bad tagging, a bad or inflexible dataset, or a specific unwanted style, there's nothing to be done other than not using the Lora. Mixing multiple Loras or using high strengths can absolutely devastate the output of a gen, but even a single Lora can be the problem: some Loras are just bad or don't mesh well with the model being used. One incompatible Lora can ruin the image, and on most site generation services there's no fixing it other than going without. Sometimes, however, there is little to no information on a character you're prompting for, or you're trying to prompt for an item such as a gun or a sword. That's when using a Lora is necessary; just understand that in those cases, 9/10 times you're only going to get images that match the images in the dataset.
Know how to prompt for the character.
What do I mean by this? These examples show it:
These two images use the exact same settings; the only difference is the prompting tag for the character. As you can see, there's a big, visible difference between the images.
Let's do it again.
The difference is small, but on closer inspection you can see that accuracy has increased with the addition of the paw spikes and chest spike.
Let's do it again but with some additions.
Again, the result is the same: gen accuracy for the character has increased.
Why?
Typing <Lucario> will give you a Lucario, but there are hundreds of different OCs and thousands of different ways people have drawn Lucario. This can result in spotty accuracy and incorrect outputs.
Using this format will help guide the model and greatly increase gen accuracy with whatever character you are prompting for:
<subject_(media), subject_(media) <eye color>, subject_(media) <any complementary details>, <continue with prompt>>
Generally, you want to use the character's tags from sites such as E621, Rule34, or E926, as that will greatly improve accuracy. For large prompts, prompt for as many reinforcing details in that format as possible as the shot gets wider, more of the character becomes visible, and the gen gets more complicated.
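As a concrete sketch of the format above, here's a small Python helper that assembles a reinforcement-style prompt string. The helper itself is hypothetical (no generator expects it); only the output format matters, and the example tags are placeholders you'd swap for the character's actual E621 tags:

```python
def character_prompt(subject, media, details, rest=""):
    """Build a prompt in the <subject_(media), subject_(media) detail, ...>
    reinforcement format. Hypothetical helper -- the output string is what
    you'd actually paste into a generator's prompt box."""
    tag = f"{subject}_({media})"              # e.g. "lucario_(pokemon)"
    parts = [tag] + [f"{tag} {d}" for d in details]
    if rest:
        parts.append(rest)                    # remainder of the prompt
    return ", ".join(parts)

# Example: reinforcing eye color and a body detail, then continuing the prompt.
print(character_prompt("lucario", "pokemon",
                       ["red eyes", "chest spike"],
                       "solo, full-body"))
# lucario_(pokemon), lucario_(pokemon) red eyes, lucario_(pokemon) chest spike, solo, full-body
```

Each reinforcing detail repeats the character tag in front of it, which is the whole point of the format: every detail stays anchored to the character.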
With enough practice, you can get raw gens at 512x768 that look like this:
Not bad for 512x768, right?
And all without using any Loras. Just the prompt and negative embeddings.
How to get the most out of a Low Resolution (512x768, 768x512, 512x512) Generation:
Use these negative embeddings made by sopenit494 and place them in your negative prompt:
https://civitai.com/models/187118/quality-embeddings-for-furries
There are four files; if you are using an onsite generator that allows negative embeddings, you need to select all four resources. If you are generating locally, you already know what to do.
Know what style you are going for. Below are two good templates that can be added to and tweaked for good results with Alpha Tango:
"real e621, photorealistic, masterpiece, extreme quality, high quality, hi res, (by thebigslick, by oouna: 0.6), by syuro, by spectrumshift, by eupharrow, by kame 3, by spaal, (by cutesexyrobutts, by hioshiru: 0.8), (by zackary911: 0.5),"
"uploaded on e621, trending on Artstation, masterpiece, anime style, extreme quality, high quality, hi res, (by thebigslick, by oouna: 0.6), by syuro, by spectrumshift, by eupharrow, by kame 3, by spaal, (by cutesexyrobutts, by hioshiru: 0.8), (by zackary911: 0.5),"
Use Sampler: DPM++ 2M (Karras)
CFG/Guidance Scale: 7.5
Steps: 25-50
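To keep the settings above in one place, here's a minimal sketch as a Python dictionary. The key names follow the diffusers library's convention and are an assumption on my part; onsite generators and A1111-style UIs label these fields differently, and in diffusers "DPM++ 2M Karras" corresponds to the DPMSolverMultistepScheduler with Karras sigmas:

```python
# Baseline generation settings from this guide, keyed by diffusers-style
# parameter names (an assumption -- check your generator's own field names).
BASELINE_SETTINGS = {
    "sampler": "DPM++ 2M Karras",  # DPMSolverMultistepScheduler, use_karras_sigmas=True
    "guidance_scale": 7.5,         # CFG / Guidance Scale
    "num_inference_steps": 30,     # anywhere in the 25-50 range
    "width": 512,                  # low-res portrait; swap width/height for landscape
    "height": 768,
}

for key, value in BASELINE_SETTINGS.items():
    print(f"{key}: {value}")
```

Whatever interface you use, these five values are the ones worth matching.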
Avoid Lora usage unless absolutely necessary. The model is very robust and knows how to recall nearly every concept; you just need to know how to prompt it. Niche characters, specific highly uncommon fetishes/poses, vehicles, and weaponry are usually the only cases where a Lora should be used.
Use complementary poses and perspectives; do not prompt for details that are not realistically visible from the perspective used, as this can cause gen inaccuracies and use up limited pixel space on pointless gen errors.
Use the correct resolution for the image being generated. Though not necessary, for best results you should use a resolution that conforms best to the character's pose/anatomy.
Use the right prompt for the right model. Not all models are created equal; prompts that work perfectly for one model most likely won't work well with another, completely different model. Knowing this, and tweaking or making entirely new prompts, will help resolve any issues you have with generation.
Pick the right amount of steps for the complexity of the image, and choose the right CFG/Guidance Scale for the prompt. For a baseline, 25-50 steps with a CFG/Guidance Scale of 7.5 will give great results, settings easily attainable on the majority of onsite image generators.
And that's all there is to it.
No Loras needed, no detail enhancers, no VAEs, no upscalers or refiners.
Just use the right negative embeddings, sampler, CFG, steps, prompt, and follow the guide above for high-quality output on whatever subject/species/character you are prompting, and soon you'll be gen'ing like a pro!
...
I'm not sure how to end this, so here are some Eevees for the road:
Have a good day!