Generating popular characters is almost never a problem: in the positive prompt, alongside the description of the appearance, you simply include the character's name. This doesn't work for unpopular characters, though. Using a LoRA is also problematic, since there is very little artwork of this character and it varies greatly in style. Overall the appearance roughly matches my expectations, but the character's face turns out extremely unlike the original. I use AbyssOrangeMix2 and similar models to generate, but I can't get the expected result. Perhaps someone can help me with advice.
You can try training a textual inversion. Those can work with as few as 3 consistent images. If you plan on using it with only a single model, then train on that model. If you want to try to create one that will work on most models, train on one of the official Stable Diffusion releases.
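The core idea behind textual inversion is that the model's weights stay frozen and only a handful of new embedding vectors are optimized. A toy NumPy sketch of that mechanic (the frozen "encoder" matrix, the target, and all dimensions are made-up stand-ins, not the real CLIP/U-Net pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

dim, num_vectors = 8, 2          # toy embedding size and vector count
W = rng.normal(size=(dim, dim))  # frozen "text encoder" weights: never updated
target = rng.normal(size=dim)    # stand-in for the training signal from images

# The only trainable parameters: the new pseudo-token's embedding vectors.
vectors = rng.normal(size=(num_vectors, dim))

def loss(v):
    # Encode the averaged pseudo-token embedding and compare to the target.
    return float(np.sum((W @ v.mean(axis=0) - target) ** 2))

lr = 0.01
initial = loss(vectors)
for _ in range(200):
    # Gradient of the squared error w.r.t. each vector (up to a constant factor).
    residual = W @ vectors.mean(axis=0) - target
    grad = (W.T @ residual) / num_vectors
    vectors -= lr * grad          # update the embeddings only; W stays frozen

print(initial, loss(vectors))    # the loss drops while the "model" is untouched
```

In the real thing, the "target" comes from the diffusion denoising loss on your training images, but the shape of the optimization is the same: a few vectors are fitted, everything else is frozen.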
It works well as long as what you're training can be represented by the model you're using. As for the number of vectors, consider each vector as a prompt token. You can technically describe anything with a single token, but more than one allows a larger variety of details to be included. The harder part is having good training prompts. I'd say try to find a prompt for each of the images that gives a similar result, but not the character you're training, then use that as the .txt for that training image.
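Following the .txt convention mentioned above, one way to set that up is a caption file next to each training image sharing its filename. A small sketch, assuming a hypothetical folder and placeholder captions (yours would come from the per-image prompts you found):

```python
from pathlib import Path

# Placeholder filenames and captions purely for illustration.
captions = {
    "char_front.png": "1girl, short silver hair, red scarf, school uniform, looking at viewer",
    "char_side.png":  "1girl, short silver hair, red scarf, profile, outdoors",
    "char_sit.png":   "1girl, short silver hair, red scarf, sitting, indoors",
}

dataset = Path("training_images")
dataset.mkdir(exist_ok=True)

for image_name, caption in captions.items():
    # The caption shares the image's stem: char_front.png -> char_front.txt
    (dataset / image_name).with_suffix(".txt").write_text(caption, encoding="utf-8")

print(sorted(p.name for p in dataset.glob("*.txt")))
```

Note the captions describe everything *except* the character's identity, so the identity is what gets pushed into the learned embedding.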
Also, run a training image through PEZ: it will give you a prompt you can test on your model and also give you an idea of the maximum number of tokens you'd need. There are also some better CLIP interrogators for AUTO1111 you can use.