I hope the following thoughts and information are of some use.
When I first started out, wanting to learn how to train a LoRA, I found it difficult to find much useful information. There were plenty of places covering the technical side of it - where to download the software, how to install it, how to run it. But the actual thought process never really got a look-in, especially when it came to the tagging side of things.
What advice there was basically boiled down to "let the machine do it." The problem for me was that I was using a base model trained on images from e621, and there didn't seem to be any automatic way to tag in that style. So I had to do it by hand.
My first couple of attempts were ham-handed at best. But I gradually started to figure a few things out, and I thought I would share those thoughts. So here they are:
Tagging a LoRA model is a lot like chiseling a statue out of a block of granite.
How do you carve a statue? By chiseling away everything that doesn't look like the statue.
During training, you present a series of images to the software, and it absorbs them in their entirety. It considers all of that visual data to be the model, and it's all saved together in one big lump - your block of granite.
The purpose of tagging, then, is to chisel concepts out of that lump and set them aside, basically telling the software "No! This is not part of that!"
Let's take an example from one of my own recent data sets:

Here we see Kneesaa The Ewok, standing on a wooden platform in the Ewok village.
The .txt tag file that goes with this image reads as follows:
kneesaa, ewok, full-length_portrait, standing, spread_arms, t-pose, smile, wood_floor, railing, tree, forest, outside,
Now let's note what isn't tagged.
No hair or fur color. No eye color. No clothing color... in fact, no character details at all.
Why? Because by NOT tagging those details, I leave them associated with the core character tag, kneesaa.
What I have tagged out are:
Pose
Facial expression
Background details
...and nothing else. Well, except "ewok", because I happen to know that the base model I'm using (YiffyMix_V32) already knows that tag and has data associated with it.
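As an aside, for anyone who hasn't set up a data set yet: kohya_ss-style trainers (to pick a common example - your software may differ) pair each training image with a .txt caption file of the same name, so the folder ends up looking something like this:

    10_kneesaa/
        kneesaa_001.png
        kneesaa_001.txt   (the tag list above)
        kneesaa_002.png
        kneesaa_002.txt
        ...

(In kohya's folder naming, the leading number is how many times each image gets repeated per epoch; the file names here are just placeholders.)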
Here's another example:

It's Takka-Too the cloud bear, and the .txt tag file that goes with this image reads:
takkatoo, looking_up, standing, pointing, pink_clothing, dress, wood_floor, high-angle_view
And again, what have I tagged out? Pose and background details. Here, though, I've also tagged out her pink clothing, specifically her dress. Tagging that out chisels the concept away from the main chunk of character data, and allows the clothing to be altered later via prompt tags during image generation:

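For illustration, a generation prompt that swaps the outfit might look something like this (assuming an AUTOMATIC1111-style front end; the LoRA weight and the blue_clothing and robe tags are just examples):

    <lora:takkatoo:0.8>, takkatoo, standing, blue_clothing, robe, forest, outside

Because pink_clothing and dress were tagged out during training, they aren't baked into the takkatoo tag, so the prompt is free to dress her in something else.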
Let's sum this up
Tagging a LoRA involves tagging out only the things you want to be able to change. So details that are intrinsic to the design of a character should not be tagged.
Tag everything except that.
Chisel away everything but the statue.
Now, I have only done this on character models so far, but I would imagine the same holds for whatever kind of model you're making, be it a character, prop, or background set:
Chisel out the details you want to be able to change with keyword tags.
Leave the core concept untouched.
Before closing, I should note that using a base model like YiffyMix, which was trained on e621 data, comes with pros and cons. On the plus side, I can use the website as a dictionary of tags that are available for me to use when tagging. On the minus side, I'm mostly stuck with just those tags (though I have noticed that a few which aren't listed there do still work, so clearly the training data encompasses more than just e621).
In any event, rather than just letting the computer do it for you, editing your tags by hand should result in cleaner data, and give you more control over it.
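If you do go the hand-editing route, it helps to audit your captions now and then. Here's a minimal Python sketch (standard library only, not tied to any particular trainer - the folder path is just a placeholder) that counts how often each tag appears across a data set, which makes typos and inconsistent tags easy to spot:

    from collections import Counter
    from pathlib import Path

    # Hypothetical folder of image + .txt caption pairs
    dataset_dir = Path("training_data/10_kneesaa")

    counts = Counter()
    for caption_file in sorted(dataset_dir.glob("*.txt")):
        text = caption_file.read_text(encoding="utf-8")
        # Captions are comma-separated tags; strip whitespace, drop empties
        tags = [t.strip() for t in text.split(",") if t.strip()]
        counts.update(tags)

    # Most common tags first; a tag that appears only once is often a typo
    for tag, count in counts.most_common():
        print(f"{count:4d}  {tag}")

A tag that only shows up once or twice across the whole set is usually worth a second look - either it's a typo, or there may not be enough examples of it for the trainer to learn much.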
Thanks for reading all this. I hope you find some value in it.