Abstract
The article addresses the issue of severe skin artifacts appearing in images generated by a second iteration of a trained Lora. These artifacts, initially suspected to be caused by overtrained freckles, were identified as aliasing-like effects: the base model renders skin detail so well that the Lora overtrains on the skin.
The solution involves a two-step approach:
1. Denoising the training images for the initial "face and upper body" Lora to remove fine skin details.
2. Using training data with minimal exposed skin for the final "full-body" Lora, allowing the base model to extrapolate the physique without overtraining on skin details.
What's the issue?
Note: this article assumes that you are familiar with the workflow described in my tutorial Simple SDXL Consistent Character Generation using just Forge and the Civitai Trainer.
While trying to find a solution to the eye color and freckles problem from my character Lora tutorial, I experienced severe skin artifacts when generating images with the second iteration of my trained Lora. As I had trained the second iteration on a dataset containing freckles, the skin in close-up images looked almost reptile-like, while it was partially distorted in full-body images.
At first, I thought this might have been caused by an excessive amount of freckles in my training images (thereby overtraining the freckles) and prepared another training dataset without freckles. However, the Lora trained on the freckle-less dataset still produced artifacts that greatly impacted the quality of generated images. This time, it looked as if stretch marks had been scattered all over the body:

I then checked all 20 trained epochs and found that the artifacts started to appear as soon as the likeness of my character became reasonably stable. Not really knowing what was going on, I tried to fight the artifacts with prompt engineering, to no avail. Only then did I realize that the artifacts looked a lot like something caused by aliasing (much like the moiré pattern you see on TV when someone wears a checkered shirt), and upon further thought, this actually made sense: as my base model seems to be really good at rendering skin details, it would of course include those details in the training images. During training, the Lora would re-learn those skin details, resulting in overtraining of the skin.
This effect should apply to all skin types that the base model can generate in very high detail, not just freckled skin, which is probably why I didn't run into these issues with my first artificial character Lora: Ashley has relatively dark skin, and most base models aren't very good at detailing darker skin types.
In signal processing, if you cannot avoid aliasing in another way (e.g. by increasing the sampling rate), you would usually run a low-pass filter over the signal, which leads us to the first step of the solution to this issue.
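If you want to see the aliasing analogy in action, here is a minimal pure-Python illustration (the frequencies are chosen arbitrarily, not taken from the workflow): sampling a signal above the Nyquist limit yields exactly the same samples as a much lower frequency, so high-frequency detail reappears as a spurious low-frequency pattern.

```python
import math

# Illustration only: a signal above the Nyquist frequency "folds down" and
# becomes indistinguishable from a low-frequency one after sampling.
fs = 10.0               # sampling rate in Hz (Nyquist limit: fs/2 = 5 Hz)
f_high = 9.0            # frequency above the Nyquist limit
f_alias = fs - f_high   # the frequency it aliases to (1 Hz)

samples_high = [math.cos(2 * math.pi * f_high * n / fs) for n in range(10)]
samples_low  = [math.cos(2 * math.pi * f_alias * n / fs) for n in range(10)]

# The two sample sequences are identical up to floating-point error:
assert all(abs(a - b) < 1e-9 for a, b in zip(samples_high, samples_low))
```

The fine skin texture plays the role of the 9 Hz signal here: detail the training resolution cannot faithfully represent folds down into visible artifacts.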
Solution: Step 1
Remember, in my consistent character workflow, I iteratively train two Loras: first, a Lora that generates the face and upper body, and then, using the first one to generate the training dataset, another one that can generate consistent full-body images. To get rid of the skin details in the training dataset for the face Lora of iteration 1, I ran all training images through a denoiser (which is basically a low-pass filter) at a strength that would just remove all skin details, making the skin look quite waxy.

Original image

Denoised image
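The denoising pass over the dataset can be sketched like this. This is a minimal Python example, assuming Pillow is installed and using a Gaussian blur as a simple stand-in low-pass filter; the actual denoiser and strength you use are tool-specific, so tune the radius until the skin just turns waxy.

```python
from pathlib import Path

# Assumption: Pillow is installed. A Gaussian blur stands in for a proper
# denoiser; increase the radius until fine skin detail disappears.
from PIL import Image, ImageFilter

def denoise_dataset(src_dir, dst_dir, radius=2.0):
    """Low-pass filter every training image before uploading to the trainer."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for path in sorted(Path(src_dir).glob("*.png")):
        img = Image.open(path).convert("RGB")
        img.filter(ImageFilter.GaussianBlur(radius)).save(dst / path.name)
```

A dedicated denoiser (or a median filter) will preserve edges better than a plain blur, but for removing pore-level texture the difference is minor.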
After training another face Lora on the filtered dataset and using it to generate another dataset for the final full-body Lora, I found that the skin rendition in the face had greatly improved; however, the body still showed artifacts.
After giving it some more thought, I came up with part two of the solution.
Solution: Step 2
After a while, I realized that I had been training the model with too much skin, as I had included a few training images in underwear, beachwear and also some nudes. This obviously gave the base model the wrong idea about skin details (possibly also leading it to interpret the body freckles as dirt), so I decided to create a new training dataset for the second Lora iteration that showed as little skin as possible: basically, I made the character wear tight, high-coverage clothes with long sleeves. Not to worry though: your base model will still be able to extrapolate the physique (body fat, muscularity, breast size etc.) from the clothed training images if you prompt for tight clothes when generating your training dataset. A few suggestions are:
tight t-shirt, long sleeves, turtleneck, catsuit, long tight dress, bodysuit

Also, your model won't lose NSFW capability when trained only on clothed pictures, as rendering naked body anatomy is primarily a feature of the base model you're using, not of the character Lora itself. However, you will of course have to prompt for special body features that are only visible when nude, such as navel piercings, pubic hair and so on, when generating the actual images with your character. I'd say that's a small price to pay for the greatly improved skin quality you can achieve with this method.
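To rotate through clothing variations when generating the low-skin dataset, a simple prompt template does the job. A hypothetical sketch follows; the trigger word "mychar" and the scene description are placeholders, not part of the workflow:

```python
# Hypothetical prompt builder for the clothed training dataset;
# the outfit list matches the clothing suggestions above.
outfits = [
    "tight t-shirt, long sleeves",
    "turtleneck",
    "catsuit",
    "long tight dress",
    "bodysuit",
]

# Placeholder trigger word and scene -- adapt these to your own Lora.
template = "photo of mychar, wearing {outfit}, full body, standing, plain background"
prompts = [template.format(outfit=o) for o in outfits]
```

Feeding one prompt per outfit keeps the dataset varied in clothing while staying consistent in pose and framing.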
You can see the final result of this endeavour here: I called her Bethany, and she's ready to be generated!
