Just found out some interesting information regarding LoRA training of SDXL.
It looks like training a LoRA at a higher Network Rank (Dimension) is cooking the base model, and thus reducing environment quality.
Higher rank yields better subject quality, but environment quality looks heavily degraded. Look at the tree details.
Here is a same-settings comparison image:
Above: Rank 256
Below: Rank 32
Everything else is the same during training: epochs, steps, etc.
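Why would a higher rank overwrite more of the base model? LoRA learns two low-rank matrices per adapted layer, so trainable capacity grows linearly with rank. Here is a minimal sketch of the parameter count, assuming an illustrative 1280-wide projection layer (actual SDXL layer sizes vary, so the numbers are for intuition only):

```python
# LoRA adds two low-rank matrices per adapted layer: A (rank x d_in) and
# B (d_out x rank), so trainable parameters per layer = rank * (d_in + d_out).
def lora_params(d_in: int, d_out: int, rank: int) -> int:
    return rank * (d_in + d_out)

d = 1280  # hypothetical hidden size of one SDXL attention projection
for rank in (32, 256):
    print(f"rank {rank:>3}: {lora_params(d, d, rank):,} trainable params per layer")
# rank  32:  81,920 trainable params per layer
# rank 256: 655,360 trainable params per layer (8x the capacity to drift)
```

With 8x more trainable parameters at rank 256, the adapter has far more room to override base-model behavior beyond the subject, which would be consistent with the degraded backgrounds seen above.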
Tutorial link: