This LoRA may be slightly overfit to its dataset. At low LoRA weights the output can diverge from the traits specified in the prompt, and at higher weights the characters can change significantly in certain cases.
That said, it seems closer to Komowata Haruka's modern style; perhaps the training set contains fewer of the older works. It's a good option if you don't mind prompt engineering to compensate, or if you have a character LoRA to enforce characteristics more easily.
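For reference, a middle-ground starting point using AUTOMATIC1111-style LoRA prompt syntax (the LoRA filename below is a placeholder; substitute the actual file name, and the trait tags are just examples):

```
<lora:komowata_haruka_style:0.7>, 1girl, chibi, watercolor, soft colors
```

Weights around 0.6–0.8 are often a reasonable compromise between style strength and prompt adherence; adjust up or down from there depending on how much the characters drift.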