Not sure about the differences between "native" LoRA training and Dreambooth. Both methods require the same things and have the same settings.
The fine-tuning method is about training your LoRA into the model itself and then extracting it with add difference (of course, unless you don't mind the file being 2-7 GB in size). The best way to do LoRAs, I believe.
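The idea behind extraction can be sketched like this: take the difference between the fine-tuned and original weights, then keep only a low-rank approximation of that difference (the rank is the "LoRA dim"). This is a minimal illustration with made-up NumPy matrices, not the actual extraction script; in the toy setup below the delta is exactly rank 4, so the factors recover it almost perfectly.

```python
import numpy as np

# Hypothetical base weights and a fine-tuned copy whose change is low-rank.
rng = np.random.default_rng(0)
W_base = rng.standard_normal((64, 64))
A = rng.standard_normal((64, 4))
B = rng.standard_normal((4, 64))
W_tuned = W_base + A @ B  # fine-tuning added a rank-4 update

# "Add difference": the LoRA lives entirely in the weight delta.
delta = W_tuned - W_base

# Truncated SVD gives the two small LoRA factors; `rank` plays the
# role of the LoRA dim you pick at extraction time.
rank = 4
U, S, Vt = np.linalg.svd(delta, full_matrices=False)
lora_up = U[:, :rank] * S[:rank]   # shape (64, rank)
lora_down = Vt[:rank]              # shape (rank, 64)

# Relative error of the low-rank reconstruction (near zero here,
# since the delta really is rank 4).
err = np.linalg.norm(delta - lora_up @ lora_down) / np.linalg.norm(delta)
```

Two small matrices instead of a full checkpoint is why the extracted LoRA file ends up in the tens of megabytes rather than gigabytes.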
I'm using a different trainer (based on Kohya's scripts, of course) called LoRA Easy Training Scripts, which has its own interface. But you can only run it locally, not in Colab. https://github.com/derrian-distro/LoRA_Easy_Training_Scripts It's really easy to use. As far as I know, it uses the Dreambooth training method.