This proof-of-concept LoRA is based on single-image Flux LoRA training. The method is described in more detail in Detailed Flux Training Guide: Dataset Preparation. The LoRA was trained on only one high-resolution image of The Starry Night by Vincent van Gogh: the single image was cropped, flipped, and rotated in various ways to create the dataset.
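If you want to reproduce that kind of dataset yourself, a minimal sketch is below. This is not the exact script used here; it assumes Pillow is installed, and the file name, output folder, and crop scheme are all placeholder choices.

```python
# Minimal sketch: build a small training set from one image via crops, flips, and rotations.
# "starry_night.jpg" and the output folder name are hypothetical; adjust to your own setup.
from pathlib import Path
from PIL import Image, ImageOps

SRC = "starry_night.jpg"
OUT = Path("dataset/2_starry_night1")  # hypothetical "repeats_trigger" style folder name
OUT.mkdir(parents=True, exist_ok=True)

img = Image.open(SRC).convert("RGB")
w, h = img.size
crop = min(w, h) // 2  # crop size: half the shorter side (arbitrary choice)

count = 0
for left in range(0, w - crop + 1, crop):
    for top in range(0, h - crop + 1, crop):
        tile = img.crop((left, top, left + crop, top + crop)).resize((1024, 1024))
        variants = [
            tile,                          # original crop
            ImageOps.mirror(tile),         # horizontal flip
            tile.rotate(90, expand=True),  # 90-degree rotation
        ]
        for v in variants:
            v.save(OUT / f"crop_{count:03d}.png")
            count += 1

print(f"wrote {count} images to {OUT}")
```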
Version 1.0
Version 1.0 was trained without captions (only a trigger word). Use the trigger word "starry_night1" with the LoRA at full strength (1.0). It turned out pretty well for a proof of concept and took only 390 steps. Like most captionless LoRAs with limited dataset diversity, it has trouble generalizing, especially with complex prompts -- but with simple prompts, it produces some great images.
I'll include all the training settings in the metadata, and you can download the dataset, but here's a summary:
30 images (all crops of The Starry Night), 2 repeats, trained at 1024x1024 resolution, batch size of 2
The LoRA converged near 390 steps at epoch 13 (30 images x 2 repeats / batch size 2 = 30 steps per epoch, so 13 epochs = 390 steps).
The learning rate was set to 0.0006 with the AdamW8bit optimizer (weight_decay=0.01, eps=1e-08, betas=(0.9, 0.999)) and cosine_with_restarts as the scheduler.
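For reference, here's a minimal sketch of how that optimizer and scheduler combination can be set up in Python. This isn't the exact trainer used for this LoRA; it assumes torch, bitsandbytes, and diffusers are installed, and the tiny Linear layer, warmup count, and total step count are stand-ins for whatever your training script actually provides.

```python
# Minimal sketch of the optimizer/scheduler setup described above.
# Assumes torch, bitsandbytes, and diffusers are installed; the Linear layer is
# only a placeholder for the real trainable LoRA parameters.
import torch
import bitsandbytes as bnb
from diffusers.optimization import get_scheduler

lora_params = torch.nn.Linear(16, 16).parameters()  # placeholder parameters

optimizer = bnb.optim.AdamW8bit(
    lora_params,
    lr=6e-4,                # learning rate 0.0006
    betas=(0.9, 0.999),
    eps=1e-8,
    weight_decay=0.01,
)

lr_scheduler = get_scheduler(
    "cosine_with_restarts",
    optimizer=optimizer,
    num_warmup_steps=0,      # assumption: no warmup was listed above
    num_training_steps=390,  # the run converged around 390 steps
)

# In the training loop, after each batch:
#   loss.backward(); optimizer.step(); lr_scheduler.step(); optimizer.zero_grad()
```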