The FLUX.1 [dev] Model is licensed by Black Forest Labs, Inc. under the FLUX.1 [dev] Non-Commercial License. Copyright Black Forest Labs, Inc.
IN NO EVENT SHALL BLACK FOREST LABS, INC. BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH USE OF THIS MODEL.
Vector Art stylized background
Draws a vector art stylized background.
Trigger Word: drawartbackground
Version 3 uses the same set of SVGs as version 2, but the SVGs have been rendered to PNGs at twice the resolution used to train version 2; likewise, version 2 used PNGs rendered at twice the resolution used to train version 1. This is different from scaling images down, which introduces artifacts and blurring.
The version 1 dataset comprises images rendered from the source SVGs at a resolution of 256x256. The LORA was trained at the same resolution.
The version 2 dataset consists of images rendered from the source SVGs at a resolution of 512x512. The LORA was trained at this resolution as well.
The version 3 dataset comprises images rendered from the source SVGs at a resolution of 1024x1024. The LORA was trained at this resolution.
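The doubling relationship between the three versions can be stated concretely. A minimal sketch (the resolutions come from the descriptions above; the constant and function names are just stand-ins):

```python
BASE = 256  # square render size for version 1, in pixels

def render_resolution(version):
    """Each version re-renders the same SVGs at twice the
    previous version's resolution (256 -> 512 -> 1024)."""
    return BASE * 2 ** (version - 1)
```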
Variations in the dataset between versions:
The version 3 dataset differs from those used for versions 1 and 2, though it is quite similar, and it was informed by the results of those versions. The changes include some improvements to framing.
To avoid distorting the image, I render the smaller dimension to the target dimension and then crop the image to a square frame. I do not use random center cuts; instead, I manually select the best framing, which involves some choices to ensure good framing at the final dimensions. Since versions 1 and 2 were rendered at the same time and then processed the same way, they are cropped nearly identically. For version 3, over a year had passed, so I made fresh choices, but comparing them visually they are very similar; the changes are superficial. I also fixed a few minor visual issues with a couple of the SVGs and added more dynamic lighting to a couple of others to improve the overall visual appeal of the dataset.
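The render-then-crop step above can be sketched roughly as follows. This is an illustrative sketch, not my actual script; the function names and the manual `offset` parameter are stand-ins:

```python
def render_size(svg_w, svg_h, target):
    """Scale uniformly so the SMALLER dimension equals `target`,
    avoiding any distortion of the aspect ratio."""
    scale = target / min(svg_w, svg_h)
    return round(svg_w * scale), round(svg_h * scale)

def square_crop_box(w, h, target, offset=0):
    """Return a (left, top, right, bottom) box for a target-square crop.
    `offset` slides the crop along the longer axis and is chosen by hand
    per image, rather than taking a random or center cut."""
    if w >= h:
        offset = max(0, min(offset, w - target))
        return (offset, 0, offset + target, target)
    offset = max(0, min(offset, h - target))
    return (0, offset, target, offset + target)

# e.g. a 1600x900 SVG rendered for the 1024x1024 dataset:
# render_size(1600, 900, 1024) -> (1820, 1024)
```

The crop box could then be handed to an image library (e.g. Pillow's `Image.crop`) on the rendered PNG.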
Variations in training between versions:
Versions 1 and 2:
Trained on my local computer using the diffusers library (github.com/huggingface/diffusers) with a Python script I wrote, heavily based on the examples and documentation provided in the diffusers library. Both versions were trained using the flux.1-dev model from the HF git repository at huggingface.co/black-forest-labs/FLUX.1-dev.
Due to the limitations of my local hardware, training version 1 took several hours, while training version 2 consumed over 30 hours. Version 3 would simply have crashed the machine had I tried; it could not be trained locally without better hardware.
Versions 1 and 2 are only available for flux.1-dev, with no plans for that to change.
Version 3:
Trained on CivitAI's On-Site LoRA Trainer using its new ai-toolkit.
Right now, version 3 is only available for “flux.2 Klein 9B-base.” But if someone really wants a flux.1-dev version, I’m happy to train one. Just comment on the model asking for it and your wish will be granted, since it gives me a reason to do it and I’m currently looking for one... Otherwise, I won’t bother for now because I’m not using flux.1-dev much. But I sometimes get a hankering for old models, so who knows? Maybe I’ll want this LORA then.
All images in all datasets used for all three versions were rendered from a human artist’s work in SVG, with no bitmap sources in training. The images were converted from SVG to PNG at the exact resolution used for training, ensuring that there are no upscaling or downscaling artifacts, blurring, or JPEG compression artifacts in the training dataset.

