
1990's Fantasy Oil Painting Art Style - 1mb LoRA

Verified: SafeTensor
Type: LoRA
Uploaded: Feb 9, 2023
Base Model: SD 1.5
Training: Epochs: 20
Trigger Words: kpartstyle
Hash: AutoV2 F7D328BF21

1990's Fantasy Oil Painting Art Style - LoRA 1mb - 9FOPAS (aka. kpartstyle)

Current version: First Release Feb 10 2023

Contents:

-- Model information

-- LoRA art style training guide (scroll far down)

Known issues: facial similarity across outputs, and poor coherence/clarity in the midground and background. So... Please help improve it, or share similar models if you like this art style.

This LoRA was trained on art in the style of 1980s and 1990s oil-painting fantasy art. If you are familiar with Dragon Magazine during that period, then this is inspired by that, particularly the late Keith Parkinson, who tragically died of leukemia at the age of 47 (RIP). You may know Keith's incredible work from the original EverQuest box art and its early expansions.

There are plenty of 'digital art, trending on artstation, greg rutkowski' models. I prefer the old hand-painted style myself, so I've been trying to make a model that can do that, and this is the best I've come up with so far.

FACIAL SIMILARITY (CURRENT ISSUE)

The model is currently not very good due to facial similarity between images. That is partly due to my preferred models, such as ChilloutMix, which have a common aesthetic across all their images. Regardless of the model, though, I've found the facial similarity is very strong at the later epochs, so perhaps try the earlier ones.

I recommend the 000020 version, but I've made them all available as I consider this a beta and want your help to improve it. I'm still testing all the epochs and prompt weighting iterations myself.

Workarounds:

  1. If you recognize a celebrity with a similar face, then negative prompt against it. For example, I have had some success with 'lucy lawless', 'elvira', and 'cher' as negative prompts.

  2. Use the batch face swap extension for Automatic1111's webui on all of the images to change all the faces in the image conveniently.

INSTALL AND KEYWORDS

  1. Put it in your webui/models/Lora folder.

  2. Adjust the :0.X weight in your prompt to whatever you want to try. 0.7 might not work well, so please experiment.

  3. Optional keyword: kpartstyle

  4. Additional recommended prompts: paint dithering, faded oils on canvas, rough canvas, fantasy art
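As an illustration of the steps above, a typical Automatic1111 prompt might look like the following. The 0.5 weight and the scene description are arbitrary examples (the `<lora:name:weight>` tag is the webui's built-in LoRA syntax, and the name must match the file in your Lora folder); the negative prompts come from the facial-similarity workaround above:

```text
<lora:kpartstyle:0.5> kpartstyle, paint dithering, faded oils on canvas, rough canvas, fantasy art, a knight on horseback at dusk
Negative prompt: lucy lawless, elvira, cher
```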

MODELS AND VAE

I strongly recommend using this as your VAE: vae-ft-mse-840000-ema-pruned.ckpt

Recommended models are... Anything really. I use GTM models (v2 seems good for this due to its less vibrant look) and Chilloutmix myself mostly, but try with anything and let me know your results. Experimentation is part of the fun.

TRAINING

The training log file, which includes all the parameters, is included further below.

In addition to what is shown there, I should mention that I used 62 images as the training set, which included a variety of different scenes and characters.

I also used a regularization set, because I was trying to avoid facial similarity, and it had worked for a previous private art-style model I had made with Dreambooth. The regularization set was 2,000 images generated with SD 1.5, with the only prompt used being 'artwork style'.

I did not use any VAE for the training. I just used SDv1-5-pruned-emaonly.ckpt as the training model and that's it.
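For a rough sense of how long this training runs, the step counts implied by these settings can be estimated. The per-image repeats value is an assumption here (kohya-style scripts take it from the image folder's "NN_" prefix, and it is not stated in this section), as is the doubling for regularization images, which kohya-style trainers typically apply by pairing each training image with a class image:

```python
import math

train_images = 62      # stated training-set size
repeats = 10           # ASSUMPTION: taken from a "10_projectname" folder prefix
batch_size = 2         # from the log below
epochs = 40            # from the log below
uses_reg_images = True # a regularization set was used

images_per_epoch = train_images * repeats
if uses_reg_images:
    # ASSUMPTION: reg images double the effective dataset length per epoch
    images_per_epoch *= 2

steps_per_epoch = math.ceil(images_per_epoch / batch_size)
total_steps = steps_per_epoch * epochs
print(steps_per_epoch, total_steps)
```

Under those assumptions this works out to 620 steps per epoch and 24,800 steps total.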

FURTHER TESTING, REDOES AND IMPROVEMENT

As of early February 2023, I have yet to test it with different sets of regularization images. I recommend generating 2,000 images made with the prompts 'artwork style' (which I used), 'artwork', 'art style', 'art', 'painting style', and 'painting', and also a set that is 1,000 'woman' and 1,000 'man' in the same directory.

Also, the file size is very low (the network dim is only 1). Maybe a higher dim is worth trying next time.

I think good regularization (classification) images might be the key to breaking the facial-similarity problem, but that's only a largely untested hunch currently.

Thanks to LuisaP for sharing the info about how to make a LoRA with low file size.

If you wish to be... Inspired by similar art, search for Keith Parkinson, Larry Elmore and Dragon Magazine. Also, consider picking up their artbooks and other prints and such... Really great stuff if you're into fantasy, especially as gifts to other fantasy nerds.

TRAINING PARAMETERS LOG FILE

{
  "base_model": "D:/SD/models/Stable-diffusion/SDv1-5-pruned-emaonly.ckpt",
  "img_folder": "D:/SD/training/trainingSets/KpLeArtStyleLuisaPmethod/image",
  "output_folder": "D:/SD/training/trainingSets/KpLeArtStyleLuisaPmethod/model",
  "change_output_name": "kpartstyle",
  "save_json_folder": "D:/SD/training/trainingSets/KpLeArtStyleLuisaPmethod/log",
  "load_json_path": null,
  "json_load_skip_list": [
    "base_model",
    "img_folder",
    "output_folder",
    "save_json_folder",
    "reg_img_folder",
    "lora_model_for_resume",
    "change_output_name",
    "training_comment",
    "json_load_skip_list"
  ],
  "net_dim": 1,
  "alpha": 1.0,
  "scheduler": "cosine_with_restarts",
  "cosine_restarts": 12,
  "scheduler_power": 1,
  "warmup_lr_ratio": null,
  "learning_rate": 0.001,
  "text_encoder_lr": null,
  "unet_lr": null,
  "num_workers": 1,
  "persistent_workers": true,
  "batch_size": 2,
  "num_epochs": 40,
  "save_at_n_epochs": 4,
  "shuffle_captions": false,
  "keep_tokens": null,
  "max_steps": null,
  "train_resolution": 512,
  "min_bucket_resolution": 320,
  "max_bucket_resolution": 960,
  "lora_model_for_resume": null,
  "save_state": false,
  "load_previous_save_state": null,
  "training_comment": null,
  "unet_only": true,
  "text_only": false,
  "reg_img_folder": "D:/SD/training/regulation-sets/regulation-artwork-style",
  "clip_skip": 2,
  "test_seed": 23,
  "prior_loss_weight": 1,
  "gradient_checkpointing": false,
  "gradient_acc_steps": null,
  "mixed_precision": "fp16",
  "save_precision": "fp16",
  "save_as": "safetensors",
  "caption_extension": ".txt",
  "max_clip_token_length": 150,
  "buckets": true,
  "xformers": true,
  "use_8bit_adam": true,
  "cache_latents": true,
  "color_aug": false,
  "flip_aug": false,
  "vae": null,
  "no_meta": false,
  "log_dir": null
}
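Since the log is plain JSON, it can also be loaded and inspected programmatically. A minimal sketch, using a few keys from the log above as an inline string for illustration:

```python
import json

# A few lines from the training log, kept as a JSON string for illustration
config_text = """
{
  "net_dim": 1,
  "alpha": 1.0,
  "learning_rate": 0.001,
  "batch_size": 2,
  "num_epochs": 40,
  "save_at_n_epochs": 4,
  "mixed_precision": "fp16",
  "save_as": "safetensors"
}
"""
config = json.loads(config_text)
# net_dim (the LoRA rank) is what keeps the saved file around 1 MB
print(config["net_dim"], config["learning_rate"])
```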

If you have any good insights about this tech we are all still experimenting with, please share them in the comments. I haven't read any papers and such so some settings I used might not do what I think they do.

😩

And, yes... This is the product of many sleep-deprived nights of trial and error. Even now, I'm not satisfied with it, primarily due to the facial similarity at higher epochs and higher :0.X weights.

I ask for credit just so there is a higher chance that whatever community might emerge around the model stays in one place, and that the model receives help to improve.

Putting this here until we have a Guides and Tutorials section on this website.

LoRA !___ ART STYLE ___! Training Guide

THIS GUIDE, AND SUBSEQUENT IMPROVEMENTS BASED ON FEEDBACK I RECEIVE AND MY OWN EXPERIMENTS, WILL BE FOCUSED SPECIFICALLY ON !!! ART STYLES ONLY !!!

Go elsewhere if you want to ask questions about how to train a character/person. I don't care about training characters; there are enough tutorials about that already, and I will not help you with it at all.

This might not be optimal, but nobody else has done it, so let me try.

REQUIREMENTS

To follow this guide to the letter, you will need the following.

  1. Windows

  2. An xformers- and fp16-capable GPU

  3. Careful reading

  4. Patience

  5. Ability to troubleshoot without my help

  6. The ability to read what I said above and to not ask questions about training characters/people. This is a guide for STYLE.

  7. No interest in Google Colab; don't ask 'How can I do this on Google Colab?'

  8. You won't ask 'How do I do this on a 6 GB card?!' Maybe you can, but I don't care to help with that. HOWEVER, if you discover how to do it on a low-VRAM card, sharing the method is very welcome.

INSTALLATION and INITIAL SETUP

  1. LoRA Repos
    https://github.com/derrian-distro/LoRA_Easy_Training_Scripts
    Run the two installer .bat files. They will install into folders in the same directory. Install it on your C: drive or it probably won't work. Also, it's Windows only; if you are on Linux, you probably know what you're doing anyway.

  2. Base Model
    Download a model to use as a base. I use https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.safetensors
    You can also use v1-5-pruned.safetensors, which is supposedly better for fine-tuning/training.

    I recommend making a new folder called training and putting a copy of this file in it. It'll make the file selection during the training setup more convenient.

  3. Directories Setup
    Create a new folder inside /training with a name for your project. /training/projectname
    3a. Inside /training/projectname, create three folders. /image, /log, /model
    3b. Inside the /image folder, create a new folder called /10_projectname. 10 is the number of times each image will be trained per epoch; 10 seems good, but if your training image set is very large, you might try 5.
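The directory layout from step 3 can be created in one go. A minimal sketch; "projectname", the training root, and the repeats count are placeholders to substitute with your own values:

```python
from pathlib import Path

# Placeholder names: substitute your own project name and repeats count
training_root = Path("training")
project = "projectname"
repeats = 10  # per-image repeats per epoch, encoded in the folder-name prefix

# /training/projectname/{image,log,model}
for sub in ("image", "log", "model"):
    (training_root / project / sub).mkdir(parents=True, exist_ok=True)

# kohya-style scripts read the repeats count from this "NN_name" prefix
(training_root / project / "image" / f"{repeats}_{project}").mkdir(exist_ok=True)
```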

TRAINING

After completing the install and setting up the folders/directories, do the following.

  1. Class regularization images (optional): In the /training folder, make a folder called /regulation. Generate 1,000 to 2,000 class regularization images using the SD 1.5 model, with a prompt like 'art style', 'artwork style', 'illustration style', 'painting', 'painting style', or 'art', depending on what kind of style it is.
    1a. Save/move those regularization images into a directory structure of /training/regulation/art-style (or whatever prompt you used to generate them, hyphenated or not; hyphenating is just my personal preference).

  2. Prepare your training set. You can use IrfanView to batch-process the images, including batch-renaming them. I recommend giving all of them the same tag at the beginning of the filename, so that an additional keyword prompt is effective. For example, if you're training with Dark Souls 3 screenshots, put 'dark souls 3' at the start of the filenames, along with other tags that generally describe the style: 'dark souls 3, art style, video game screenshot, 3d game' or similar.
    2a. Put the training set images into the 10_projectname folder. Note: Make sure the total number of images is an even number.

  3. Start the training wizard through /sd-scripts/run_popup.bat. This directory and file were installed by the scripts we ran back in Installation step 1.

  4. My recommended parameters:
    load a json config file: no

    base model: SDv1-5-pruned-emaonly.safetensors / v1-5-pruned.safetensors (RunwayML SD 1.5)

    image folder: /training/projectname/image (do not choose the 10_projectname folder!)

    output folder: /training/projectname/model
    save log? yes: /training/projectname/log
    regulation images? yes/no: /training/regulation/art-style
    continue from earlier version? no
    batch size: 1 for low vram, 2 for high vram
    number of epochs: 40

    dim: 1, 2, 4, 8, 16, 32, 64, 128 (choose anything; the higher the number, the larger the file size. I recommend trying a low number: even 1 works well, and the file size will only be about 1 MB. The model published here uses dim 1 and the results are okay)

    alpha: 1
    resolution: 512 / 768, depending on your training set. I just use the standard 512
    learning rate: 1e-3, 1e-4, 1e-5, 5e-4, etc. (I recommend trying 1e-3, which is 0.001; it's quick and works fine. 5e-4 is 0.0005)

    text encoder learning rate: choose none if you don't want to try the text encoder, or same as your learning rate, or lower than learning rate.

    unet learning rate: choose same as the learning rate above (1e-3 recommended)

    scheduler: cosine with restarts

    cosine restarts: 12
    save epochs? yes

    how often to save? 2 or 4 - recommended so you can experiment with all the epochs to find which is best
    shuffle captions? no
    keep some tokens at the front? no

    warmup ratio? no

    change output name: projectname

    meta comment: include the main keyword of the filenames. It has no effect on training; it's just nice in case people want to know a good keyword prompt to use in addition to invoking the LoRA.
    NOTE: Train Unet / Train Text encoder: This is where you can restrict the LoRA to training only the Unet or only the text encoder; otherwise it will train both. It's arguably best to train both at the same time, and the text encoder helps if your captions are specific. I would recommend trying Unet only first.

    So, to train Unet only, do this...
    train unet only? true (this will train only the unet, and will reduce vram usage. Choosing false makes the text-encoder-only option in the next step available)

    train text encoder only: false (if you chose not to train the unet only, this option becomes available; choose false to train both the unet and the text encoder, which might give better results, though I haven't noticed any difference in quality worth mentioning myself)

    To train text encoder only, do this...
    train unet only? false

    train text encoder only: true

    To train both Unet and text encoders, do this...
    train unet only? false
    train text encoder only: false

    queue another training? Choose yes if you want to run another training right after this one. I recommend this if you want a convenient way to train with different options (increasing dim, training both unet and text encoder, and so on), so you can leave them all running while you do other things, such as sleep.

Please give it a try, and post your results here and publish any models that were generally successful.

If you have any issues, make sure you haven't just skimmed these instructions and have actually read them carefully. I mainly want to focus on improving the training parameters for people who are already up and running.