
Easy LoRA Trainer

Updated: Sep 27, 2024
Type: Workflows (tool)
Stats: 540
Published: Sep 11, 2024
Base Model: Flux.1 D
Hash: AutoV2 3C474B3BEA
pgc
The FLUX.1 [dev] Model is licensed by Black Forest Labs Inc. under the FLUX.1 [dev] Non-Commercial License. Copyright Black Forest Labs Inc.
IN NO EVENT SHALL BLACK FOREST LABS, INC. BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH USE OF THIS MODEL.

A simple and quick LoRA trainer setup. I got 1 it/s after 5 minutes of training at 512 px resolution with batch size 1, so it's pretty fast.

You can use flux1_devFP8Kijai11GB.safetensors as well as the regular flux1-dev.safetensors model.

flux1-dev is slightly faster but takes a bit longer to load. VRAM usage is the same either way, around 16.3 GB; setting HighVram to false doesn't change anything on that front.

Setting "Split Mode" to "true" will use a maximum of 41% of 24 GB, so around 10 GB of VRAM.

Use it only if you have less than 16 GB. If you have 16 GB, try "false" first and see whether you get an OOM. With "true", training lasts 110 minutes instead of 20 in this particular example.
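As a rough sanity check on those numbers (my own arithmetic, not part of the workflow), 41% of 24 GB lands right at the "around 10GB" figure, and the split-mode example run comes out about 5.5 times slower:

```python
# Quick arithmetic behind the Split Mode figures quoted above.
total_vram_gb = 24.0    # card size the 41% figure refers to
split_fraction = 0.41   # reported peak usage with Split Mode = "true"

split_vram_gb = total_vram_gb * split_fraction
print(f"Split Mode peak: ~{split_vram_gb:.1f} GB")  # ~9.8 GB, i.e. "around 10GB"

# Time trade-off from the example run: 110 min with split mode vs 20 min without.
slowdown = 110 / 20
print(f"Slowdown with Split Mode: {slowdown:.1f}x")  # 5.5x
```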

I made 3 custom nodes that you can find in the ComfyUI Manager (under "missing nodes"):

One to train on specific single blocks, separated into Layer1 and Layer2.

Added validation previews and loss reporting, plus saving of the preview and training arguments into the destination folder.

600 steps at 512 resolution took me 15 minutes to train.
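For context, at the ~1 it/s quoted earlier, the 600 steps themselves account for about 10 of those 15 minutes; presumably the remainder goes to model loading and validation previews (my reading, not stated in the workflow):

```python
# Stepping time implied by the quoted throughput (assumption: a steady 1 it/s).
steps = 600
its_per_sec = 1.0

step_minutes = steps / its_per_sec / 60
print(f"~{step_minutes:.0f} min of pure stepping out of the 15 min total")
```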