Published | Jul 10, 2024 |
Training | Steps: 7,500 · Epochs: 150 |
Trigger Words | ohwx |
Hash | AutoV2 00E3004959 |
B-LoRA sliced twice, into a Content (composition) LoRA and a Style LoRA. I was bored AF and cooked up a ComfyUI workflow that has proven extremely efficient and flexible; I don't think there is a better way to do style transfer. The flow: separate LoRA loads for composition and style, then samplers, VAE decoding of the latents, and straight into IPAdapter Style & Composition SDXL, then sampling, and (only if you're a megalomaniac like me) applying a third LoRA, trained separately with the DreamBooth method (trigger word + class). I will share the ComfyUI workflow; make sure you know how to rename the CLIP Vision and IPAdapter models, because I use the Unified Loader for IPAdapter.
Of course, each model can also be used separately and independently.
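The reason separate LoRA loads for composition and style work is that each LoRA is just a low-rank additive update to the base weights, so two of them can be merged with independent strengths. A minimal NumPy sketch of that idea (toy dimensions and the `apply_loras` helper are my own illustration, not ComfyUI internals):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for one attention projection matrix.
d_out, d_in, rank = 8, 8, 2

W_base = rng.normal(size=(d_out, d_in))

# Each LoRA stores a low-rank update: delta = B @ A, with rank << d.
B_content, A_content = rng.normal(size=(d_out, rank)), rng.normal(size=(rank, d_in))
B_style, A_style = rng.normal(size=(d_out, rank)), rng.normal(size=(rank, d_in))

def apply_loras(W, loras, scales):
    """Merge several LoRA deltas into a weight matrix, each with its own strength."""
    W_merged = W.copy()
    for (B, A), s in zip(loras, scales):
        W_merged += s * (B @ A)
    return W_merged

# Independent strengths for composition vs. style, like separate LoRA loaders.
W_final = apply_loras(
    W_base,
    [(B_content, A_content), (B_style, A_style)],
    scales=[0.8, 0.6],
)
```

Because the updates are additive, dialing one scale to zero removes that LoRA's influence entirely, which is why each model can also be used on its own.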
All captions used for training, plus the ComfyUI workflows, are here:
https://drive.google.com/drive/folders/1FFS4CnX3RI4B1yhzwqrlEwLd_QGpuYCZ?usp=sharing