
B-LORA We Happy Few Content/Style

Updated: Jul 10, 2024
Tags: style
Type: LoRA (SafeTensor, verified)
Published: Jul 10, 2024
Base Model: SDXL 1.0
Training: 7,500 steps, 150 epochs
Trigger Words: ohwx
Hash (AutoV2): 00E3004959
Creator: LuxMint

B-LoRA sliced twice: Content (composition) and Style. I was bored and cooked up a ComfyUI workflow that has proven extremely efficient and flexible; I don't think there is a better way to do style transfer. The composition and style LoRAs are loaded separately, sampled, their latents decoded through the VAE, and fed straight into IPAdapter Style & Composition SDXL. Then comes a second sampling pass and (only if you are a megalomaniac like me) a third LoRA, trained separately with the Dreambooth method (trigger + class). I will share the ComfyUI workflow; make sure you know how to rename the CLIP Vision and IPAdapter models, because I use the Unified loader for the IPAdapter.
Of course, each model can also be used separately and independently.
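The actual workflow is a ComfyUI node graph (linked below), but for anyone who prefers scripting, here is a minimal sketch of the "separate LoRA loads for content and style" idea using Hugging Face diffusers. The file names, the optional subject LoRA, and the prompt are hypothetical placeholders, and the IPAdapter Style & Composition pass from the ComfyUI graph is not reproduced here.

```python
# Minimal sketch, assuming diffusers with PEFT-backed multi-adapter support.
# Point the .safetensors paths at the content and style slices from this page.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Load the composition (content) slice and the style slice as named adapters.
# File names are hypothetical.
pipe.load_lora_weights("we_happy_few_content.safetensors", adapter_name="content")
pipe.load_lora_weights("we_happy_few_style.safetensors", adapter_name="style")

# Optional third Dreambooth-trained LoRA (trigger + class), as described above:
# pipe.load_lora_weights("my_subject_lora.safetensors", adapter_name="subject")

# Activate both slices; adjust the weights to trade composition against style.
pipe.set_adapters(["content", "style"], adapter_weights=[1.0, 1.0])

# "ohwx" is the trigger word listed on this page; the rest of the prompt is an example.
image = pipe(
    prompt="ohwx style, a rainy street in a retro English town",
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("b_lora_test.png")
```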

All captions used for training, plus the ComfyUI workflows, are here:
https://drive.google.com/drive/folders/1FFS4CnX3RI4B1yhzwqrlEwLd_QGpuYCZ?usp=sharing