【LoRA / FLUX.1 & Pony】L3n4's Beholder Vision

Updated: Dec 8, 2024
Tags: style, anime, woman, girls
Verified: SafeTensor
Type
LoRA
Published
Dec 4, 2024
Base Model
Pony
Training
Steps: 5,490
Epochs: 15
Usage Tips
Clip Skip: 2
Strength: 1
Hash
AutoV2
2AC63163AB
L3n4

Introduction to Beholder Vision

This is an initial test LoRA for Pony, derived from my Beholder Vision LyCORIS-LoCon for Illustrious-XL v0.1, trained using the foundational parameters outlined in my Crash Course in LoRA Training (On-Site Edition).

Nov 28, 2024
Beholder Vision began as an experimental LyCORIS-LoCon, trained on 126 meticulously curated images of aesthetic anime art.

The concept: create “eye-candy” results, with flexibility to adapt to a variety of creative needs.

Usage Recommendations

Pony Ver. Parameters (Recommended as of Dec. 6, 2024):

  • Strength: Set between 1.0–1.2 (based on extensive testing, I strongly recommend 1.05).

  • Sampling Method: DPM++ 2M SDE (Karras)

  • Steps: 25–35.

  • CFG: 7.
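The recommended settings above can be sketched with Hugging Face diffusers. This is a minimal sketch, not the author's workflow: the model and LoRA file paths are placeholders, and mapping "DPM++ 2M SDE (Karras)" to `DPMSolverMultistepScheduler` with `algorithm_type="sde-dpmsolver++"` and Karras sigmas is an assumption based on diffusers' scheduler options.

```python
# Recommended generation settings from this page (Pony version, as of Dec 6, 2024).
SETTINGS = {
    "lora_scale": 1.05,          # strength: 1.0-1.2 works; 1.05 recommended
    "num_inference_steps": 30,   # 25-35 steps
    "guidance_scale": 7.0,       # CFG
    "clip_skip": 2,
}

def load_pipeline(base_model_path, lora_path):
    """Build an SDXL/Pony pipeline with the LoRA and a DPM++ 2M SDE (Karras) scheduler."""
    from diffusers import StableDiffusionXLPipeline, DPMSolverMultistepScheduler

    pipe = StableDiffusionXLPipeline.from_single_file(base_model_path)
    # diffusers' closest equivalent of "DPM++ 2M SDE Karras"
    pipe.scheduler = DPMSolverMultistepScheduler.from_config(
        pipe.scheduler.config,
        algorithm_type="sde-dpmsolver++",
        use_karras_sigmas=True,
    )
    pipe.load_lora_weights(lora_path)
    return pipe

# Usage (file names are placeholders):
# pipe = load_pipeline("pony_base.safetensors", "beholder_vision_pony.safetensors")
# image = pipe(
#     "your prompt here",
#     num_inference_steps=SETTINGS["num_inference_steps"],
#     guidance_scale=SETTINGS["guidance_scale"],
#     clip_skip=SETTINGS["clip_skip"],
#     cross_attention_kwargs={"scale": SETTINGS["lora_scale"]},
# ).images[0]
```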

Changelog and Updates

Dec 6, 2024
Version 3o is here!

Dec 4, 2024
Version 2.1o is now available! This marks a complete rework of the initial test version for Pony. It has been trained for 5,490 steps, making it competitive with the original Beholder Vision.

  • Despite its extensive improvements, the new LoRA is just 54.8MB—half the size of previous Pony iterations of the model and vastly enhanced in every aspect.

Dec 2, 2024
The first version of Beholder Vision for FLUX.1 [dev], 1.3o, has officially been released!

Nov 30, 2024
I’m excited to announce Beholder Vision 1o (and its successor, 1.1o)!

  • 1o debuts as an experimental GLoRA, while 1.1o returns to classic LoCon, delivering improved performance at a smaller file size.

  • Now natively compatible with other SDXL models, including NoobAI-XL Epsilon-pred 1.0-Version.

This version represents a significant leap forward over the 0.9beta, thanks to an upgraded training dataset and finely tuned parameters. (Details on the dataset and parameters will be shared if there’s interest.)