Model Description
A conversion of ChenkinNoob-XL-V0.2 to rectified flow (RF).
For the main model description, please refer to this model repo.
RF lets this model get away from the greyness of base EPS (epsilon-prediction) solutions, provides vivid colors, and unlocks better lighting adherence (e.g. very dark or high-contrast scenes) without requiring training-time tricks like offset noise.
It also sustains high stability across a wide range of CFG values without suffering from the common downfalls of other base models.
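For readers unfamiliar with rectified flow, here is a minimal sketch of the flow-matching objective this kind of conversion trains on, assuming the common SD3-style convention (clean data at t=0, pure noise at t=1); names and shapes are illustrative, not taken from the actual training code.

import torch
import torch.nn.functional as F

def rectified_flow_loss(model, x0, t, cond):
    # x0:   clean latents, shape (B, C, H, W)
    # t:    timesteps in [0, 1], shape (B,)
    # cond: conditioning, e.g. text embeddings
    noise = torch.randn_like(x0)
    t_ = t.view(-1, 1, 1, 1)
    # Straight-line interpolation between data and noise.
    x_t = (1.0 - t_) * x0 + t_ * noise
    # The model is trained to predict the constant velocity along that line,
    # so sampling can follow near-straight paths back to the data.
    target_velocity = noise - x0
    pred_velocity = model(x_t, t, cond)
    return F.mse_loss(pred_velocity, target_velocity)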
Developed by: Cabal Research (Bluvoll, Anzhc)
Compute provided by: Chenkin, Heathcliff
License: fair-ai-public-license-1.0-sd
Finetuned from model: ChenkinNoob-XL-V0.2
Bias and Limitations
Standard biases and limitations of the Danbooru dataset apply.
Community Guide
A basic, standalone getting-started guide.
Reserved for Chenkin and nian__gao233
Getting Started Guide
Recommendations
Inference
Comfy
(A workflow is available alongside the model in the repo.)
Same as your normal inference, but with the addition of the SD3 sampling node (ModelSamplingSD3), since this model is flow-based; a sketch of the shift it applies follows the parameter list below.
Recommended Parameters:
Sampler: Euler, DPM++ SDE, etc.
Steps: 20-28
CFG: 3-6
Shift: 3-8
Schedule: Normal/Simple/SGM Uniform/Beta
Positive Quality Tags: masterpiece, best quality, aesthetic
Negative Tags: worst quality, normal quality, bad anatomy, low resolution
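As a reference for what the Shift value actually does, below is a minimal sketch assuming the standard SD3-style time shift applied to the flow sigmas (the formula we believe the SD3 sampling node uses); the function name is illustrative.

import torch

def shift_sigmas(sigmas: torch.Tensor, shift: float) -> torch.Tensor:
    # SD3-style time shift: sigma' = shift * sigma / (1 + (shift - 1) * sigma).
    # Larger shift values push more of the step budget toward high-noise
    # timesteps, changing how the 20-28 steps are spread over the schedule.
    return shift * sigmas / (1.0 + (shift - 1.0) * sigmas)

# Example: a simple linear sigma ramp from 1.0 (pure noise) to 0.0 (clean),
# reshaped with Shift = 4 (inside the recommended 3-8 range).
base = torch.linspace(1.0, 0.0, steps=25)
print(shift_sigmas(base, shift=4.0))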
A1111 WebUI
(All screenshots are reused from our other RF release, as there is no difference in setup.)
Recommended WebUI: ReForge - it has native support for flow models, and we've PR'd native support for our Flux2vae-based SDXL modification.
How to use in ReForge:
(ignore the Sigma max field at the top; it is not used for RF)
Support for RF in ReForge is being implemented through a built-in extension:
Set your parameters to match, and you're good to go.
Recommended Parameters:
Sampler: Euler Comfy, Euler, DPM++ SDE Comfy, etc. ALL VARIANTS MUST BE RF OR COMFY VARIANTS, IF AVAILABLE. In ComfyUI this routing is automatic, but not in the WebUI.
Steps: 20-28
CFG: 3-6
Shift: 3-8
Schedule: Normal/Simple/SGM Uniform/Beta
Positive Quality Tags: masterpiece, best quality, aesthetic
Negative Tags: worst quality, normal quality, bad anatomy, low resolution
ADETAILER FIX FOR RF: By default, ADetailer discards the Advanced Model Sampling extension, which breaks RF. You need to add AMS to this part of the settings:
Add advanced_model_sampling_script,advanced_model_sampling_script_backported there.
If that does not work, go into the ADetailer extension folder, find args.py, open it, and replace the built-in scripts tuple like this:
Here is the snippet for easy copy-pasting:
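# Our reading of the fix above (an assumption, not from ADetailer's docs):
# scripts listed in this tuple are treated as built-in and stay enabled during
# ADetailer's own detailing pass, so adding the two advanced_model_sampling
# entries keeps the RF sampling patch applied.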
_builtin_script = (
"advanced_model_sampling_script",
"advanced_model_sampling_script_backported",
"hypertile_script",
"soft_inpainting",
)
Or use this fork of Adetailer - https://github.com/Anzhc/aadetailer-reforge
Training
Training Details
Samples seen (unbatched steps): ~47 million
Learning Rate: 2e-5
Effective Batch Size: 1376
Precision: Mixed BF16
Optimizer: AdamW8bit with Kahan Summation
Weight Decay: 0.01
Schedule: Constant with warmup
Timestep Sampling Strategy: Complicated; the first 2 epochs use "Logit Normal", epoch 3 onwards uses "Uniform" (a sketch of both strategies follows this list)
SD3 Shift: 2
Text Encoders: Frozen
Keep Token: False
Tag Dropout: 10%
Uncond Dropout: 10%
Shuffle: True
Additional Features used: Protected Tags, Cosine Optimal Transport.
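As a rough illustration of the two timestep sampling strategies named above (a sketch, not the actual training code; the logit-normal mean/std here are generic defaults, not the values used in training):

import torch

def sample_timesteps(batch_size: int, strategy: str = "logit_normal",
                     mean: float = 0.0, std: float = 1.0) -> torch.Tensor:
    if strategy == "logit_normal":
        # Logit-normal: squash a normal sample through a sigmoid, which
        # concentrates timesteps around the middle of [0, 1].
        return torch.sigmoid(torch.randn(batch_size) * std + mean)
    # Uniform: every timestep in [0, 1] is equally likely.
    return torch.rand(batch_size)

# Epochs 1-2 of this model's schedule: logit-normal; epoch 3 onward: uniform.
t_early = sample_timesteps(4, "logit_normal")
t_late = sample_timesteps(4, "uniform")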
Training Data
4 full and 1 partial epochs of an extended Danbooru dataset (~10M).
LoRA Training
Pochi.toml is a basic TOML config for use with https://github.com/67372a/LoRA_Easy_Training_Scripts/tree/refresh (MAKE SURE TO USE THE refresh BRANCH); it comes ready to work.
Hardware
The model was trained on an 8xH20 node.
Software
Custom fork of SD-Scripts (maintained by Bluvoll)
Acknowledgements
Testers
Everyone in the server who tested the model throughout its training and provided feedback, including but not limited to:
Shinku
yoinked
low channel
Anzhc
lylogummy
Silvelter
brittle
Darren Laurie
L_A_X
Nebulae
Francisco
WANG
youhuang
ztxzhy
Drac
user
nian__gao233
DUO
Kai Wong
Requiredforsomereason
spawner
peoscrha
waww
itterative
Nama M
Talan
Magpie
BKM Desu
花火流光
tairitsujiang
123
2222k
青苇
Showcase Images
Drac
Talan
Yoinked
Silvelter
Itterative
Hardware
Chenkin and Heathcliff for providing compute.










