Published | May 10, 2025
Training | Steps: 777,917 · Epochs: 1
Usage Tips | Clip Skip: 2
Hash | AutoV2 792E768CD2
AstolfoXL (1EP)
Probably the first (and only) individual full finetuning run done with multi-GPU.
LoKr works, but no thanks. I'm not going to be human anymore, Sanae!
Discord: "Good luck".
Specification
Base model: AstolfoMix-XL, version 255c
Tech report: ch06
Training metrics (TensorBoard): HF
Dataset (images → latents): danbooru2024, e621_2024 (latent caching sketched after this list)
Dataset (tags + captions): meta_lat.json
1 step = 16 images, on 4× RTX 3090 24 GB.
778k steps for 1 epoch: 8.0M + 4.6M = 12.6M images.
Tag + NLP captions with the A1111 token trick
Trainer code: the PR won't be merged
Training parameters: AdamW8bit, UNet LR 1.5e-6, TE LR 1.2e-5, BS4 (4 GPUs), grad accumulation 4, 71% UNet (speed + must underfit); see the optimizer sketch after this list.
75-100+ days for 1 epoch. Trained for 1 epoch only. Checkpoints saved every 10k steps.
Training results and loss curve: TensorBoard on HF
Core concept: Unsupervised learning
Expectation: MID (100% of the data, no filter, no quality tags), or "reality"
How to use
Train LoRAs or merge on top of this model. Compatibility should still be close to 215c. Realistic human content is still supported. "Trust me bro".
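
If "merge on top" is read as a plain weighted average of checkpoints, a minimal sketch looks like this. The filenames and the 0.5 ratio are placeholders, and real merge tools handle key mismatches and per-block ratios more carefully:

```python
from safetensors.torch import load_file, save_file

base = load_file("AstolfoXL_1EP.safetensors")   # placeholder filenames
other = load_file("your_model.safetensors")

alpha = 0.5  # placeholder merge ratio
merged = {
    k: (1 - alpha) * v + alpha * other[k]
    for k, v in base.items()
    if k in other and other[k].shape == v.shape
}
save_file(merged, "merged.safetensors")
```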
Artist tags may not work, but I did train on them. Just dump your "NAI" prompts here.
Use TIPO to expand tag-based prompts with NLP.
Short tag prompts will suffer from background latent noise. Valid tags can be looked up on e621 or Danbooru.
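
For generation, a minimal diffusers sketch that honors the Clip Skip: 2 usage tip might look like this. The checkpoint filename and prompt are placeholders, and A1111-style UIs expose the same setting directly:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Placeholder checkpoint filename; any SDXL single-file checkpoint loads this way.
pipe = StableDiffusionXLPipeline.from_single_file(
    "AstolfoXL_1EP.safetensors", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "1boy, astolfo (fate), pink hair, smile",  # tag-style prompt; expandable with TIPO
    clip_skip=2,                               # matches the Usage Tips field above
    num_inference_steps=28,
).images[0]
image.save("out.png")
```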
Every image is seen exactly once. There is no task or KPI to chase; either that, or the omniscient state has been achieved. The loss curve is flat: it neither converges nor diverges.
Full documentation will be published; it will be as long as the AstolfoMix series.