Stats | 67
Reviews | 16
Published | Dec 23, 2024
Training | Steps: 2,000; Epochs: 10
Usage Tips | Clip Skip: 1
Hash | AutoV2 9F8D9919DB
The plan here is to make finetunes of various models to coincide with my album releases.
I highly recommend using zer0int's finetunes of CLIP-L in conjunction with this, and really any, Flux finetune; the performance uplift is frankly spectacular.
They can be downloaded here: https://huggingface.co/zer0int/CLIP-GmP-ViT-L-14/blob/main/ViT-L-14-BEST-smooth-GmP-TE-only-HF-format.safetensors
There is a difference between the two of them, although which is better seems to be uncertain. It's worth keeping both: you may find yourself having trouble with a particular prompt, and find that switching which CLIP you are using suddenly fixes the issue. A sketch of wiring one into a diffusers pipeline follows below.
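For reference, here is a minimal sketch of how one of these text encoders might be swapped into a diffusers Flux pipeline. The base model ID, prompt, and local file name are assumptions (the safetensors name is taken from the link above); `strict=False` is used because incidental buffer keys such as `position_ids` can differ between checkpoints:

```python
import torch
from safetensors.torch import load_file
from transformers import CLIPTextModel
from diffusers import FluxPipeline

# Base pipeline -- assuming FLUX.1-dev here; use whichever Flux checkpoint
# you normally run.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)

# Instantiate a stock CLIP-L text encoder, then load zer0int's finetuned
# weights over it. The "TE-only-HF-format" file should match the HF
# CLIPTextModel state dict layout.
clip = CLIPTextModel.from_pretrained(
    "openai/clip-vit-large-patch14", torch_dtype=torch.bfloat16
)
state = load_file("ViT-L-14-BEST-smooth-GmP-TE-only-HF-format.safetensors")
clip.load_state_dict(state, strict=False)

pipe.text_encoder = clip  # the T5 encoder (text_encoder_2) is left alone
pipe.to("cuda")

image = pipe("a figure leaning against pink concrete",
             num_inference_steps=28).images[0]
image.save("out.png")
```

The same swap works in ComfyUI by pointing the CLIP loader at the downloaded safetensors file instead of the stock CLIP-L.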
PINK CONCRETE
Listen to the album here: https://open.spotify.com/album/6mb2KnxcVOIKZBzEiq2Mdg?si=EIlFSDTfSfaFJglMPttk4g
Free resources for AI and ML (everything on the Patreon is 100% free): patreon.com/yolkhead
Music video for Pink Concrete: https://www.instagram.com/reel/DD4Ah0LObCe
This one was built on a process I've used in the past for SDXL finetuning, albeit more sophisticated here: I needed to produce much higher-quality images for my dataset in order to avoid damaging the model's unet in unintended ways. In general, the higher quality a model is, the more care its training dataset requires, since any drop in dataset quality can subjectively degrade the model's original compositional strengths.
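The curation pipeline itself isn't published, so purely as an illustration of the idea (the threshold, paths, and helper name are made up), this is the kind of resolution gate you might put in front of a trainer so lower-quality images never enter the dataset:

```python
import shutil
from pathlib import Path
from PIL import Image

MIN_EDGE = 1024  # hypothetical floor; match it to your training resolution

def filter_dataset(src: Path, dst: Path) -> None:
    """Copy only images whose shorter edge clears the bar, so lower-quality
    samples never reach the trainer."""
    dst.mkdir(parents=True, exist_ok=True)
    for path in src.iterdir():
        if path.suffix.lower() not in {".png", ".jpg", ".jpeg", ".webp"}:
            continue
        with Image.open(path) as im:
            keep = min(im.size) >= MIN_EDGE
        if keep:
            shutil.copy2(path, dst / path.name)  # copy as-is, no re-encode

filter_dataset(Path("raw_images"), Path("train_images"))
```

In practice you would likely gate on more than resolution (artifacts, aesthetics, caption accuracy), but the principle is the same: the cleaner the base model, the stricter the filter.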
This is an overall uplift. It doesn't do NSFW the way some Flux finetunes do, but to be fair, no Flux finetune at the moment can touch SDXL on that front, so it's a moot point. My primary concern with this model was to undo much of the safety training on base Flux, improving unet quality and overall adherence as a starting point for future finetuning (and it seems to have worked better than anticipated).