Yolkhead's Albums

Type: Checkpoint Trained
Base Model: Flux.1 D
Published: Dec 23, 2024 (updated Dec 23, 2024)
Reviews: 61
Format: SafeTensor (verified)
Training: 2,000 steps, 10 epochs
Usage Tips: Clip Skip 1
Hash (AutoV2): 9F8D9919DB
sirrece
The FLUX.1 [dev] Model is licensed by Black Forest Labs, Inc. under the FLUX.1 [dev] Non-Commercial License. Copyright Black Forest Labs, Inc.
IN NO EVENT SHALL BLACK FOREST LABS, INC. BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH USE OF THIS MODEL.

The plan here is to make finetunes of various models to coincide with my album releases.

I highly recommend using zer0int's finetunes of CLIP-L in conjunction with this, and really any, Flux finetune; the performance uplift is frankly spectacular.

They can be downloaded here: https://huggingface.co/zer0int/CLIP-GmP-ViT-L-14/blob/main/ViT-L-14-BEST-smooth-GmP-TE-only-HF-format.safetensors

And here: https://huggingface.co/zer0int/CLIP-GmP-ViT-L-14/blob/main/ViT-L-14-TEXT-detail-improved-hiT-GmP-TE-only-HF.safetensors

There is a difference between the two, although which is better seems uncertain. It's worth keeping both: you may find yourself having trouble with a particular prompt, and switching which CLIP you are using suddenly fixes the issue.
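If you run Flux in ComfyUI, a minimal fetch sketch looks like the following. The paths assume a default ComfyUI layout (alternate CLIP text encoders conventionally go in `models/clip/`), and the `/resolve/` URLs are simply the direct-download form of the `/blob/` page links above:

```shell
# Assumption: a standard ComfyUI install; adjust the path to your setup.
cd ComfyUI/models/clip

# "BEST-smooth" variant
wget https://huggingface.co/zer0int/CLIP-GmP-ViT-L-14/resolve/main/ViT-L-14-BEST-smooth-GmP-TE-only-HF-format.safetensors

# "TEXT-detail-improved" variant
wget https://huggingface.co/zer0int/CLIP-GmP-ViT-L-14/resolve/main/ViT-L-14-TEXT-detail-improved-hiT-GmP-TE-only-HF.safetensors
```

With both files in place you can switch between them per workflow (e.g. in a DualCLIPLoader-style node) and keep whichever one handles a troublesome prompt better.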

PINK CONCRETE

Listen to the album here: https://open.spotify.com/album/6mb2KnxcVOIKZBzEiq2Mdg?si=EIlFSDTfSfaFJglMPttk4g

Free resources for AI and ML (everything on the Patreon is 100% free): patreon.com/yolkhead

Music video for pink concrete: https://www.instagram.com/reel/DD4Ah0LObCe

This one was built on a process I've used in the past for SDXL finetuning, albeit more sophisticated here: I needed to produce much higher-quality images for my dataset in order to avoid damaging the model's unet in unintended ways. In general, the higher quality a model is, the more care its training dataset requires, since any drop in dataset quality can subjectively degrade the model's original compositional strengths.

This is an overall uplift. It doesn't do NSFW the way some of the Flux finetunes do, but to be fair, no Flux finetune at the moment can touch SDXL on that front, so it's a moot point. My primary concern with this model was to undo a lot of the safety training on base Flux to improve the unet quality and overall prompt adherence as a starting point for future finetuning (and it seems to have worked better than anticipated).