THE PRUNED MODEL IS PROTO-MIX; THE FULL MODEL IS FINAL-MIX.
For the image examples:
LEFT is Proto-Mix
RIGHT is Final-Mix
I initially wanted to upload all the Based-mixes onto one page, but alas it doesn't work that way here. Make sure you look at the Hugging Face repo I created to host ALL the Based-mixes I have made so far, along with all future entries that will also be uploaded here. Pruned versions made with the model toolkit extension will be uploaded there as well, for all previous and future Based-mixes.
Based65 Proto-Mix and Final-Mix were made with the aid of the Supermerger extension.
Final-Mix also made use of the MBW extension.
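For anyone curious what these merge tools actually do under the hood, here is a minimal sketch of a plain weighted-sum merge. The key names, toy weights, and alpha value are all made up for illustration; a real merge interpolates torch tensors loaded from .safetensors/.ckpt state dicts, not lists of floats.

```python
# Hypothetical sketch of a weighted-sum checkpoint merge:
# merged = (1 - alpha) * model_A + alpha * model_B, applied per weight tensor.

def weighted_sum_merge(state_a, state_b, alpha):
    """Interpolate two state dicts key-by-key."""
    merged = {}
    for key, wa in state_a.items():
        wb = state_b[key]
        merged[key] = [(1 - alpha) * a + alpha * b for a, b in zip(wa, wb)]
    return merged

# Toy "state dicts" standing in for two models being mixed.
model_a = {"unet.block0.weight": [1.0, 2.0], "unet.block1.weight": [0.0, 4.0]}
model_b = {"unet.block0.weight": [3.0, 0.0], "unet.block1.weight": [2.0, 0.0]}

merged = weighted_sum_merge(model_a, model_b, alpha=0.5)
print(merged["unet.block0.weight"])  # → [2.0, 1.0]
```

At alpha=0.5 this is an even 50/50 blend; sliding alpha toward 0 or 1 pulls the result toward model A or model B respectively.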
Based65 was made using HLL3's final epoch and AOM2_hard as the base. I threw in some other mixes I liked to stray further away from AOM2_hard's look (that realistic rib look I didn't care for, which also showed up on Based64) while maintaining its NSFW capabilities.

However, despite its great appearance, Based65-Final-Mix seems to have a harder time reproducing precise LORA details compared to Proto-Mix and Based64. The reason? Some of the models I mixed apparently already included finetuned models I wasn't aware of, and that CAN affect LORA compatibility. So if your LORA details have trouble showing up on Based65-Final-Mix, it's because more than 2 finetuned models ended up in the mixing recipe; going above 2 seems to make LORA compatibility wonky. My plan for future Based-mixes is to take a close look at the creation process of the random models I put into my mixes and make sure they're just another basic anime model mix, so that both I and anyone using my mixes can easily generate "close to accurate" details with their baked LORAs.
BE WARNED IF YOU PLAN ON USING 65-FINAL-MIX IN YOUR MERGES
Using Proto-Mix should be fine, because I can only recall 2 "finetuned" models being used in that merge. I might just need to understand MBW a bit better to improve LORA compatibility in potential future Based-mixes that use models I'm unaware are actually finetuned merges. For now, I'll be keeping an eye out and putting more focus on the origins of the models that go into the mixing recipe for future models, which I didn't do for 64 and 65.
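Since MBW keeps coming up: the difference from a plain weighted sum is that each U-Net block gets its own merge ratio instead of one global alpha, which is why it's a plausible lever for tuning LORA compatibility block by block. A hedged sketch, with made-up block names, ratios, and toy weights standing in for real tensors:

```python
# Hypothetical sketch of a block-weighted (MBW-style) merge: per-block
# alphas let e.g. the OUT blocks lean toward model B while the IN blocks
# stay close to model A. All names and values here are illustrative.

BLOCK_ALPHAS = {"IN00": 0.1, "MID": 0.5, "OUT11": 0.9}

def alpha_for_key(key):
    """Pick the per-block ratio; fall back to 0.5 for unmatched keys."""
    for block, alpha in BLOCK_ALPHAS.items():
        if block in key:
            return alpha
    return 0.5

def mbw_merge(state_a, state_b):
    merged = {}
    for key, wa in state_a.items():
        a = alpha_for_key(key)
        merged[key] = [(1 - a) * x + a * y for x, y in zip(wa, state_b[key])]
    return merged

model_a = {"unet.IN00.weight": [1.0], "unet.OUT11.weight": [1.0]}
model_b = {"unet.IN00.weight": [0.0], "unet.OUT11.weight": [0.0]}

merged = mbw_merge(model_a, model_b)
# IN00 stays near model A's value; OUT11 ends up near model B's.
print(merged["unet.IN00.weight"], merged["unet.OUT11.weight"])
```

In real MBW use there are 25-ish sliders (12 IN blocks, MID, 12 OUT blocks), but the mechanic is just this: one interpolation ratio per block instead of one for the whole checkpoint.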
Yeah, model merging takes time; AI image work in general takes time once you decide to go beyond just generating stuff. So please support me on the socials and platforms where I post outputs from my models/LORAs, or on the platforms where you can support me financially by donating or hiring me for services; they're all listed on this linktree. The money will go toward bigger NVMe SSDs that don't have garbage file caching, so model merging doesn't have stupid moments where it gets stuck and I have to restart the whole process.