Published | Apr 1, 2025 |
Hash | AutoV2 2BF163E7CD |
UPDATE 1 (still lacking real data);
Positive Prompt;
masterpiece, most aesthetic, very aesthetic,
real, realistic, real person,
score_9, score_8, score_7,
source_real,
<bigasp captions go here>
real, realistic, realistic background, real,
<bigasp extra caption stuff for solidifiers>
<grid, depiction, offset tags go here>
<omega character captions go here>
<background and setting tags here, year tags, objects, and everything else>
Negative Prompt;
ai generated, lowres,
disgusting, very displeasing, normal aesthetic,
(anime, anime style, source_anime, 2d, 2d \(artwork\):1.1),
film grain, low quality, 1990s, vhs filter,
bad edit, bad anatomy,
jpeg artifacts,
monochrome, greyscale,
3d, 3d \(artwork\),
TXT2IMG;
RES_2S -> BETA is good.
DPMPP_2S A -> BETA is good.
Euler A -> BETA is kinda bad.
Euler -> BETA is okay, surprisingly.
RES_2M doesn't work.
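If you run this through diffusers instead of a node UI, here's a minimal txt2img sketch. It's an assumption-heavy illustration: the filename is a placeholder, and the RES/ancestral samplers above are ComfyUI-side custom nodes with no direct diffusers equivalent, so plain Euler with the beta sigmas flag stands in for Euler -> BETA. The v-prediction flags are the important part; without them a v-pred checkpoint renders noise.

import torch
from diffusers import StableDiffusionXLPipeline, EulerDiscreteScheduler

# Placeholder filename; point it at the downloaded checkpoint.
pipe = StableDiffusionXLPipeline.from_single_file(
    "BigAspVpredRefit.safetensors", torch_dtype=torch.float16
).to("cuda")

# The checkpoint is v-pred, so the scheduler has to be told or you get noise.
pipe.scheduler = EulerDiscreteScheduler.from_config(
    pipe.scheduler.config,
    prediction_type="v_prediction",
    rescale_betas_zero_snr=True,  # usual companion flag for v-pred checkpoints
    use_beta_sigmas=True,         # the "BETA" schedule recommended above
)

image = pipe(
    prompt="masterpiece, most aesthetic, very aesthetic, real, realistic, ...",
    negative_prompt="ai generated, lowres, disgusting, very displeasing, ...",
    num_inference_steps=28,
).images[0]
image.save("txt2img.png")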
IMG2IMG;
Set denoise to 0.7 or lower; this is due to the fit being incomplete so far.
RES_2S -> BETA
Euler A -> BETA
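Same idea in diffusers for img2img, again just a sketch with placeholder filenames; strength is the denoise knob, so keep it at or below 0.7:

import torch
from diffusers import StableDiffusionXLImg2ImgPipeline, EulerDiscreteScheduler
from diffusers.utils import load_image

pipe = StableDiffusionXLImg2ImgPipeline.from_single_file(
    "BigAspVpredRefit.safetensors", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = EulerDiscreteScheduler.from_config(
    pipe.scheduler.config,
    prediction_type="v_prediction",
    rescale_betas_zero_snr=True,
)

init = load_image("input.png").resize((1024, 1024))
out = pipe(
    prompt="masterpiece, most aesthetic, very aesthetic, real, realistic, ...",
    image=init,
    strength=0.65,  # this is the denoise; keep <= 0.7 while the fit is incomplete
).images[0]
out.save("img2img.png")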
As these things refit, the growing pains get a bit less pronounced and the high-quality absurdres images cut through; in this case that's mostly 3d and anime.
I don't think I have enough real data to fully refit this model in a capable way. I'll need to source more real data or use more of the Omega clip strength.
Heavy negative for anime helps the real cut through, but it's not enough.
If I can get access to the original BigAsp images, I can retag those directly and refit using a smaller percentage of the images.
100k real images isn't enough. I probably need at least a million, but tagging a million real images will take quite a while, so bear with this one for now while the 1 million image Omega dataset refit completes its first epoch.
There are enough anime images to cause a significant shift, even when using the BigAsp clips. We'll see what happens when the epoch completes tonight.
There will be ONE final version, AKA Stage 3, currently training with the standard 1 mil image pack that seems so effective at fitting models.
That will be the BigAsp VPRED Solidified, but for now it's just cooking.
https://huggingface.co/AbstractPhil/OMEGA-BIGASP/tree/main
If you have BigAsp2 and are tired of downloading checkpoints, but also know how to make merges, go here; https://civitai.com/models/1421066
You can find the stage 1 and stage 2 loras here; grab the most recent and make the clip mergers yourself if you like, or simply merge the BigAsp clips yourself and come up with a better mix than I did.
I did NOT train the clips, as my clips and the BigAsp clips are very different, so I opted to train the UNET using half of one of mine and the rest BigAsp clips. Essentially this is heavily finetuned using BigAsp, with Omega Clip_L as a finetune controller.
Omega Clip_L has well over 100 million samples trained, so if there is an expert, that would be the grandmaster.
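If you do want to roll your own mix, the clip merger is just a weighted average of the two CLIP_L state dicts. A minimal sketch, assuming both encoders are exported as standalone .safetensors files with matching keys (filenames are placeholders); the same lerp at other ratios gives you OmegaHeavy/AspHeavy style mixes:

import torch
from safetensors.torch import load_file, save_file

# Filenames are placeholders, not the actual release artifacts.
a = load_file("bigasp_clip_l.safetensors")
b = load_file("omega_clip_l.safetensors")
ratio = 0.5  # 0.75 reproduces the "75% BigAsp" Stage 2 style mix

merged = {}
for key, ta in a.items():
    tb = b[key]  # assumes identical key layouts between the two encoders
    assert ta.shape == tb.shape, f"shape mismatch at {key}"
    merged[key] = (ratio * ta.float() + (1.0 - ratio) * tb.float()).to(torch.float16)

save_file(merged, "omegaasp_clip_l.safetensors")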
Yeah don't.... don't type things into this unless you really want to see them.
BigAsp2 is fucking wild.
This thing does not conform to any standard deviations.
It does not comply with standard finetune options.
It completely ignores finetune training at times.
Even simple finetune data can destroy LARGE amounts of what was trained into it.
Converting it to vpred involved training only the noisy timesteps with similar yet divergent realistic data.
This was a semi-successful and very low-cost conversion, which is pretty cool. It doesn't change the fact that the model is untamable, though; which is why I merged it with the order-bringer, Omega.
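For the curious, that recipe boils down to a training step like the sketch below: sample only the noisy end of the timestep range and regress against the v-prediction target. Everything here is illustrative, the 0.6 cutoff included; unet and scheduler are assumed diffusers-style objects.

import torch
import torch.nn.functional as F

def vpred_conversion_step(unet, scheduler, latents, cond, t_min_frac=0.6):
    # Sample only the high-noise end of the schedule; the 0.6 cutoff is an
    # assumption for illustration, not the cutoff actually used.
    T = scheduler.config.num_train_timesteps
    t = torch.randint(int(T * t_min_frac), T, (latents.shape[0],),
                      device=latents.device)

    noise = torch.randn_like(latents)
    noisy = scheduler.add_noise(latents, noise, t)

    # v-target: v = sqrt(abar_t) * eps - sqrt(1 - abar_t) * x0.
    # DDPM-style training schedulers in diffusers expose this as get_velocity.
    target = scheduler.get_velocity(latents, noise, t)

    # SD-style UNET call; an SDXL UNET also wants added_cond_kwargs,
    # omitted here for brevity.
    pred = unet(noisy, t, encoder_hidden_states=cond).sample
    return F.mse_loss(pred.float(), target.float())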
50/50 (Balanced): chef's choice, the 50/50 of Omega and BigAsp2. This has the best controllers, but if you want more wild you can go down the chain to the AspHeavy, or if you're feeling exceptionally masochistic, go ahead and grab the full Refit before merger.
25/75 (OmegaHeavy): pretty good, as Omega is quite stable and capable of almost anything.
75/25 (AspHeavy): BigAsp is fun but also fairly untamed and very bad at counting.
I'd say this model is FAR from a plain-English expert, but it's a great VPRED conversion prototype showcasing the power of minimal training using Omega clips.
Disappointing as it was, I expected these BigAsp clips to teach plain English to Omega, and the opposite happened: Omega was lobotomized, so that training was halted and the BigAsp training was allowed to complete instead.
A refitted and finetuned version of BigAsp2, repaired and converted to VPRED for use.
Atop the refit, the V1 merge is;
50/50 OmegaV0001 clips and BigAsp Clips
Finetune trained with;
Stage 1 -> 80,000 samples, middle timesteps trained
  OmegaAsp CLIP_L_1 = BigAsp CLIP_L 50/50 OmegaSim CLIP_L (frozen)
  BigAsp CLIP_G (frozen)
Stage 2 -> ~200,000 samples (I lost track, honestly)
  OmegaAsp CLIP_L_2 = OmegaAsp CLIP_L_1 -> BigAsp CLIP_L 50/50 (75% BigAsp now)
  BigAsp CLIP_G (frozen)
Refitted with frozen clips using the Sim Omega 73 clip_l and clip_g.
This both introduced many safe elements that otherwise don't work in BigAsp and destroyed many of the NSFW elements that completely ruined generations during the conversion.
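The frozen-clips setup itself is simple; a minimal sketch assuming a standard PyTorch training loop, with illustrative names and a placeholder learning rate:

import torch

def freeze_clips_for_refit(unet, text_encoder_l, text_encoder_g, lr=1e-5):
    # Frozen-clips refit: conditioning stays fixed, only the UNET trains.
    # The encoders passed in would be the Sim Omega 73 clip_l/clip_g;
    # lr is a placeholder, not the value actually used.
    for te in (text_encoder_l, text_encoder_g):
        te.requires_grad_(False)  # no gradients into the text towers
        te.eval()                 # lock dropout/norm behavior too
    unet.requires_grad_(True)
    unet.train()
    # Only UNET parameters reach the optimizer, so the clips cannot drift.
    return torch.optim.AdamW(unet.parameters(), lr=lr)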
The 3 versions are intended for those who prefer either the original or the new model more.