
Blessing Mix recipe (aka. Bracing Evo Mix Clone)

There are two main ways to create a model: train one, or merge multiple trained/merged models into a new one. Training a model is a very GPU-intensive and time-consuming process, so it is practically impossible for an individual to create a complete model alone. However, it has been shown that merging the somewhat incomplete models trained by individuals can produce a good-quality merged model. While training a model is difficult, merging rests on a simple principle, which makes it far more accessible to individuals. That said, there are many ways to merge, so creating a good merged model is still not easy.

Many models have been created by merging multiple models. Some creators have published the recipes behind their merges, and each publication has in turn triggered the creation of many more merged models and more published recipes. Others, like chilloutmix, have been widely loved by users even though no detailed recipe has been published for them. Perhaps the most famous merged model with a fairly detailed recipe is OrangeMixs (https://huggingface.co/WarriorMama777/OrangeMixs), which documents in detail how, and why, it merges at the block level.

However, many merged models do not publish their recipes, which can be problematic because of the licensing of each model used in the merge. If a source model was leaked through unauthorized channels, as the Novel AI model was, it may become a problem in the future. An example is Anything v3, which was trained on top of the Novel AI model. If the copyright holder of the Novel AI model asserts their rights someday, every model that used it, whether trained on it or merged from it, risks being thrown away. Such models may also be restricted from commercial use due to licensing issues.

How do we fix these issues? There have been several recent attempts to solve this licensing problem, including the OpenBra training model by @PleaseBanKai, the author of BRA v5. The most recent attempt I know of is a merged model called BracingEvoMix by @sazyou_roukaku (https://twitter.com/sazyou_roukaku).

BracingEvoMix is based on @PleaseBanKai's OpenBra and merges several trained models that have no license problems (or at least fewer of them).

Surprisingly, BracingEvoMix is very well made. In my own simple tests it performed better than chilloutmix and better than BRA v5. Best of all, it is a face-LoRA-friendly model, so it looks great with several face LoRAs. It was disappointing that the recipe for BracingEvoMix is not publicly available. However, @sazyou_roukaku disclosed the models used as its sources in the following documentation, which explains how he tried to minimize commercial-licensing problems and gives some recipe hints.

https://huggingface.co/sazyou-roukaku/BracingEvoMix

Blessing Mix

Since the models used to make BracingEvoMix are all publicly available, I decided to make a clone that reproduces it based on the hints in that document. The recipe is not a perfect reproduction of the original, but I am sharing it in the hope that you can use this imperfect recipe to create another merged model or to improve it.

Tools

  • SuperMerger (an extension for stable-diffusion-webui; all merges below were performed with it)

Source models

Please see also https://huggingface.co/sazyou-roukaku/BracingEvoMix

Merging Recipe

All notations below follow SuperMerger's conventions.

Since I don't know the original, optimal merge recipe, I worked on the output_blocks and on the input_blocks + middle_block separately, repeatedly adjusting the weights at each merge step to maximize the cosine similarity with BracingEvoMix.
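For reference, the similarity measure can be computed along the following lines. This is a minimal sketch rather than the exact script I used: it assumes two SD 1.x .safetensors checkpoints with the standard model.diffusion_model.* key names, concatenates all tensors under a given key prefix, and takes the cosine similarity of the flattened vectors.

```python
import torch
import torch.nn.functional as F
from safetensors.torch import load_file

def block_cosine_similarity(path_a: str, path_b: str, prefix: str) -> float:
    """Cosine similarity between all tensors whose keys start with `prefix`."""
    a, b = load_file(path_a), load_file(path_b)
    keys = sorted(k for k in a if k.startswith(prefix) and k in b)
    va = torch.cat([a[k].flatten().float() for k in keys])
    vb = torch.cat([b[k].flatten().float() for k in keys])
    return F.cosine_similarity(va, vb, dim=0).item()

# e.g. score a merge candidate's output_blocks against BracingEvoMix
sim = block_cosine_similarity(
    "TEMP1.safetensors", "BracingEvoMix_v1.safetensors",
    "model.diffusion_model.output_blocks.")
print(f"{sim * 100:.4f}%")
```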

I made three candidates over the course of a few days, learned a few tricks along the way, and the recipe below is the one selected for the most expressive and most similar model.

I had to iterate dozens of times to get each step right.
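Two SuperMerger operations appear throughout the recipe: weighted sum, A x (1-alpha) + B x alpha, and add difference, A + (B - C) x alpha. The 26-element alpha vectors follow SuperMerger's block order BASE, IN00~IN11, M00, OUT00~OUT11. Below is a minimal sketch of both operations applied to raw state dicts; the block_of key-mapping helper is my own simplified assumption, not SuperMerger's actual code.

```python
import re

# SuperMerger block order: BASE, IN00..IN11, M00, OUT00..OUT11 (26 weights)
def block_of(key: str) -> int:
    """Map a state-dict key to its index in the 26-element alpha vector (simplified)."""
    m = re.search(r"diffusion_model\.(input_blocks|middle_block|output_blocks)\.(\d+)", key)
    if m is None:
        return 0                      # everything outside the UNet counts as BASE here
    kind, idx = m.group(1), int(m.group(2))
    if kind == "input_blocks":
        return 1 + idx                # IN00..IN11 -> 1..12
    if kind == "middle_block":
        return 13                     # M00
    return 14 + idx                   # OUT00..OUT11 -> 14..25

def weighted_sum(a, b, alpha):
    """A x (1-alpha) + B x alpha, applied per block; a, b share the same keys."""
    return {k: a[k] * (1 - alpha[block_of(k)]) + b[k] * alpha[block_of(k)] for k in a}

def add_difference(a, b, c, alpha):
    """A + (B - C) x alpha, applied per block (a scalar step like `x 0.08`
    is just an alpha vector with the same value in every position)."""
    return {k: a[k] + (b[k] - c[k]) * alpha[block_of(k)] for k in a}
```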

output_blocks

  • OpenBra + (Evt_V4_e04_ema - v1-5-pruned-emaonly) x 0.08 = TEMP1

    • Add Difference - Evt_V4_e04 is used to make Asian faces (as stated in the doc).

    • Cosine similarity of output_blocks with BracingEvoMix: 92.2178%

  • TEMP1 + (diamondcoalmix_diamondcoalv2pruned - v1-5-pruned-emaonly) x 0.1 = TEMP2

    • Add Difference - diamondcoalmix is also used to make Asian faces.

    • Cosine similarity of output_blocks: 97.5561%

  • TEMP2 x 0.95 + diamondcoalmix_diamondcoalv2pruned x 0.05 = TEMP3

    • Some skin overfitting was observed -> a weighted sum was added to reduce the overfitting.

    • Cosine similarity of output_blocks: 97.5790%

  • TEMP3 x 0.9 + epicrealism_newAge x 0.1 = TEMP4

    • Cosine similarity of output_blocks: 97.6665%

  • TEMP4 + (sxd_10Pruned - v1-5-pruned-emaonly) x 0.18 = TEMP5

    • Add Difference - add sxd for NSFW

    • Cosine similarity of output_blocks: 97.9988%

  • TEMP5 + (OpenBra1.1 - v1-5-pruned-emaonly) x 0.06 = TEMP6

    • Add Difference - add OpenBra1.1

    • Cosine similarity of output_blocks: 98.0669%

  • TEMP6 x (1-alpha) + dreamshaper_6NoVae x alpha = TEMP7

    • alpha = (0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.15,0.15,0.15,0.15,0.15,0.15)

    • OUT06~OUT11 only

    • Cosine similarity of output_blocks: 98.3657%

  • TEMP7 + (OpenBra1.1 - v1-5-pruned-emaonly) x alpha = FINAL_OUTPUTS

    • Add Diff: alpha = (0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.08,0.08,0.0,0.0,0.0,0.0,0.0,0.0,0.0)

    • OUT03, OUT04 only

    • Cosine similarity of output_blocks: 98.5187%

Block-level cosine similarity of the FINAL_OUTPUTS

|  OUT03   |  OUT04   |  OUT05   |  OUT06   |  OUT07   |  OUT08   |  OUT09   |  OUT10   |  OUT11   |
| 97.4805% | 97.5121% | 98.0665% | 98.9886% | 99.0232% | 96.8549% | 99.7340% | 99.7166% | 99.2922% |
Total: 98.5187%
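The per-block breakdown above can be reproduced with a loop over block prefixes, reusing the block_cosine_similarity sketch from earlier (file names assumed):

```python
for i in range(3, 12):  # OUT03..OUT11, as in the table above
    prefix = f"model.diffusion_model.output_blocks.{i}."
    sim = block_cosine_similarity(
        "FINAL_OUTPUTS.safetensors", "BracingEvoMix_v1.safetensors", prefix)
    print(f"OUT{i:02d}: {sim * 100:.4f}%")
```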

input_blocks

  • OpenBraHalf + (dreamshaper_6NoVae - v1-5-pruned-emaonly) x alpha = SAVE1

    • Add Diff: alpha = (0.0,0.95,0.95,0.55,0.55,0.55,0.85,0.85,1.0,1.0,1.0,1.0,1.0,1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0)

    • OpenBraHalf = OpenBra x 0.5 + v1-5-pruned-emaonly x 0.5

    • Cosine similarity of input_blocks: 97.3165%

  • SAVE1 + (epicrealism_newEra - v1-5-pruned-emaonly) x alpha = SAVE2

    • Add Diff: alpha = (0.0,0.0,0.0,0.0,0.0,0.0,0.08,0.1,0.1,0.08,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0)

    • IN05~IN08 only

    • Cosine similarity of input_blocks: 97.5255%

  • SAVE2 x (1-alpha) + epicrealism_newEra x alpha = SAVE3

    • alpha = (0.0,0.0,0.0,0.0,0.0,0.0,0.05,0.05,0.05,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0)

    • only the IN05, IN06, and IN07 blocks, with a weight of 0.05

    • Cosine similarity of input_blocks: 97.5408%

  • SAVE3 x (1-alpha) + Evt_V4_e04_ema x alpha = FINAL_INPUTS

    • alpha = (0.0,0.0,0.0,0.0,0.0,0.0,0.3,0.24,0.24,0.3,0.0,0.0,0.0,0.28,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0)

    • IN05~IN08 and M00 only

    • Cosine similarity of input_blocks: 98.6075%

Block-level cosine similarity of the FINAL_INPUTS

|   IN01   |   IN02   |   IN04   |   IN05   |   IN07   |   IN08   |   IN09   |   MI00   |
| 99.7211% | 99.8581% | 99.1166% | 99.5738% | 96.9794% | 98.2844% | 99.6233% | 95.7031% |
Total: 98.6075%

Note: as you can see, the cosine similarity of the input blocks is not computed over all input_blocks; the IN00, IN03, IN06, IN10, and IN11 blocks are omitted, so the high similarity figure does not exactly represent the overall cosine similarity.

input_blocks + middle_block + output_blocks

BlessingMixV0 = FINAL_OUTPUTS x (1-alpha) + FINAL_INPUTS x alpha (1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0)

  • i.e., BASE, IN00~IN11, and M00 are taken from FINAL_INPUTS, while OUT00~OUT11 are kept from FINAL_OUTPUTS, as illustrated below.
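In terms of the merge sketch above, this final combine just selects whole blocks from each intermediate model; a minimal illustration, assuming final_outputs and final_inputs are already-loaded state dicts:

```python
# alpha = 1.0 for BASE, IN00..IN11, M00 -> those blocks come from FINAL_INPUTS;
# alpha = 0.0 for OUT00..OUT11 -> the output blocks stay from FINAL_OUTPUTS.
alpha = [1.0] * 14 + [0.0] * 12
blessing_mix_v0 = weighted_sum(final_outputs, final_inputs, alpha)
```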

Base encoder

In this case I couldn't use cosine similarity, so I compared the generated output with that of the original BracingEvoMix instead.

  • TRY1 = (dreamshaper_5PrunedNoVaeTrain x 0.5 + epicrealism_newAge x 0.5) x 0.5 + OpenBra x 0.5

    • only the BASE block was used.

    • the BASE block can be merged by the following method:

      • BASE_FIXED_MODEL = MODEL x (1-alpha) + FOR_BASE_MODEL x alpha (1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0) - a weighted sum over the BASE block only.

  • BlessingEncV0 = TRY1 x (1-alpha) + Evt_V4_e04_ema x alpha (0.15,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0)

    • add a bit of Evt_V4_e04 for poses.
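Since the BASE weight covers everything outside the UNet blocks (notably the text encoder), a BASE-only weighted sum effectively swaps or blends the encoder. A sketch with the helpers from earlier (variable names assumed, each an already-loaded state dict):

```python
# BASE_FIXED_MODEL = MODEL x (1-alpha) + FOR_BASE_MODEL x alpha, BASE only
base_only = [1.0] + [0.0] * 25
base_fixed_model = weighted_sum(model, for_base_model, base_only)

# BlessingEncV0: blend 15% of Evt_V4_e04's BASE block into TRY1
enc_alpha = [0.15] + [0.0] * 25
blessing_enc_v0 = weighted_sum(try1, evt_v4_e04_ema, enc_alpha)
```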

BlessingMix V1

BlessingMix V1 = BlessingMixV0 + BlessingEncV0 (BASE only)

  • BlessingMix V1 = BlessingMixV0 x (1-alpha) + BlessingEncV0 x alpha (1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0)

  • replace the BASE encoder only.

Tips and Tricks

  • The BlessingEncV0 base encoder is not a perfect match, but the rest of the model can be compared with the original BracingEvoMix by replacing the BASE encoder with BracingEvoMix's encoder:

    • BlessingMix V1-EvoEnc = BlessingMixV0 x (1-alpha) + BracingEvoMix_v1 x alpha (1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0)
