
The Philosopher's Stone: Make the Checkpoint of your Desire in 5 Minutes.


The title of the article is quite ridiculous, but it comes from the fact that the most downloaded models on this site were created using the philosopher's stone, and today I plan to show it to you, dear reader!

The process is as simple as alchemy. You take substance X and mix it with substance Y in such a way that you obtain substance Z. This way, you can create the model you desire, remembering and using the algebraic properties of the tensors that make up our substances.

Given the introduction, let's move on to a more practical explanation. Have you ever wondered how checkpoints are made? How do they improve those checkpoints? How is such a miraculous feat achieved that will make me the most renowned alchemist in Babylon?

Well, there are many answers, and just like in alchemy, there are more rumors and false information than correct paths.

"Did you just say our principles are false? How dare you say such blasphemy, you pagan and liar!"

Well, the truth is they are not false, the more accurate word would be: costly. The accessible knowledge, books, and formulas are the most expensive and inaccessible. Basically, the alchemists tell you:

"Hey, young man, do you want to make gold? Here's the formula, but remember, without this laboratory that cost me 100 camels, it will be impossible for you to do it!"

So, the common people kill their illusion as soon as they find out about the madness. Clearly, nobody is willing to pay so many camels just to try and fulfill their dream, potentially failing in the process.

That's the trap. The truth is, most alchemists who boast of having the best laboratory on Earth are actually nothing more than liars whose real laboratory is a room with a small philosopher's stone that gives them what they ask for.

And today, my friends, I'm here to open your eyes to the new world, the world of alchemy!

Basically, the process goes as follows:

"Do you want to create an SD model? Then have a $50,000 server."

"But of course, I don't have that kind of money!"

"Well, do you want to edit the existing model? Then go and download 50,000 images and process them in Colab for a whole month, or simply buy a $2,000 computer."

"But of course, I don't have the time or the desire to pay that amount!"

"Then, I'm sorry, it's impossible for you to enter the world of alchemists."

"But it's my dream, sir!"

So, it's your dream, well, let me give you my magnum opus. Here is the secret to making a checkpoint with free Colab, young lad:

And the magnum opus was quite long, but basically it all came down to the following sentence: use Loras to affect the tensors of the base checkpoint, and thus you can make your model in about 10 minutes (the time it takes to make a Lora).

"But that's not making a checkpoint, you liar!"

One could argue that it isn't, however, nobody here actually makes checkpoints since nobody has a $50,000 server dedicated to that. Therefore, we could say that what alchemists nowadays call making a checkpoint is simply editing the original or editing the edition of the revision of the copy of the third manuscript of the original. So, since nobody trains models from scratch in this community, we can say that fine-tuning or "editing" a checkpoint is making a checkpoint, or at least that's what popular language has led us to say.

Therefore, if making a checkpoint is editing it, and editing it means modifying the tensors to achieve a result, then all the alchemy of Babylon can be summed up in finding the cheapest way to modify tensors. And there lies the philosopher's stone, that is, the Loras.
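The tensor arithmetic behind the stone is easy to sketch. A Lora stores a low-rank update as two small matrices (`down` and `up`), and merging simply adds `ratio * (up @ down)` onto the corresponding base weight. Here is a minimal toy illustration in NumPy — tiny invented shapes, not real SD weights, and it ignores the alpha/dim scaling that real merge scripts also apply:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "base checkpoint" weight and a low-rank Lora update for it.
# rank=4 here stands in for the Lora's network dimension.
W_base = rng.standard_normal((16, 16))
lora_down = rng.standard_normal((4, 16))   # rank x in_features
lora_up = rng.standard_normal((16, 4))     # out_features x rank

def merge_lora(W, up, down, ratio):
    """Fold a Lora update into a base weight: W' = W + ratio * (up @ down)."""
    return W + ratio * (up @ down)

W_merged = merge_lora(W_base, lora_up, lora_down, ratio=0.8)

# ratio=0 leaves the base model untouched; larger ratios push harder toward the Lora.
assert np.allclose(merge_lora(W_base, lora_up, lora_down, 0.0), W_base)
print(W_merged.shape)  # (16, 16): same shape as the base weight, so the result is still a checkpoint
```

The merged weight has exactly the shape of the original, which is the whole trick: the map still looks like a map, it just has an X drawn on it now.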

Imagining checkpoints as the X where we want to reach and the treasure map as the base model, then there are two ways to draw that X on the base model:

One is to write it, and for that, we have to retrain the model to make it write the exact location of the X on the map. However, that is disgustingly computationally expensive since we have to open the map, understand it, comprehend it, and then go with a ruler looking for where the treasure X is based on the scale of the map being used. This is what is known as traditional fine-tuning, something hardly used by the alchemists of Babylon.

The other option is simpler. We take a piece of blank paper and write an X on it. That is, we make a Lora. Then we take that piece of paper and our philosopher's stone, delicately, we open the treasure map. After a violent slap, we embed the treasure X on top of the map and start hitting it with the philosopher's stone until the X is stuck to the map, and voilà! We have the treasure map with X.

"But liar, if you do that, you will only have a useless map with a useless X, whose meaning is a vile lie! There is no treasure beneath that X!"

Therein lies the secret of the alchemists of Babylon. We don't sell the treasure, we only sell the treasure map!

"Ah! Your own words betray you! See how you admit it! You're a liar, a despicable rat, a trickster!"

It may be so... or it may not, since I sold the treasure map. It's up to each customer to see if they manage to find the treasure with it.

"Of course, they will never find it! If it's a useless map you gave, you liar!"

Ah, there lies the second secret of the alchemists of Babylon: never reveal what treasure the map you sell hides! This way, on the journey towards our explorer's X, the first shiny stone that comes across in the area where the X is located will be the precious treasure that the map indicated.



To do this in practice, we use a script from Kohya (kohya-ss), basically because it's the easiest to use, although it requires a little bit of programming knowledge. However, the guide will be in images, so just follow what the images show in the following process:

  1. Download a Base Model that meets these conditions:

    • It's a Checkpoint.

    • It contains a baked-in VAE (if not, you may get an error, something like "Head is too Large," so you'll need to test it).

How do I get the links? (Right-click --> Copy link address --> Paste it in the URL field)

2. Download the dependencies and the Loras. Each Lora will act as an influence point for where we want to go, that is, an X on the map. For example: a pixel art style, a watercolor style, an anime style, a realistic style, a style of little clowns playing the guitar while singing.

The format is:

!wget [LINK] -O [File Name].safetensors (The link shown is invented; replace it with the real one you copied.)

In this case, you have up to 4 pre-made Loras; you just need to change each Lora's link to the one you like. If you want to use fewer Loras, delete the corresponding line, and if you want more, copy and paste an extra line, making sure to change the file name to avoid overwriting the other models.
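For instance, a download cell might look like the following. The URLs and file names are invented placeholders, not real links — swap in the ones you copied:

```shell
# Each line grabs one Lora; -O names the output file.
# These URLs are made-up placeholders -- replace them with your copied links.
!wget https://example.com/pixel_art.safetensors -O PixelArt.safetensors
!wget https://example.com/watercolor.safetensors -O Watercolor.safetensors
!wget https://example.com/anime.safetensors -O Anime.safetensors
!wget https://example.com/realistic.safetensors -O Realistic.safetensors
```

The leading `!` is Colab/Jupyter syntax for running a shell command from a notebook cell; note how each `-O` name is unique so no download overwrites another.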

3. Apply the script. To do this, you just need to adjust the Lora lines to match the actual number you downloaded. The checkpoint is created automatically. There are instructions, so feel free to ask if you have any questions. In this case, 4 Loras are used, so there are 4 lines where the files are saved and 4 ratios.

#To add a Lora, you need to follow these rules
#Add a NUMBER in --ratios FOR EACH LORA ==> 5 LORAS, 5 RATIO NUMBERS ==> --ratios 1 1 1 1 1
#Add each download path as /content/[File Name].safetensors \ <-- The backslash indicates that the command continues on the next line. If you leave a blank space after it or omit it, it will give an error.

Make sure there is a space before the \ at the end of each line (and nothing after it) to properly separate the models --> This can be quite picky, but it's important.
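Putting those rules together, a merge cell with the 4 Loras might look like this. I'm assuming the kohya-ss `sd-scripts` layout, where the merge script lives at `networks/merge_lora.py`, and the file names are the invented ones from the download step:

```shell
# 4 Loras ==> 4 ratio numbers. Note the space BEFORE each trailing backslash
# and nothing after it: the backslash tells the shell the command continues.
!python networks/merge_lora.py \
  --sd_model /content/BaseModel.safetensors \
  --save_to /content/MyCheckpoint.safetensors \
  --save_precision fp16 \
  --models /content/PixelArt.safetensors \
           /content/Watercolor.safetensors \
           /content/Anime.safetensors \
           /content/Realistic.safetensors \
  --ratios 0.8 0.6 0.5 0.3
```

The ratios here are arbitrary example values; they control how strongly each X gets hammered onto the map.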

If you don't connect to a GPU, you will encounter this error and won't be able to proceed.

The Lora is saved in one of the program's folders.


Merging the Loras

Common errors: to merge, the Loras must have identical dimensions. In general, most Loras share the same dimensions, so it shouldn't be too difficult. Here, I'll help you identify which one is causing the issue so you can remove or replace it.
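To see why a mismatched Lora fails, you can compare the network dimension (rank) of each file before merging: the rank is the row count of a `lora_down` weight. A rough sketch of that check, using made-up state dicts of key-to-shape mappings in place of real `safetensors` files:

```python
def lora_dim(state_dict):
    """Return the rank of the first lora_down weight found (rows = rank)."""
    for key, shape in state_dict.items():
        if "lora_down" in key:
            return shape[0]
    raise ValueError("no lora_down weight found")

def check_mergeable(loras):
    """Return the names of Loras whose dimension disagrees with the first one."""
    dims = {name: lora_dim(sd) for name, sd in loras.items()}
    reference = next(iter(dims.values()))
    return [name for name, d in dims.items() if d != reference]

# Fake state dicts mapping key -> weight shape (real files hold actual tensors,
# and real Loras have many such keys; one per file is enough for the idea).
loras = {
    "PixelArt.safetensors": {"lora_down.weight": (32, 320)},
    "Anime.safetensors": {"lora_down.weight": (32, 320)},
    "Oddball.safetensors": {"lora_down.weight": (128, 320)},
}
print(check_mergeable(loras))  # ['Oddball.safetensors'] -- remove or replace this one
```

With a real file you would read the shapes via the `safetensors` library instead of a hand-written dict; the comparison logic stays the same.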

When the model is running, it is saved in the following way:

And after a few seconds, it will appear as follows (It takes time to update):


Since the checkpoints are very heavy, it is preferable not to keep consuming session time on slow downloads and instead upload them to Hugging Face.

The only complicated part is obtaining the token and remembering that the default token is read-only, but we need one with write permission.
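A rough sketch of that upload step using the Hugging Face CLI — the repo name is hypothetical, the `upload` subcommand needs a reasonably recent `huggingface_hub`, and the token you paste must be a write token:

```shell
# Install/upgrade the hub client, then log in with a WRITE token
# (created at hf.co/settings/tokens -- the default role is read-only).
!pip install -q -U huggingface_hub
!huggingface-cli login

# Push the merged checkpoint to a (hypothetical) model repo:
!huggingface-cli upload your-username/my-checkpoints /content/MyCheckpoint.safetensors
```

From Hugging Face the file can then be downloaded at full speed later, without burning Colab session time.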



Financial assistance: Hello everyone!

This is Tomas Agilar speaking, and I'm thrilled to have the opportunity to share my work and passion with all of you. If you enjoy what I do and would like to support me, there are a few ways you can do so:



Here is the description of each argument added to the ArgumentParser in the setup_parser() function:

--v2: Optional flag indicating that the base is a Stable Diffusion 2.x model. It takes no additional value; if present on the command line, it is set to True.

--save_precision: Optional; the precision for saving the merged model. Valid options are None (no specific precision), "float", "fp16", and "bf16"; the default is None. Example: --save_precision float.

--precision: Optional; the precision used during the merge itself. Valid options are "float", "fp16", and "bf16"; the default is "float". Example: --precision fp16.

--sd_model: Optional; the Stable Diffusion model to load, as a ckpt or safetensors file. If omitted, a LoRA-to-LoRA fusion is performed instead. Example: --sd_model model.ckpt.

--save_to: Optional; the destination file name for saving, as a ckpt or safetensors file. If omitted, a default value is used. Example: --save_to merged_model.ckpt.

--models: Optional; the LoRA models to merge, as a list of ckpt or safetensors files. Example: --models model1.ckpt model2.ckpt.

--ratios: Optional; one numerical ratio for each LoRA model in the fusion. Example: --ratios 0.5 0.5.
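Read together, these arguments give the script two modes, depending on whether --sd_model is supplied. A hedged sketch of both, with invented file names (and assuming the kohya-ss layout where the script sits at `networks/merge_lora.py`):

```shell
# Mode 1: fold LoRAs into a checkpoint (--sd_model given)
python networks/merge_lora.py --sd_model base.safetensors \
  --save_to merged_checkpoint.safetensors \
  --models a.safetensors b.safetensors --ratios 0.7 0.3

# Mode 2: fuse LoRAs into a single new LoRA (--sd_model omitted)
python networks/merge_lora.py \
  --save_to fused_lora.safetensors --save_precision fp16 \
  --models a.safetensors b.safetensors --ratios 0.5 0.5
```

In both modes the count of --ratios values must equal the count of --models files, matching the rule from the Colab cell earlier.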