Training your own font models with Flux is surprisingly simple.
Link to font model
Thanks to user juaner0211589 for sharing how they created their font models.
This test was also inspired by OpenAI's GPT-4o samples, which showcase generating font sheets like this with ChatGPT: https://openai.com/index/hello-gpt-4o/
Note: This was just inspiration. I didn't use ChatGPT to create the dataset.
The dataset was created manually using an existing font and Photoshop.
The Dataset
With the prompt:
SC2StyleFont a font style with the letters ABCDEFG on the first row. HIJKLM on the second row. OPQRST on the third row. UVWXYZ on the last row. With a black background.
You likely don't need the whole alphabet; a few words, or maybe even a single word, may be enough.
I went with this approach as it made sense to me. I didn't show it any lowercase letters, but it creates them just fine.
To clarify: The entire dataset was just these 2 images. Nothing more.
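I made my images in Photoshop, but if you'd rather script the font sheet, here is a rough sketch using Pillow. The font, canvas size, and text coordinates are all placeholders I made up, so swap in your own; the rows are copied verbatim from the caption above.

```python
from PIL import Image, ImageDraw, ImageFont

# Rows copied verbatim from the dataset caption above.
rows = ["ABCDEFG", "HIJKLM", "OPQRST", "UVWXYZ"]

# Black 512x512 canvas, matching the training resolution used below.
img = Image.new("RGB", (512, 512), "black")
draw = ImageDraw.Draw(img)

# Placeholder font. For a real dataset, load your actual font, e.g.:
# font = ImageFont.truetype("YourFont.ttf", 96)
font = ImageFont.load_default()

# Draw one row of letters per line; offsets are arbitrary placeholders.
for i, row in enumerate(rows):
    draw.text((32, 64 + i * 96), row, fill="white", font=font)

img.save("font_sheet.png")
```

This only gets you the layout; the actual style (bevels, gradients, outlines) is what your source font or Photoshop work provides.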
Training Settings
{
"engine": "kohya",
"unetLR": 0.0005,
"clipSkip": 1,
"loraType": "lora",
"keepTokens": 0,
"networkDim": 2,
"numRepeats": 200,
"resolution": 512,
"lrScheduler": "cosine_with_restarts",
"minSnrGamma": 5,
"noiseOffset": 0.1,
"targetSteps": 1000,
"enableBucket": true,
"networkAlpha": 16,
"optimizerType": "AdamW8Bit",
"textEncoderLR": 0,
"maxTrainEpochs": 10,
"shuffleCaption": false,
"trainBatchSize": 4,
"flipAugmentation": false,
"lrSchedulerNumCycles": 3
}
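As a sanity check on these numbers (this mirrors my understanding of how kohya counts steps, so treat it as a sketch): 2 images times 200 repeats gives 400 samples per epoch, which at batch size 4 is 100 optimizer steps per epoch, landing exactly on the 1000 target steps after 10 epochs.

```python
# Step math implied by the settings above (assumed kohya-style counting).
num_images = 2      # the entire dataset
num_repeats = 200   # "numRepeats"
batch_size = 4      # "trainBatchSize"
max_epochs = 10     # "maxTrainEpochs"

samples_per_epoch = num_images * num_repeats       # 400
steps_per_epoch = samples_per_epoch // batch_size  # 100
total_steps = steps_per_epoch * max_epochs

print(total_steps)  # 1000, matching "targetSteps"
```

This is also why a batch size of 4 works with only 2 source images: the repeats expand each epoch to 400 samples, so every batch is full.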
Looking at the training samples below, you can see that the model understands the font style you are looking for almost right away.
My model is certainly overfit. You may want to have more examples or change some of the parameters. If you get better results with other settings, please share below and I'll add it as a note to the article.
Note:
I'm using a batch size of 4 here, even though I only have 2 images. I don't know if this is bad, but the model came out fine, so it seems the training script writers are smarter than I am stupid, so that's great!
Training Samples
Even at the first epoch we can see that it has caught on to things.
On the first image, we see the start of some serifs being added, and more importantly, the gap between the top line and the letter (on D and E).
On the second image we get the overall style. The bevel and blue color. It also has the gap.
The third image feels pretty good as well. It has the serifs and the overall boldness of the style.
On epoch 2 we get some very good results.
The first image shows the style pretty well learned. It has stretched it out, but that's fair.
The second image has the bevel and the outline I intended with my white image example.
The third image looks very good. I'd consider this to have the entire style learned, including lowercase letters which it had no examples of.
On epochs 3-6 we get good results. It's being creative, and it shows part of the style in all samples. The sweet spot is likely somewhere in here. My preferences are 3, 4, or 5.
From epoch 7 onward I feel like we may be overfit. It's not showing as much creativity, and the resulting images are a bit flat.
It still has the font style in most cases, but it's a bit harder than the letters in my dataset.
Text Prompting Flexibility
We all know we can use Flux to create decent text, funny posters and whatnot.
The question is whether a custom font model like this is flexible enough to tweak afterward, like any other text you would generate, while still keeping the feel of the font. The answer is yes: it's still very flexible, yet guided by the font, depending on which epoch you chose, how overfit it is, and the model weight you use.
Here are some example images.
As we can see, we can change the color of the font, add backgrounds behind it, and even change the texture and materials of the font.
Conclusions
Training fonts works. There are likely some good, optimal settings still to find.
I'm sure you can train it on a non-stylized font (i.e. black letters on a white background) as well.
I ended up posting epoch 10 of the model.
Looking at results of using weight 1:
https://civitai.com/images/26709386 - https://civitai.com/images/26709515
We can see that instead of the smooth gradient I wanted, we are getting a weird inset on the font. This is definitely not the desired result, but it comes from overtraining the model, so it can absolutely be avoided.
This must have taken ages to test, how can I help?
Not really, but hey, thanks for asking! Please feel free to throw some ⚡⚡⚡ onto the model or this article if you appreciated it. More ⚡⚡⚡ means more training for me, which means more articles for you, so we all win, right?