Hi, I'm new to this community (or website, whatever you want to call it). I just wanted to know: how do you make models?
There are a few ways, actually:
You can generate on Civitai (this website)
You can use third-party services
You can do it yourself on your PC
I'm guessing you meant the first one.
You'll notice there's a Create button at the bottom-right corner of your screen.
Click on it.
It'll open a sidebar.
Select your model there and enter your prompt.
Then click Generate.
Wait a little while
and you'll have your images.
Hi, if you're talking about making models (checkpoints), there are three ways to do it.
Creating a checkpoint from scratch. This route is out of reach for most people, because it needs a huge amount of GPU power and time. You also need a humongous number of pictures in your dataset.
Finetuning a checkpoint on your own dataset. This method is much easier, but it still requires a lot of computing power (and VRAM), and you still need a good-quality, well-tagged dataset.
Merging several checkpoints, the easiest way to make your own "mix", as it's called. There is a separate tab for this in the Automatic1111 WebUI: you can merge your models using different methods (the formulas are shown right on the tab) and get pretty good results.
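To give you an idea of what that tab actually computes, here's a minimal sketch of the "weighted sum" style of merge. This is just an illustration: real checkpoints are dicts of torch tensors loaded from .ckpt/.safetensors files, but I'm using plain Python lists of floats here so it runs without any dependencies, and the key/function names are my own.

```python
# Minimal sketch of a weighted-sum checkpoint merge:
#   merged[k] = (1 - alpha) * A[k] + alpha * B[k]
# alpha=0.0 gives you model A unchanged, alpha=1.0 gives you model B.

def weighted_merge(sd_a, sd_b, alpha=0.5):
    merged = {}
    for key, wa in sd_a.items():
        wb = sd_b.get(key)
        if wb is not None and len(wb) == len(wa):
            # interpolate each weight between the two models
            merged[key] = [(1 - alpha) * x + alpha * y for x, y in zip(wa, wb)]
        else:
            # keep A's weights when the models don't line up on this key
            merged[key] = list(wa)
    return merged

a = {"layer.weight": [1.0, 2.0]}
b = {"layer.weight": [3.0, 4.0]}
print(weighted_merge(a, b, alpha=0.5))  # {'layer.weight': [2.0, 3.0]}
```

The WebUI tab also offers other formulas (like "add difference"), but they all boil down to simple per-weight arithmetic like this.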
I'm not sure about creating a model from scratch, but for finetuning models you'll definitely need a trainer, such as Kohya or EveryDream.
If you're talking about training LoRAs (small models for characters, poses, clothes, styles, etc.), you'll need a trainer as well. You can use Kohya, as for finetuning, or a much easier-to-understand tool (based on Kohya) called LoRA Easy Training Scripts. This only applies if you're training the model/LoRA on your ->local<- machine (your PC); there is also a way to train LoRAs via Google Colab, and you can find more detailed explanations on the internet.
For tagging an (anime) dataset you can use the WD14 tagger extension for Automatic1111, which will tag all the images in your dataset with booru tags. If you want to make a LoRA for realistic models, you can start with a basic BLIP or CLIP captioner.
I recommend learning about merging models and (if you have enough VRAM, at least 12 GB) about finetuning models. Even training LoRAs on 8 GB can cause problems, so I'd only recommend training LoRAs locally if you have at least 12 GB of VRAM. Otherwise it's probably better to use Google Colab (I guess).