WARNING!!
The model was trained on a very small dataset, so I strongly recommend including all of the trained tags in your prompt and setting the LoRA strength to 0.6-0.8 to prevent the LoRA from producing artifacts.
You can also replace the "hot cap" tag with something else.
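If you generate images with the diffusers library rather than a UI, you can set the strength the same way. Below is a minimal sketch, assuming a Stable Diffusion 1.5 base model; the LoRA filename and the exact trigger tags are placeholders, so substitute your own.

```python
# Minimal sketch: loading a LoRA and applying it at a reduced strength.
# The filename "hot_cap_lora.safetensors" and the prompt tags are
# placeholders, not the actual distributed file or full tag list.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the LoRA weights (replace with the real file path).
pipe.load_lora_weights("hot_cap_lora.safetensors")

# Keep the LoRA strength in the recommended 0.6-0.8 range
# and include all of the trained tags in the prompt.
image = pipe(
    "hot cap, <other trained tags>",
    cross_attention_kwargs={"scale": 0.7},
).images[0]
image.save("output.png")
```

In a UI such as AUTOMATIC1111, the equivalent is writing the LoRA into the prompt with an explicit weight, e.g. `<lora:hot_cap_lora:0.7>` (the name depends on your file).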