Pictures here: https://civitai.com/posts/215686
Model here: https://civitai.com/models/66852/bad-card-template
Problem
Out of the box, it's not possible to generate card templates for your boardgames/collectible card games with Stable Diffusion: cards are not square, so they get cropped to a square during training, and only the middle of the card is rendered.
This model
Trained on 29 not very good images.
There are several checkpoints:
early checkpoints that do not render a correct card, but still keep the style of the original model (called bct-render*.safetensors)
later checkpoints that render a correct card, but replace the base model's style with the style of the not-very-good dataset I used (called bct-cardshape*.safetensors)
so you need to fiddle a bit ...
Prompting
typical prompts used in training:
a blank trading card, light blue, art deco style, with a title bar, an illustration box shaped like a planet, a subtitle bar, and a grey text box
a blank trading card, pale blue, texture style, with a title box on top with a symbol, a white illustration box and a small pale blue text box at the bottom
a blank trading card, blue with gold frame, texture style, with space for a title, a black drawing space, and a cloudy blue text box with card statistics
so "a blank trading card" trigger word, the main colors (green, pale blue and gold), a style (art style or 'heroic fantasy', 'steampunk'), and a card structure description with "title", "illustration/drawing","subtitle" and "textbox". Example prompts in the images for v1.0.
typical negative prompts:
those that avoid writing, because the base model still remembers that game cards carry text: text, english_text, japanese_text, kanji, markings, watermark, signature, artist name
those that avoid having several cards: several cards, many cards, two cards
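If you prefer scripting to the webui, here is a minimal diffusers sketch of the prompting above. The OpenJourney repo id, the local Lora file name and the sampler settings are my assumptions, not part of this guide; adjust them to your setup.

```python
# Minimal sketch, not the author's exact setup: base model + a
# 'cardshaper' Lora, using one of the training prompts above.
# "prompthero/openjourney" and the Lora file name are assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "prompthero/openjourney", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights(".", weight_name="bct-cardshape4.safetensors")

prompt = ("a blank trading card, pale blue, texture style, with a title box "
          "on top with a symbol, a white illustration box and a small pale "
          "blue text box at the bottom")
negative = ("text, english_text, japanese_text, kanji, markings, watermark, "
            "signature, artist name, several cards, many cards, two cards")

image = pipe(prompt, negative_prompt=negative,
             num_inference_steps=25).images[0]
image.save("card_layout.png")
```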
Using this model
Download all the Loras that begin with v1.0 and place them in the Lora folder (4 cardshapers and 3 card renders)
prompt with OpenJourney v1.4 and a 'cardshaper' Lora until you find a card with a layout you like
now switch to the checkpoint you like and use this card as a ControlNet input (basically, we keep the outline of the generated card, but switch to a Lora that cannot draw card shapes very well yet transfers style well, see gallery)
ControlNet: check 'Enable'
put the card picture you like as the controlnet image
Preprocessor: softedge_hed
Model: controlnet_hed
Starting Control Step: 0.0
Ending Control Step: ~0.35 (higher keeps the shape more strictly; lower lets the base model be more creative and add some flourishes)
now is the time to add 'best quality, masterpiece', etc. to the prompt (and things like 'bad painter' to the negative prompt)
Generate with a 'cardrender' Lora
params I use: <lora:bct-render3:0.35>, DPM++ SDE Karras, ~25 steps, Starting Control Step 0.1, Ending Control Step 0.35, see remarks below
you should have a nicely generated card
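For reference, here is roughly what this ControlNet step looks like in diffusers; control_guidance_start/end play the role of the Starting/Ending Control Steps. The annotator and ControlNet repo ids are my assumptions, and the base checkpoint should be whichever one you switched to.

```python
# Sketch of the ControlNet step under assumptions: HED edges from the
# layout card, then a re-render with a 'cardrender' Lora.
import torch
from PIL import Image
from controlnet_aux import HEDdetector
from diffusers import (ControlNetModel, DPMSolverSDEScheduler,
                       StableDiffusionControlNetPipeline)

hed = HEDdetector.from_pretrained("lllyasviel/Annotators")
outline = hed(Image.open("card_layout.png"))   # softedge_hed equivalent

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-hed", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "prompthero/openjourney",            # or the checkpoint you switched to
    controlnet=controlnet, torch_dtype=torch.float16,
).to("cuda")
pipe.scheduler = DPMSolverSDEScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True  # DPM++ SDE Karras
)
pipe.load_lora_weights(".", weight_name="bct-render3.safetensors")

image = pipe(
    "a blank trading card, blue with gold frame, texture style, "
    "best quality, masterpiece",
    negative_prompt="text, watermark, signature, several cards, bad painter",
    image=outline,
    num_inference_steps=25,
    cross_attention_kwargs={"scale": 0.35},  # Lora weight 0.35
    control_guidance_start=0.1,              # Starting Control Step
    control_guidance_end=0.35,               # Ending Control Step
).images[0]
image.save("card_final.png")
```

To use a hand-edited outline instead (see the remarks below), skip the HED call and pass your edited image directly as `image=`.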
Additional remarks
you can play with the Ending Control Step: lower values let the model improvise the final shape more, higher values keep the exact shape (but maybe not as good-looking)
a Lora weight of 0.35 works well: higher gives a more card-like texture but worse aesthetics, lower gives a less card-like texture (a drawing instead of a card) but a better look
you can play with the Starting Control Step (~0.1); this is an even worse hack: it lets the model draw on its own for a bit before handing it the card boundaries, and it gets better textures
if you need somewhere to put a stat (attack/def, HP, ...), you can try to prompt with 'and a symbol in the shape of a shield' or 'and a symbol shaped like a heart', there is a small chance that it will work ...
try adding 'complex', 'intense', 'laser' or 'pop art' to the prompt for a more intense design, or 'simple'/'basic' for dull colors and a simple design (from the training set). Try 'digital art style' or 'drawing style' for some very unique designs (without a style name, the different parts apparently come out in random styles).
if you can draw a bit, you can take the generated black-and-white card outline (the softedge_hed output), edit the outline yourself, use it as the ControlNet input, and disable the preprocessor
On the plus side, using a ControlNet means you can make a card for each color/faction with the same outline but a slightly different style/color (see the sketch after these remarks)
cardshaper4 and render3 were trained on a previous dataset with text everywhere; they tend to generate noisy images, but the results can be spectacular (I'm not exactly sure why, the only clue being that cardshaper4 produces softedge_hed images that are less defined and less intense)
there were some nice checkpoints while training v2, so I added render4 (large uniform areas plus small areas with lots of detail, I don't know the name for that) and render5 (good textures)
let's be honest, the cardshaper* Loras mostly generate a better image of a box; once you have the card outline in ControlNet, you can try to render the final card with "a steampunk machine control panel", "a medieval scroll", etc., and you don't have to use the render* Loras at all
the usual software can be used to make the cards: https://www.reddit.com/r/boardgames/comments/7rcmjn/card_creation_software/
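As a small illustration of the per-faction remark above, the same outline can be re-rendered once per color/faction by only swapping words in the prompt. This sketch reuses the (assumed) `pipe` and `outline` objects from the previous snippet; the faction/color pairs are made-up examples.

```python
# Reuses `pipe` and `outline` from the previous sketch: same ControlNet
# outline, one render per faction, only the colors in the prompt change.
for faction, colors in {
    "fire": "red with gold frame",
    "water": "pale blue",
    "forest": "green with brown frame",
}.items():
    card = pipe(
        f"a blank trading card, {colors}, texture style, best quality",
        negative_prompt="text, watermark, signature, several cards",
        image=outline,
        num_inference_steps=25,
        control_guidance_start=0.1,
        control_guidance_end=0.35,
    ).images[0]
    card.save(f"card_{faction}.png")
```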
Apologies
Yeah, this model is not very good, but hopefully it can generate enough cards to eventually make a good Lora ...