Genshin TCG Style [Wan 1.3B]

Verified: SafeTensor
Type: LoRA

Published: Jun 29, 2025
Base Model: Wan Video 1.3B t2v
Training: 15,120 steps, 56 epochs
Trigger Words: Genshin_TCG
Hash (AutoV2): 3E8A8FDEAD

Trigger Word: Genshin_TCG
Model: Wan 2.1 t2v 1.3B
Recommended LoRA strength: 0.75-1.0
All examples were generated with CFG = 6.
For inference, I used Kijai's workflows (a rough diffusers alternative is sketched below).

The version for Wan 14B can be found here https://civitai.com/models/1768496/genshin-tcg-style-wan-14b
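
For anyone who prefers diffusers over ComfyUI, here is a minimal text-to-video sketch using the diffusers Wan pipeline. Treat it as an assumption-heavy example rather than my actual workflow (I used Kijai's ComfyUI workflows): the Hugging Face repo id, the LoRA file name and the prompt are placeholders, and it assumes the diffusers Wan pipeline supports LoRA loading. Only the trigger word, the 0.75-1.0 strength and CFG = 6 come from the notes above.

import torch
from diffusers import AutoencoderKLWan, WanPipeline
from diffusers.utils import export_to_video

# Base 1.3B t2v model (an fp32 VAE is the usual recommendation for Wan in diffusers)
vae = AutoencoderKLWan.from_pretrained(
    "Wan-AI/Wan2.1-T2V-1.3B-Diffusers", subfolder="vae", torch_dtype=torch.float32
)
pipe = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.1-T2V-1.3B-Diffusers", vae=vae, torch_dtype=torch.bfloat16
).to("cuda")

# Load the LoRA (file name is a placeholder) and set it within the recommended 0.75-1.0 range
pipe.load_lora_weights(".", weight_name="genshin_tcg_style_1_3b.safetensors", adapter_name="genshin_tcg")
pipe.set_adapters(["genshin_tcg"], adapter_weights=[0.85])

# Trigger word in the prompt, CFG = 6 as in the examples
prompt = "Genshin_TCG, animated trading card of a swordswoman, ornate golden frame, glowing effects"
frames = pipe(
    prompt=prompt,
    height=480,
    width=832,
    num_frames=81,
    guidance_scale=6.0,
).frames[0]
export_to_video(frames, "genshin_tcg_card.mp4", fps=16)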

Training Details

Training the Wan 1.3B version on character animation turned out to be harder than I expected, and it took a lot of experimentation to reach an acceptable result. The dataset was 54 short videos of cards from Genshin Impact's Genius Invocation TCG. Since I trained with diffusion-pipe, I'll just post the TOML files (a sample launch command is at the end).

For the dataset:

resolutions = [[514, 304]]    # base training resolution(s)
enable_ar_bucket = true       # bucket samples by aspect ratio
min_ar = 0.5
max_ar = 2.0
num_ar_buckets = 7            # aspect-ratio buckets spread between 0.5 and 2.0
frame_buckets = [1, 32, 36, 40, 42, 64, 71, 78, 80, 81]    # allowed frame counts (1 = still frame)

[[directory]]
path = "/home/user/Genshin_TCG_dataset/videos/304_514"
num_repeats = 5
resolutions = [[514, 304]]

[[directory]]
path = "/home/user/Genshin_TCG_dataset/videos/368_620"
num_repeats = 5
resolutions = [[620, 368]]

[[directory]]
path = "/home/user/Genshin_TCG_dataset/videos/492_828"
num_repeats = 5
resolutions = [[808, 480]]
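
As a sanity check on the step count: with num_repeats = 5 here and micro_batch_size_per_gpu = 1 below, 54 videos give roughly 54 × 5 = 270 steps per epoch, and 56 epochs × 270 = 15,120 steps, which matches the stats above.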

For training:

output_dir = "/home/user/Genshin_TCG/1_3B"
dataset = "/home/user/Genshin_TCG_dataset/config/dataset_v002.toml"

epochs = 80
micro_batch_size_per_gpu = 1
pipeline_stages = 1
gradient_accumulation_steps = 1
gradient_clipping = 1
warmup_steps = 100
eval_every_n_epochs = 1
eval_before_first_step = true
eval_micro_batch_size_per_gpu = 1
eval_gradient_accumulation_steps = 1
save_every_n_epochs = 1
activation_checkpointing = true
partition_method = "parameters"
save_dtype = "bfloat16"
caching_batch_size = 1
steps_per_print = 10
video_clip_mode = "single_beginning"    # take one clip from the start of each video

[model]
type = "wan"
ckpt_path = "/home/user/Wan2.1-T2V-1.3B"
dtype = "bfloat16"
transformer_dtype = "float8"                # keep transformer weights in fp8 to save VRAM
timestep_sample_method = "logit_normal"     # draw training timesteps from a logit-normal distribution

[adapter]
type = "lora"
rank = 64
dtype = "bfloat16"

[optimizer]
type = "adamw_optimi"    # AdamW implementation from the optimi package
lr = 7e-5
betas = [0.9, 0.99]
weight_decay = 0.01
eps = 1e-8
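
For completeness: diffusion-pipe reads these files through its train.py entry point, launched with DeepSpeed. A single-GPU launch looks roughly like this (the config path is a placeholder for wherever you saved the training TOML):

deepspeed --num_gpus=1 train.py --deepspeed --config /path/to/train.toml

The dataset config is picked up through the dataset = "..." key at the top of the training config.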