
How to use the Zenkai system

At the time of writing, these instructions apply to the checkpoints Johnson Diffusion Zenkai, Pegasus Zenkai & Chimera Zenkai, and also to any other model used with the LoRA/LyCORIS Johnson Desu Zenkai for SD1.5 and 2.1-768.

These methods all apply to txt2img, img2img, txt2vid & vid2vid.
Once you learn the styles and how they mix together, you can use those words to bring those styles into your image. Because there are so many styles to choose from, I built a randomizer system that works through captions. It can be used in many ways with randomizers, some of which are explained below.
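As a minimal sketch of what the caption randomizer does (hypothetical code; in practice the Dynamic Prompts extension handles this, and the file name is one of the wildcard files listed below), drawing one random line from a wildcard .txt file looks like this:

```python
import random

def draw_wildcard(path: str) -> str:
    """Pick one random non-empty line (a caption or style trigger) from a wildcard file."""
    with open(path, encoding="utf-8") as f:
        lines = [line.strip() for line in f if line.strip()]
    return random.choice(lines)

# Hypothetical usage: one random style trigger plus one random caption
# prompt = f"photo, {draw_wildcard('Zenkai-style.txt')}, {draw_wildcard('Zenkai-desu.txt')}"
```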


Zenkai-desu.txt - all captions in the 350 set (beta2) (updated)

Zenkai-style.txt - all style triggers only in the 350 set (beta2) (updated)

Woman.txt - contains ~370 Captions used in the djz Woman Dataset

> Beta2 dataset contains 350 styles

/Beta2/* contains .txt files, all 350 captions by stylename.

/Beta2-new-styles/* contains only the newest 124 style captions.

> Beta1 dataset contains 230 styles

/Beta1/Zenkai-random.txt - ~4200 "GPT2 Generated Prompts" trained on the Caption List

/Beta1/Zenkai-prompt.txt - ~5000 Example prompts used in captioning the 230 set

Both the Beta1 & Beta2 models are currently trained to epoch 19 (~97,000 steps).

The "Zenkai-styles" and "Zenkai-desu" wildcards have been updated to include all 350 total styles.

Beta2 increased the Zenkai dataset size by 50% over Beta1, as new styles were added to the main training set.

There are many ways to use this model, as it contains all my CKPT styles in one model.

1. You do not need to use Triggers and can prompt normally.

2. You can prompt using trigger/activation words listed in Zenkai-styles.txt to anchor to a specific style, or use multiple triggers with attention weighting to control the output.

3. You can use one or more prompts as shown in the Zenkai-desu.txt.


How to use: (AUTOMATIC1111 WebUI) or (VLAD Automatic1111 fork)


NOTE: The Zenkai system works for both the LoRA & CKPT versions; see the wildcard section below.

EXAMPLE 1: For LyCORIS version:

photo, Zenkai-style style, Zenkai-random <lyco:JohnsonDesuZenkai-V21-Lyco:1.0>

In the above example, Zenkai-style.txt is used to draw a random style (1/230); the word "style" follows it because LoRA versions are trained on "modelname style". Then Zenkai-random.txt is used to draw from the GPT2-generated prompts (1/4200).

EXAMPLE 2: For Lora Version:

photo, Zenkai-desu <lora:JohnsonDesuV21-Zenkai:1>

In the above example, Zenkai-desu.txt is used to draw the actual trained captions at random into the prompt while using the LoRA version. This will work for V21 and V15. As with all the DJZ LoRAs, you can mix and match caption terms and use them with or without the style triggers.


Using the "Dynamic Prompts Extension"

You can place the wildcard files into:



I recommend using all my V21 negative prompt helpers with the Beta1 release

place them here: "stable-diffusion-webui\embeddings"

and then use them in the negative prompt section, matching the filename.


Some examples will need this extension for advanced prompts.


Wildcard files:

> beta2 <

Zenkai-style.txt (350 styles/concepts)

Zenkai-desu.txt (~8000 Captions used in LORA Dataset)

> beta1 <

Zenkai-prompt.txt (~5000 Captions used in CKPT Dataset)

Zenkai-random.txt (4200 GPT2 Generated Prompts)


Prompt Examples:



The first two prompts function the same, with a slight difference in use. "Zenkai-desu" is the most complete wildcard, containing both the style trigger and the prompt from the trained caption. "__Beta2/*__" will choose a random text file and then a random line from within it. The difference is that once you find a style you like, you can use the name of that style to access all the captions for that style. The third example above shows use with "alieninteriors style".
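The `__Beta2/*__` behaviour described above (pick a random text file, then a random line from it) can be sketched like this; a hypothetical illustration, since the real resolution is done by the Dynamic Prompts extension:

```python
import glob
import random

def draw_from_folder(folder: str) -> str:
    """Emulate __Beta2/*__: choose a random wildcard file, then a random line from it."""
    path = random.choice(glob.glob(f"{folder}/*.txt"))
    with open(path, encoding="utf-8") as f:
        lines = [line.strip() for line in f if line.strip()]
    return random.choice(lines)
```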



The first example shows how we normally call up the randomizer, which only brings in the style trigger/activation word and not the whole caption/prompt; this uses all 350 styles. The second example shows how to choose a trigger using only the newest 125 styles added in the Beta2 version of the Zenkai system. As with the previous new feature, once you find a style you like, you can dial it in by choosing the name of the style and accessing only the captions for that trained style. The third example shows how to drill down into a specific style, e.g. sandcastle.

All the old methods below still work, but pay attention to the path of your Captions!

"__Zenkai-style__, __Beta1/Zenkai-prompt__"

This will choose one of the Dataset Style names, then use one of the Dataset Captions as the prompt.

"__Zenkai-style__, __Beta1/Zenkai-random__"

This will choose one of the Dataset Style names, then use Generated Prompts as the prompt. Zenkai-random.txt was generated by a GPT2 model trained on Zenkai-prompt.txt

"__Zenkai-style__, __Beta1/Zenkai-prompt__, __Beta1/Zenkai-random__"

This will mix the generated prompts with the actual captions according to the seed.

"__Beta1/Zenkai-prompt__, __Beta1/Zenkai-random__"

This drops the style cues and only uses interpolated captions/generated prompts.

"photo, (DarkExecutioner:0.3)(PromptEater:0.3)(TentacleBeast:0.3)"
"photo, (DarkExecutioner:0.3)(PromptEater:0.3)(TentacleBeast:0.3) ADD IN ANY PROMPT HERE"
"photo, (__Beta2/DarkExecutioner__:0.3)(__Beta2/PromptEater__:0.3)(__Beta2/TentacleBeast__:0.3)"
"photo, (__Beta2/DarkExecutioner__:0.3)(__Beta2/PromptEater__:0.3)(__Beta2/TentacleBeast__:0.3) (ADD IN ANY PROMPT HERE:1)"

The first example shows how you can blend many styles using weighted triggers. The second example shows how you can then add any prompt you like to the end to build from there. The third example shows how to access all the possible captions using wildcards with the style names you had already chosen. The fourth example is showing how you can then build a prompt up from that style.
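As a sketch, building the weighted blend above can be automated with a small helper (hypothetical function; the attention weighting itself is interpreted by the WebUI):

```python
def blend_styles(styles, weight=0.3, suffix=""):
    """Build a prompt that mixes several style triggers at equal attention weight."""
    weighted = "".join(f"({s}:{weight})" for s in styles)
    return f"photo, {weighted} {suffix}".strip()

print(blend_styles(["DarkExecutioner", "PromptEater", "TentacleBeast"]))
# photo, (DarkExecutioner:0.3)(PromptEater:0.3)(TentacleBeast:0.3)
```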


"photo [DarkExecutioner:PromptEater:TentacleBeast: 0.3,0.6:bezier] scifi"
"photo [__Beta2/DarkExecutioner__:__Beta2/PromptEater__:__Beta2/TentacleBeast__: 0.3,0.6:bezier] scifi"

This takes three style triggers and moves from the first for the first 30% of the total steps, then toward the second until 60% of the steps; however, due to the bezier curve it will not fully reach that style and instead arcs toward the final trigger over the remaining steps. Altering the numbers adjusts the curve. Any trigger from Zenkai-style.txt can be used here; in fact, whole prompts can be used as well.
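A simplified way to picture the schedule, ignoring the bezier blending and hard-switching at the breakpoints instead (hypothetical code):

```python
def active_trigger(step: int, total_steps: int, triggers, switch=(0.3, 0.6)):
    """Return the trigger the prompt is steering toward at a given sampling step."""
    frac = step / total_steps
    if frac < switch[0]:
        return triggers[0]
    if frac < switch[1]:
        return triggers[1]
    return triggers[2]
```

With 100 total steps and the defaults above, the first trigger drives steps 0-29, the second steps 30-59, and the third the rest; the bezier option curves between them rather than switching abruptly.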

"photo [a man standing in front of a futuristic city:a bunch of televisions that are in the dark:city built on top of a mountain peak: 0.3,0.6:bezier] scifi"


If you have the Regional Prompter extension, you can also use this newer method:

photo BREAK
(DarkExecutioner style:0.3) BREAK
(PromptEater style:0.3) BREAK
background landscape

This can also be used in a number of ways combined with the above. We can also take advantage of the wildcards, for example to randomly choose a wildcard from a folder such as ``Beta2``:

photo BREAK
(__Zenkai-Desu__:0.3) BREAK
(__Beta2/*__:0.3) BREAK
background landscape

This allows you to use a random caption in three different ways. When we see a style we like, we can lock it down in three ways.

  1. Lockdown the styles only

    photo BREAK
    (DarkExecutioner style:0.3) BREAK
    (PromptEater style:0.3) BREAK
    background landscape
  2. Lockdown the styles with random captions from each style

    photo BREAK
    (__Beta2/DarkExecutioner__:0.3) BREAK
    (__Beta2/PromptEater__:0.3) BREAK
    background landscape
  3. Lockdown the captions in full

    photo BREAK
    (DarkExecutioner style a painting of a robot with a cross on its head:0.3) BREAK
    (PromptEater style a painting of a person wearing a gas mask:0.3) BREAK
    background landscape
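All three lockdown variants share the same shape, differing only in what fills each region. As a sketch (hypothetical helper; the BREAK keyword itself is interpreted by the Regional Prompter extension):

```python
def regional_prompt(head, styles, tail, weight=0.3):
    """Join regions with BREAK, applying an attention weight to each style region."""
    parts = [head] + [f"({s}:{weight})" for s in styles] + [tail]
    return " BREAK ".join(parts)

print(regional_prompt("photo", ["DarkExecutioner style", "PromptEater style"], "background landscape"))
# photo BREAK (DarkExecutioner style:0.3) BREAK (PromptEater style:0.3) BREAK background landscape
```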

I hope this serves as a good introduction to how you might use this Collection. There are far more ways to use this, so if you discover a nice method, contact me and show it off :)

Zenkai is the name of my large dataset. Beta1 contains 230 of my previous models, Beta2 contains 350. It continues to grow!

Thanks for reading!