
Why do Clip skip 1 and 2 work so differently?

This is AingDiffusion v11.3

I applied a more mature-looking dataset to the model (i.e., I trained a LoRA and then baked it directly into the model). It seems fine at Clip skip 1, but it's a complete mess at Clip skip 2 (sometimes the dataset takes effect, sometimes it doesn't).

Why is this happening? Is there a reason behind it?

2 Answers

You can google 'clip skip' for more details, but my understanding is that the CLIP text encoder runs your prompt through a stack of layers, each one a further calculation that refines the interpretation, potentially bringing it closer to the wanted end result. 'Clip skip 1' uses the full stack (nothing is actually skipped), 'Clip skip 2' skips the last of these layers, and so on. Clip skip is technically usable on SD models created prior to 2.0 and their derivatives, but it isn't really meant to be applied unless a model has specifically been trained with clip skip in mind, such as the NAI model and its derivatives.

With this basic info in mind, it's possible that, when you use 'Clip skip 2', the final text-encoder layer is being skipped, and if your baked-in LoRA's training mostly affected that layer, its effect gets discarded along with it when it otherwise wouldn't be.

The GitHub discussion "What exactly is clip skip?" (AUTOMATIC1111/stable-diffusion-webui, Discussion #5674) is probably the best explanation you can find without diving into how CLIP works. By using clip skip you are ignoring the last X layers of the prompt-interpretation process, which generally means that the image becomes less specific and doesn't adhere as closely to the text prompt.
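
For a concrete picture, here is a minimal sketch of what "ignoring the last X layers" means, assuming the Hugging Face transformers CLIP text encoder used by SD 1.x and the webui's convention where clip skip 1 means "use the last layer". The `encode_prompt` helper is hypothetical, just for illustration; it is not the webui's actual code:

```python
import torch
from transformers import CLIPTextModel, CLIPTokenizer

MODEL = "openai/clip-vit-large-patch14"  # the text encoder used by SD 1.x
tokenizer = CLIPTokenizer.from_pretrained(MODEL)
encoder = CLIPTextModel.from_pretrained(MODEL)

def encode_prompt(prompt: str, clip_skip: int = 1) -> torch.Tensor:
    tokens = tokenizer(prompt, padding="max_length",
                       max_length=tokenizer.model_max_length,
                       truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = encoder(**tokens, output_hidden_states=True)
    # hidden_states[-1] is the output of the final layer; clip_skip=2 takes
    # the second-to-last layer's output instead, i.e. the last layer is
    # "skipped" entirely.
    hidden = out.hidden_states[-clip_skip]
    # The earlier layer's output is still passed through the final layer norm.
    return encoder.text_model.final_layer_norm(hidden)

cond_normal = encode_prompt("1girl, brown hair", clip_skip=1)
cond_skipped = encode_prompt("1girl, brown hair", clip_skip=2)
print(cond_normal.shape)  # torch.Size([1, 77, 768])
```

Note that with clip_skip=1 this reproduces the encoder's normal output exactly, which is why 1 is effectively the "no skipping" default, and why anything a LoRA taught the final layer disappears at clip skip 2.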

It looks like AingDiffusion stopped using clip skip after v9.0 for some reason, so I guess you'll want to stick with clip skip 1 (i.e., no skipping).
