
Deep Negative V1.x

Verified: PickleTensor
Type: Embedding
Stats: 313,213 / 9,037,929
Uploaded: Jan 20, 2023
Base Model: SD 1.5
Training: 10,000 steps, 100 epochs
Trigger Words: ng_deepnegative_v1_75t
Hash (AutoV2): 54E7E4826D

This embedding will show you what is REALLY DISGUSTING 🤢🤮

So please put it in the negative prompt 😜

TOP Q&A

  • How do I use a Textual Inversion (TI) embedding?

https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Textual-Inversion

  • What is a negative prompt?

https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Negative-prompt
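In AUTOMATIC1111's webui, using a TI embedding comes down to dropping the file into the embeddings folder and typing its filename in a prompt (layout per the wiki linked above; exact paths can vary by install):

```
stable-diffusion-webui/
└── embeddings/
    └── ng_deepnegative_v1_75t.pt
```

After restarting (or reloading) the UI, writing ng_deepnegative_v1_75t in the negative prompt box activates it.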

[Update:230120] What does it do?

This embedding learns what disgusting compositions and color patterns look like, including faulty human anatomy, offensive color schemes, upside-down spatial structures, and more. Placing it in the negative prompt goes a long way toward avoiding these things.

-

What do 2T, 4T, 16T, and 32T mean?

The number of vectors per token.

[Update:230120] What are 64T and 75T?

64T: trained for over 30,000 steps on mixed datasets.

75T: the maximum embedding size; trained for 10,000 steps on a special dataset (generated by many different SD models with special reverse processing).
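The "75T is the maximum" note lines up with SD 1.5's text encoder: CLIP takes 77 tokens, two of which are the start/end specials. A small sketch (my reading of the limit, not stated on this page):

```python
# SD 1.5's CLIP text encoder context (77) minus the BOS/EOS special tokens
# leaves 75 usable positions -- which would make 75 vectors the ceiling
# for a single embedding (an assumption; the page only calls 75T the max).
CLIP_CONTEXT = 77
SPECIAL_TOKENS = 2

def free_positions(vectors_per_token: int) -> int:
    return CLIP_CONTEXT - SPECIAL_TOKENS - vectors_per_token

free_positions(75)  # 0: a 75T embedding fills the whole prompt
free_positions(16)  # 59 positions left for the rest of the negative prompt
```

This also explains why the smaller variants (2T–32T) leave more room to add further negative tags alongside the embedding.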

Which one should you choose?

  • 75T: The most "easy to use" embedding, trained on an accurate dataset built in a special way, with almost no side effects. It contains enough information to cover various usage scenarios. However, on some well-trained models it may have little effect, and the change it makes may be subtle rather than drastic.

  • 64T: Works with all models, but has side effects, so some tuning is required to find the best weight. Recommended: [(NG_DeepNegative_V1_64T:0.9):0.1]

  • 32T: Useful, but can be a bit too much.

  • 16T: Reduces the chance of drawing bad anatomy, but may draw ugly faces. Well suited to improving architecture.

  • 4T: Reduces the chance of drawing bad anatomy, with a slight effect on light and shadow.

  • 2T: "Easy to use" like 75T, but with only a slight effect.
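The recommended [(NG_DeepNegative_V1_64T:0.9):0.1] string uses AUTOMATIC1111's prompt syntax: (x:0.9) sets the attention weight to 0.9, and [x:0.1] delays the prompt until 10% of the sampling steps have passed. A small helper (hypothetical, for illustration only) shows when the embedding switches on:

```python
# Hypothetical helper: at which sampling step does an A1111-style
# "[prompt:when]" scheduled prompt become active?
def activation_step(total_steps: int, when: float) -> int:
    # A fractional "when" is a fraction of total steps;
    # an integer "when" is an absolute step number.
    return round(total_steps * when) if when < 1 else int(when)

activation_step(30, 0.1)  # 3: with 30 steps, 64T kicks in after step 3
activation_step(20, 5)    # 5: absolute step numbers pass through
```

Delaying the embedding like this lets the first steps establish composition before the negative guidance starts pushing, which is presumably why the author suggests it for the stronger 64T variant.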

Suggestion

Because this embedding learns how to create disgusting concepts, it cannot accurately improve picture quality on its own, so it is best used together with negative prompts such as (worst quality, low quality, logo, text, watermark, username).
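Put together, a typical negative prompt following this suggestion would look like:

```
ng_deepnegative_v1_75t, worst quality, low quality, logo, text, watermark, username
```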

Of course, it is completely fine to use with other similar negative embeddings.

More examples and tests

How does it work?

I tried to make SD learn what is really disgusting using the DeepDream algorithm; the dataset is ImageNet-mini (1,000 images chosen at random from the dataset).

DeepDream output is REALLLLLY disgusting 🤮 and the process of training this model genuinely caused me physical discomfort 😂
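The core of DeepDream is gradient ascent on a feature activation: nudge the image so a chosen detector fires harder, which exaggerates whatever (often nightmarish) patterns it responds to. A minimal NumPy sketch of that loop, with a fixed linear "detector" standing in for a real network layer (an illustration of the general technique, not the author's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((3, 32, 32))            # start from a random RGB "image"
filt = rng.standard_normal((3, 32, 32))  # a fixed stand-in feature detector

def dream_step(img, lr=0.01):
    # activation = <filt, img>; its gradient w.r.t. img is just filt,
    # so gradient ascent pushes the image toward the detector's pattern.
    return img + lr * filt

for _ in range(10):
    img = dream_step(img)
```

In real DeepDream the detector is a convolutional layer of a trained network and the gradient comes from backpropagation, but the loop is the same: repeatedly amplify whatever the layer already sees in the image.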

Backup

https://huggingface.co/lenML/DeepNegative/tree/main