Deep Negative V1.x

Type: Embedding
Published: Jan 20, 2023
Base Model: SD 1.5
Training: 10,000 steps, 100 epochs
Trigger Words: ng_deepnegative_v1_75t
Hash (AutoV2): 54E7E4826D
Creator: lenML

This embedding will tell you what is REALLY DISGUSTING🤢🤮

So please put it in the negative prompt😜
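
For example, with the 🤗 diffusers library the embedding can be loaded as a textual inversion and referenced by its trigger word inside negative_prompt. This is only a minimal sketch; the local file name is an assumption, and in the A1111 webui you simply type ng_deepnegative_v1_75t into the Negative prompt box instead.

```python
# Minimal sketch: use Deep Negative as a negative-prompt embedding with diffusers.
# "ng_deepnegative_v1_75t.pt" is an assumed local path to the downloaded file.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Register the embedding under its trigger word so the tokenizer recognizes it.
pipe.load_textual_inversion("ng_deepnegative_v1_75t.pt", token="ng_deepnegative_v1_75t")

image = pipe(
    prompt="portrait photo of a woman, detailed, sharp focus",
    negative_prompt="ng_deepnegative_v1_75t, worst quality, low quality",
    num_inference_steps=30,
).images[0]
image.save("out.png")
```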

⚠ This model is not trained for SDXL and may produce undesired results when used with SDXL.

If you use SDXL, this alternative is recommended instead 👇

another deep-negative:

TOP Q&A

  • How do I use a TI (Textual Inversion) model?

https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Textual-Inversion

  • What is a negative prompt?

https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Negative-prompt

[Special Reminder] If your webui reports any of the following errors:

- CUDA: CUDA error: device-side assert triggered

- Assertion -sizes[i] <= index && index < sizes[i] && "index out of bounds" failed

- XXX object has no attribute 'text_cond'

Please try using a model version other than 75T.

> The reason is that many scripts do not properly handle overly long negative prompts (more than 75 tokens), so choosing a smaller-token version can avoid the problem.
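
As a rough way to check how close a negative prompt is to that limit, you can count tokens with the same CLIP tokenizer that SD 1.5 uses. This is only a sketch under assumptions: the tokenizer ID below is the standard CLIP one, and a textual-inversion trigger occupies as many slots as the embedding has vectors, which a plain tokenizer cannot see, so that number is added by hand.

```python
# Minimal sketch: estimate how many of the 75 prompt-token slots a negative prompt uses.
# Caveat: the embedding's trigger word occupies as many slots as its vector count,
# which the tokenizer cannot see, so it is added manually below.
from transformers import CLIPTokenizer

# The tokenizer used by SD 1.5's text encoder.
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

rest_of_prompt = "worst quality, low quality, logo, text, watermark, username"
embedding_vectors = 16  # e.g. the 16T version; the 75T version alone fills all 75 slots

used = embedding_vectors + len(tokenizer.encode(rest_of_prompt, add_special_tokens=False))
print(f"{used} / 75 token slots used")
```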

[Update:230120] What does it do?

This embedding learns what disgusting compositions and color patterns look like, including faulty human anatomy, offensive color schemes, inverted spatial structures, and more. Placing it in the negative prompt goes a long way toward avoiding these things.

-

What do 2T, 4T, 16T, 32T mean?

The number of vectors per token the embedding uses.
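
If you want to verify the vector count of a downloaded file yourself, a quick way (assuming the A1111-style PickleTensor layout with a "string_to_param" entry, and a hypothetical file name) is to load it with PyTorch and print the tensor shape:

```python
# Minimal sketch: print the vectors-per-token of an A1111-style .pt embedding.
# The file name is hypothetical; such embeddings keep their vectors under "string_to_param".
import torch

data = torch.load("ng_deepnegative_v1_16t.pt", map_location="cpu")  # weights_only=False may be needed on newer PyTorch
for name, param in data["string_to_param"].items():
    # For SD 1.5 embeddings the shape is [num_vectors, 768].
    print(name, tuple(param.shape))
```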

[Update:230120] What are 64T and 75T?

64T: trained for over 30,000 steps on mixed datasets.

75T: the maximum embedding size; trained for 10,000 steps on a special dataset (generated by many different SD models with special reverse processing).

Which one should you choose?

  • 75T: The most "easy to use" embedding, trained on an accurate dataset created in a special way, with almost no side effects. It contains enough information to cover various usage scenarios. However, on some well-trained models its effect can be hard to notice, and the changes it makes may be subtle rather than drastic.

  • 64T: Works with all models, but has side effects, so some tuning is required to find the best weight. Recommended: [(NG_DeepNegative_V1_64T:0.9):0.1] (see the syntax note after this list).

  • 32T: Useful, but the effect can be too much.

  • 16T: Reduces the chance of drawing bad anatomy, but may draw ugly faces. Well suited to improving architecture.

  • 4T: Reduces the chance of drawing bad anatomy, but has only a slight effect on light and shadow.

  • 2T: "Easy to use" like 75T, but with only a slight effect.
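
For reference, the recommended 64T weighting above uses the A1111 webui prompt syntax: (NG_DeepNegative_V1_64T:0.9) scales the embedding's attention to 0.9, and wrapping that as [ ... :0.1] only introduces it after roughly 10% of the sampling steps have passed, so the very first steps are left untouched. If you use another frontend, adjust the weight with whatever syntax it provides.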

Suggestion

Because this embedding learns how to create disgusting concepts, it cannot by itself improve picture quality, so it is best used together with negative prompts such as (worst quality, low quality, logo, text, watermark, username).

Of course, it is completely fine to use it together with other similar negative embeddings.
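
Putting that together, a typical negative prompt would look like `ng_deepnegative_v1_75t, worst quality, low quality, logo, text, watermark, username` (this assumes the 75T version; substitute the trigger word of whichever version you downloaded).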

More examples and tests

How does it work?

I tried to make SD learn what is really disgusting using the DeepDream algorithm; the dataset is ImageNet-mini (1,000 images randomly re-sampled from it).

DeepDream output is REALLLLLLLLLLLLLLLLLLLLLY disgusting 🤮, and the process of training this model genuinely made me feel physical discomfort 😂
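
For readers unfamiliar with DeepDream, the gist is gradient ascent on an input image so that a pretrained CNN's activations get amplified, which is what produces those nauseating textures. The sketch below is purely illustrative and is not the actual pipeline used to build this dataset; the network, layer index, step count, and learning rate are all arbitrary choices.

```python
# Purely illustrative DeepDream sketch, NOT the actual dataset pipeline:
# gradient ascent on the input image to amplify a pretrained CNN's activations.
import torch
import torchvision.transforms.functional as TF
from torchvision import models
from PIL import Image

cnn = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval().requires_grad_(False)

img = Image.open("input.jpg").convert("RGB")          # any ImageNet-style photo
x = TF.to_tensor(TF.resize(img, 384)).unsqueeze(0).requires_grad_(True)

TARGET_LAYER = 20                                      # an arbitrary mid-level conv layer
for _ in range(30):
    feat = x
    for i, layer in enumerate(cnn):
        feat = layer(feat)
        if i == TARGET_LAYER:
            break
    loss = feat.norm()                                 # "amplify whatever the net sees"
    loss.backward()
    with torch.no_grad():
        x += 0.01 * x.grad / (x.grad.abs().mean() + 1e-8)
        x.grad.zero_()
        x.clamp_(0, 1)

TF.to_pil_image(x.squeeze(0)).save("deepdream.jpg")
```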

Backup

https://huggingface.co/lenML/DeepNegative/tree/main