Published | Feb 28, 2025
Base Model | Illustrious
Training | Steps: 310,000, Epochs: 40
Hash | AutoV2 A88C1F5578
Update 2/28/25: I released the first successful Flat model (it took too many attempts to count).
The difficulty of this method required more time. After many trials it has become better than Squeeze. I did not expect this...
Details
Base Model: Illustrious
Detox Methods: Flat (v1)
Starting Tips: Use artist tags before using quality tags. Quality tags can improve the visuals, but they can also remove variability and knowledge of concepts. All detox models require specific prompting: if you do not specify that you want an element, the model will not read your mind (there is still some natural variance).
Positive Tags (optional): original,newest,masterpiece,best quality,amazing quality,high quality,very aesthetic,absurdres,highres
Using too many positives can slow denoising (more detail must be added), so use a higher step count if you have trouble.
Negative Tags (optional): worst quality,low quality,normal quality,scanned,scanlines,sketch,unfinished,jpeg artifacts,lowres,blurry
bold = destructive, italics = unreliable; quality negatives can make the style worse (use as you prefer)
Sampler: Euler A, DPM++.
Scheduler: Normal/Karras/Beta
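To make these settings concrete, here is a minimal sketch using the diffusers library. The checkpoint filename and the subject tags are placeholders; the scheduler and tag lists mirror the settings above:

```python
import torch
from diffusers import StableDiffusionXLPipeline, EulerAncestralDiscreteScheduler

# Load the checkpoint; "detox_flat_v1.safetensors" is a hypothetical local filename.
pipe = StableDiffusionXLPipeline.from_single_file(
    "detox_flat_v1.safetensors", torch_dtype=torch.float16
).to("cuda")

# Euler A with the default (Normal) schedule, per the Sampler/Scheduler notes above.
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

# Artist/subject tags first, then the optional quality tags.
prompt = (
    "1girl, solo, looking at viewer, "  # placeholder subject tags
    "original, newest, masterpiece, best quality, amazing quality, "
    "high quality, very aesthetic, absurdres, highres"
)
negative = (
    "worst quality, low quality, normal quality, scanned, scanlines, "
    "sketch, unfinished, jpeg artifacts, lowres, blurry"
)

# Heavy positive tag stacks add detail that takes longer to denoise,
# so raise num_inference_steps if results look underbaked.
image = pipe(prompt, negative_prompt=negative, num_inference_steps=28).images[0]
image.save("flat_v1_sample.png")
```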
Update 12/7/24: I released the first successful Squeeze model (it took 4 attempts).
I do not like the results of the other methods yet, so I will wait. The next release will most likely be a Smooth model because it is closest to Squeeze in quality.
Details
Base Model: Illustrious
Detox Methods: Squeeze (v1)
Starting Tips: Use artist tags before using quality tags. Quality tags can improve the visuals, but they can also remove variability and knowledge of concepts. All detox models require specific prompting: if you do not specify that you want an element, the model will not read your mind (there is still some natural variance).
Positive Tags (optional): original,newest,masterpiece,best quality,amazing quality,high quality,very aesthetic,absurdres,highres
Using too many positives can slow denoising (more detail must be added), so use a higher step count if you have trouble.
Negative Tags (optional): worst quality,low quality,normal quality,sketch,unfinished,jpeg artifacts,lowres,blurry
bold = destructive, italics = unreliable; quality negatives can make the style worse (use as you prefer)
Sampler: Euler A. I have not tested other samplers, and they are expected to fail; Squeeze models expect noise to be added at each step.
Scheduler: Normal/Karras
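In code, the Euler A requirement maps to an ancestral scheduler. As a minimal sketch (the filename is hypothetical), the one setting that must be pinned is the scheduler, since Squeeze expects noise to be re-injected at each step:

```python
from diffusers import StableDiffusionXLPipeline, EulerAncestralDiscreteScheduler

# "detox_squeeze_v1.safetensors" is a hypothetical local filename.
pipe = StableDiffusionXLPipeline.from_single_file("detox_squeeze_v1.safetensors")

# Euler A ("ancestral") re-adds noise at every step, which Squeeze models
# expect; non-ancestral samplers are untested here and expected to fail.
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
```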
Detox Methods
The base model is processed and retrained using a specific objective. The methods are below.
Squeeze - Maximize the details and prompt understanding. It will discard bad parts of the base model before retraining, so LoRAs sometimes break if the base model is heavily poisoned.
Smooth - Maximize the intersection of the Squeeze method and the base model. It retains more of the base model, so it is good for LoRAs.
Flat - Maximize stability and do more retraining afterward. It discards many parts of the base model; for example, Flat v1 is closer to base SDXL than to Illustrious. This method might ignore or break your LoRA. The result is a unique model with consistent details and prompt understanding. The focus on consistency means it can produce unfavorable results, but it should be a good base model for finetuning (I have not tested this). The model will be sensitive to new data because this method tries to distribute it evenly.
Shine - Maximize the details, but it can ignore prompt knowledge. This method is almost successful now that I have reworked it. It is now easy to control and still seems okay for LoRAs, but there is a blur issue that likely needs more training time.
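The methods themselves are not published, so the following is only a loose, generic illustration of the shared idea of "discarding parts of the base model before retraining," using plain magnitude pruning in PyTorch. This is a stand-in for the concept, not the author's actual procedure:

```python
import torch

def zero_small_weights(state_dict, fraction=0.1):
    """Zero the smallest-magnitude `fraction` of each 2D+ float tensor.

    A generic 'discard before retraining' stand-in; the real Detox
    methods select what to discard by their own (unpublished) objectives.
    """
    pruned = {}
    for name, t in state_dict.items():
        if t.is_floating_point() and t.ndim >= 2:
            k = max(1, int(t.numel() * fraction))
            # The k-th smallest absolute value becomes the cutoff threshold.
            threshold = t.abs().flatten().kthvalue(k).values
            pruned[name] = torch.where(t.abs() < threshold, torch.zeros_like(t), t)
        else:
            pruned[name] = t  # leave biases, norms, and non-float tensors intact
    return pruned
```

After a pass like this, the zeroed weights would be relearned from fresh training data, which is the "replace with freshly trained weights" half described in the Intro below.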
Intro
This is a series of models named "Detox". The name means that we apply a destructive finetuning process to a base model, removing many poisoned weights and replacing them with freshly trained weights. A specific set of objectives is chosen from a list of designed methods; these objectives modify the finetuning process to target the desired method and result.
What is poison?
Some models are trained on generated images; these are considered poisoned because the generation process leaves noise in all images. If the training data does not distinguish normal images from generated images, this noise will be present in every image the trained model creates. Generated images have further issues, such as prompts that can include hallucinations. Generated images from SD1 do not respect the prompt, and SDXL can fail at this too, though less commonly. If a model is trained to hallucinate, it will do it very well!
Usage
Follow the usage instructions of the base model found here:
N/A; refer to the Details sections above and the example images.