[This comparison article applies to AingDiffusion v15 and ElixirProject v1.1. It may no longer be accurate for future versions.]
You might have noticed that I have released a new series of models called ElixirProject. This article should help you tell ElixirProject and AingDiffusion apart.
Method used
AingDiffusion is a merge of a number of anime models plus my own dataset, which was trained on the previous version of AingDiffusion.
ElixirProject is a fine-tune of AnyLoRA, and its dataset was also trained entirely on AnyLoRA.
AingDiffusion and ElixirProject might share datasets; however, each model still treats those datasets differently.
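To make the distinction concrete, here is a minimal sketch of what a checkpoint merge is in practice: a weighted sum of two models' weights. This only illustrates merging in general; the file names and the 0.5 ratio are placeholders, not AingDiffusion's actual recipe.

```python
# Minimal sketch of a weighted-sum checkpoint merge for SD-1.5-style
# .safetensors checkpoints with matching key sets.
# File names and the 0.5 ratio are hypothetical examples.
from safetensors.torch import load_file, save_file

def merge_checkpoints(path_a, path_b, ratio=0.5, out_path="merged.safetensors"):
    """Weighted-sum merge: merged = ratio * A + (1 - ratio) * B."""
    state_a = load_file(path_a)
    state_b = load_file(path_b)
    merged = {}
    for key, tensor_a in state_a.items():
        if key in state_b:
            merged[key] = ratio * tensor_a + (1.0 - ratio) * state_b[key]
        else:
            merged[key] = tensor_a  # keep keys unique to model A as-is
    save_file(merged, out_path)

merge_checkpoints("anime_model_a.safetensors", "anime_model_b.safetensors")
```

A fine-tune, by contrast, continues training one base model (here, AnyLoRA) on a dataset instead of averaging existing weights.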
Image generation
AingDiffusion generates more stable images at a CFG scale of around 7, while ElixirProject generates more stable images at a CFG scale of around 12. The difference is most visible in images with white backgrounds, especially with AingDiffusion.
Full image resolution: here.
If you look at the AingDiffusion samples, the image generated at CFG scale 12 is noticeably noisy.
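If you want to reproduce the comparison yourself, here is a minimal sketch using diffusers that renders the same prompt at each model's suggested CFG scale (the guidance_scale parameter). The local checkpoint paths and the prompt are placeholders; point them at wherever you keep the models.

```python
# Sketch: render the same prompt at each model's suggested CFG scale.
# Checkpoint paths and prompt are placeholders, not exact release filenames.
import torch
from diffusers import StableDiffusionPipeline

prompt = "1girl, white background, masterpiece"

for path, cfg in [("aingdiffusion_v15.safetensors", 7.0),
                  ("elixirproject_v1_1.safetensors", 12.0)]:
    pipe = StableDiffusionPipeline.from_single_file(
        path, torch_dtype=torch.float16
    ).to("cuda")
    image = pipe(prompt, guidance_scale=cfg, num_inference_steps=28).images[0]
    image.save(f"{path.rsplit('.', 1)[0]}_cfg{int(cfg)}.png")
```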
ElixirProject is likely to generate consistent-looking images between versions, while AingDiffusion probably will not, due to the nature of a merged model.
Both models are intended to generate anime-style images and are mostly trained on female characters.
Both models will generate different images given the same settings and prompt.
Conclusion
AingDiffusion and ElixirProject are two different models built with two different training methods.
ElixirProject IS NOT an alternative merge of AingDiffusion.
Both models will most likely generate two different images with the same settings and prompt.
I will continue to maintain and update both models.
Thank you for reading. After writing this, I realized how useless this article might be to you, and that you probably wouldn't care anyway. So, thank you.