📣 FFUSION AI SD 2.1 768 BaSE Public 1.0.0 Release is Here!
Diffusers available at https://huggingface.co/FFusion
STABLE DIFFUSION 2.1 768+ MODEL
Note: if you haven't used Stable Diffusion 2.1 before, stick to 1.5-based models; 2.1 responds to prompts differently.
Introducing FFusion.AI-beta-Playground on Hugging Face Spaces!
We're thrilled to announce the launch of our new application, FFusion.AI-beta-Playground, now live on Hugging Face Spaces! This cutting-edge tool harnesses the power of AI to generate stunning images based on your prompts.
With FFusion.AI-beta-Playground, you can:
Experiment with different schedulers to fine-tune the image generation process.
View the generated images right in your browser and save them for later use.
Our application is built on top of the diffusers library and uses StableDiffusionPipeline for image generation, with a Gradio front end for a user-friendly interface. And here's the exciting part: very soon it will run on a CUDA-enabled environment for optimal performance, thanks to our partners at RUNPOD!
Stay tuned for this upcoming enhancement, which will take your image generation experience to the next level. We're thrilled to be partnering with RUNPOD.io to bring you this technology.
To get started, simply enter your prompt, select the models you want to use, choose a scheduler, and let our application do the rest.
We're excited to see what you'll create with FFusion.AI-beta-Playground. Your feedback is invaluable to us, so please don't hesitate to share your thoughts and suggestions. Enjoy exploring the possibilities of AI-powered image generation!
🔭 We are thrilled to launch the public beta release of FFUSION Ai, though we want to clarify that it's currently limited in its breadth. Having been trained on just a fraction of our full image collection (20%), the capabilities of the model are not yet fully realized. This early version is primarily intended for experimentation with various prompt combinations and initial testing.
💡 While we're committed to delivering the highest level of excellence, we want to highlight that our model, notably the Unet component, is still developing its proficiency with certain objects and faces. But fear not, we're actively fine-tuning these areas as we progress towards the final release.
🙏 A huge shout out to our Reddit community for their support in alpha testing and for helping the text encoder respond to some exciting fuse ideas. We couldn't have come this far without you!
💡 Your contribution in this beta testing phase is extremely crucial to us. We invite you to explore the model extensively, experiment with it, and do not hesitate to report any prompts that don't meet your expectations. Your feedback is our guiding light in refining the performance and overall quality of FFUSION Ai.
⚠️ Attention: The model is based on Stable Diffusion 2.1 - 512 and is designed for optimal performance up to a resolution of approximately 600-700 pixels. For larger image sizes, we recommend upscaling them independently or patiently waiting for our final release that's just around the corner. This forthcoming release will enhance performance and support for higher resolutions.
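Until the final release lands, upscaling independently can be as simple as a resampling pass after generation. A minimal sketch: here a plain PIL Lanczos upscale is shown on a placeholder image standing in for a pipeline output; in practice you would pass the image returned by the pipeline.

```python
# The model performs best up to roughly 600-700 px, so generate near that size
# and upscale the result afterwards.
from PIL import Image

# Placeholder standing in for a 640x640 image returned by the pipeline.
generated = Image.new("RGB", (640, 640), color=(30, 30, 60))

# Upscale 2x to 1280x1280 with a high-quality resampling filter.
upscaled = generated.resize((1280, 1280), Image.LANCZOS)
print(upscaled.size)  # (1280, 1280)
```

A dedicated upscaler model will give better detail than plain resampling, but Lanczos is a reasonable zero-dependency baseline while waiting for the higher-resolution release.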
👥 Thank you for being part of the FFUSION Ai beta testing community. Your support, feedback, and passion inspire us to continually develop a pioneering tool that is set to revolutionize creativity and visualization. Together, we can shape the future of storytelling and creativity.
🔮 Why not add some effects to your favorite prompts or fuse them together for a surreal twist? (Please note, Pen Pineapple Apple Pan effects and FUSIONS are excluded in this beta version)
🔒 With over 730.9449 hours of dedicated training, our FFUSION AI model draws on a wealth of data subsets and robust datasets developed in collaboration with two enterprise corporate accounts for Midjourney. We also pride ourselves on efficient GPU utilization, making the most of our partnership with Idle Stoev, Source Code Bulgaria, Praesidium CX & BlackSwan Technologies. 🚀
Full transparency on our extensive 700,000-image dataset, training methodologies, classifications, and successful experiments is on its way. This information will be released shortly after the final version, further establishing FFUSION Ai as a trusted tool in the world of AI-powered creativity. Let's continue to imagine, create and explore together!
Model Overview: Unleashing the Power of Imagination!
FFUSION AI is a state-of-the-art image generation and transformation tool, developed around the leading Latent Diffusion Model. Leveraging Stable Diffusion 2.1, FFUSION AI converts your prompts into captivating artworks. Discover an imaginative landscape where ideas come to life in vibrant, surreal visuals.
Developed by: Idle Stoev, Source Code Bulgaria, Praesidium CX & BlackSwan Technologies
Shared by: FFusion AI
Model type: Diffusion-based text-to-image generation model
Language(s) (NLP): English
License: CreativeML Open RAIL++-M License
Model Use: Enabling Creativity and Exploring AI Frontiers
Designed for research and artistic exploration, FFUSION AI serves as a versatile tool in a variety of scenarios:
Investigating biases and constraints in generative models
Unleashing creativity in artistic endeavors
Infusing AI-driven innovation into educational or creative tools
Furthering research in the exciting field of generative models
Out-of-Scope Use and Prohibited Misuse:
Generating factually inaccurate representations of people or events
Inflicting harm or spreading malicious content such as demeaning, dehumanizing, or offensive imagery
Creating harmful stereotypes or spreading discrimination
Impersonating individuals without their consent
Disseminating non-consensual explicit content or misinformation
Violating copyrights or usage terms of licensed material
Model Limitations and Bias
While our model brings us closer to the future of AI-driven creativity, there are several limitations:
Achieving perfect photorealism or surrealism is still an ongoing challenge.
Rendering legible text remains difficult without roughly 30 minutes of additional fine-tuning on your brand's material.
Accurate generation of human faces, especially far away faces, is not guaranteed (yet).
We are thrilled to announce:
Version 512 Beta: Featuring LiTE and MiD BFG model variations
Version 768 Alpha: BaSE, FUSION, and FFUSION models with enhanced training capabilities, including LoRA, LyCORIS, DyLoRA & kohya-ss/sd-scripts.
Version 768 BaSE: A base-ready model for easily applying the more than 200 LoRA models built and trained along the way.
In line with our commitment to sustainability, FFUSION AI has been designed with carbon efficiency in mind:
Hardware Type: A100 PCIe 40GB
Hours used: 1190
Cloud Provider: CoreWeave & Runpod (official partner)
Compute Region: US Cyxtera Chicago Data Center - ORD1 / EU - CZ & EU - RO
Carbon Emitted: 124.95 kg of CO2 (calculated via Machine Learning Impact calculator)
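The reported figure is consistent with the Machine Learning Impact calculator's formula, which multiplies GPU power draw by runtime and the grid's carbon intensity. A quick sanity check, assuming the A100 PCIe 40GB's 250 W TDP and a 0.42 kg CO2/kWh grid intensity (both assumptions for illustration):

```python
# emissions = GPU power (kW) x hours x grid carbon intensity (kg CO2 / kWh)
gpu_power_kw = 0.25        # assumed A100 PCIe 40GB TDP: 250 W
hours = 1190               # reported GPU hours
carbon_intensity = 0.42    # assumed kg CO2 per kWh for the grid mix

energy_kwh = gpu_power_kw * hours             # 297.5 kWh
emissions_kg = energy_kwh * carbon_intensity
print(round(emissions_kg, 2))  # 124.95
```

Under those assumptions the arithmetic reproduces the reported 124.95 kg of CO2 exactly.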
That said, all LoRA and subsequent models are based on this initial training.
Model Card Authors
This model card was authored by Idle Stoev and is based on the Stability AI - Stable Diffusion 2.1 model card.
Model Card Contact
Download the FFUSION AI diffusers - 768 BaSE Release here.
🔬 Intended Use: From Research to Artistry 🎨