In the ever-evolving domain of generative AI, a new contender has emerged, capturing the attention of tech enthusiasts and AI aficionados alike. Known as "Fooocus," this application has been thoroughly tested and has proven its mettle, particularly in its primary role as a frontend for SDXL base models. I've had the opportunity to delve into its capabilities, and here's why I find it impressive.
Firstly, Fooocus's focus on SDXL base models, while also accommodating SD models as refiners, sets it apart in the generative AI landscape. This focus does not render it inaccessible to users with less powerful hardware. For instance, the developer successfully tested it on an Nvidia laptop with 16GB of system RAM and 6GB of VRAM (Nvidia GeForce RTX 3060 Laptop GPU), achieving a generation speed of 1.35 seconds per iteration. That speaks volumes about its efficiency and accessibility.
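To put that 1.35 s/iteration figure into perspective, here is a rough back-of-the-envelope sketch of per-image time. The 30-step sampling count below is my own assumption for illustration, not a figure reported by the developer:

```python
# Back-of-the-envelope: time per image at a given sampling speed.
SECONDS_PER_ITERATION = 1.35  # reported speed on the 6GB RTX 3060 laptop
STEPS_PER_IMAGE = 30          # assumed number of sampling steps (not from the test)

seconds_per_image = SECONDS_PER_ITERATION * STEPS_PER_IMAGE
print(f"~{seconds_per_image:.1f} s per image")  # ~40.5 s per image
```

So even on a 6GB laptop GPU, an image would land in well under a minute at typical step counts, which matches the "efficient and accessible" impression above.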
Moreover, Fooocus runs on the Gradio interface, akin to A1111 and SD.Next, ensuring ease of use. Its availability on Google Colab and Runpod further extends its reach, making it a versatile tool for a wide array of users. While it doesn't offer the extensive customization features of A1111, Fooocus excels at simple image creation. This is evidenced by the examples in my gallery (links provided): some images were created with the integrated model, others with SDXL models from Civitai, all without any issues.
On a personal note, my current setup is an RTX 3090 with 24GB VRAM and 64GB RAM. Comparing SDXL image creation times between A1111 and Fooocus under the same configuration, Fooocus significantly outperforms its counterpart: in the time A1111 produced one image, Fooocus had already completed four.
Fooocus's setup adds to its appeal: the installation process runs automatically. Anyone interested in trying it can simply download the latest version from the GitHub releases section (note: the file is quite large).
In terms of prompting, Fooocus simplifies the process: you can type plain, natural-language prompts without any complex syntax. For example, one of my prompts was, "A woman sits in the park on a bench. The moon shines brightly as the woman smokes her cigarette." Can you spot this image in my posts?
In conclusion, while this is just my perspective, I encourage others to explore Fooocus for themselves. Its performance, ease of use, and accessibility make it a noteworthy addition to the generative AI space.
I've now also tested Fooocus in a virtual machine equipped with an RTX 2060 (6GB VRAM) and 8GB of RAM. Compared to my RTX 3090 setup (24GB VRAM), image generation times roughly tripled, but in my opinion they remain acceptable. This further demonstrates Fooocus's adaptability across different hardware configurations: even on modest hardware it continues to deliver impressive results, making it a commendable option for enthusiasts without access to top-tier systems.