This article will act as a sort of "Table of Contents" and Introduction to the XenoEngine Stable Diffusion WebUI I have modified to work (passably well) in a free no GPU environment on Huggingface Spaces. It's not a perfect solution, but it can be a great (and most importantly free) way to do certain SD tasks.
Other Articles (listed below) will follow with tips, tricks, and other happy things to help you make the most of it all.
If you enjoy this work, feel free to buy me a beer: https://www.buymeacoffee.com/IAmXenos
Introduction (You Are Here)
23-09-11: Updated to SD WebUI 1.6 - SDXL isn't working yet, but it works great with 1.5-based models. Updated Xenos' models to the latest versions. I'm in the process of cleaning up the embeddings, LoRA, and wildcard folders. Added all of Block454's TIs and created a wildcard file; using __cc-block454__ in the prompt will pull a random Pornstar from the list.
23-07-20: Updated with latest Xenos models, updated extensions to more recent stable versions, added a repository of tools, character models, QOL helpers, etc.
Why, For the Love of Pizza, Why?
Stable Diffusion is not really meant to run without a pretty powerful GPU. That said - it can be done if you don't mind waiting a little while. It's different, though. With Google cracking down on Colab usage and pricing structures for renting GPU on other services - sometimes "slow and free" is just what you need (especially when you're just starting out). In this post, we'll go through the differences along with some of the things I've put into the HF Space as well as tricks I've learned to make the most of it.
The Bad News and the Good News
As with most things, using this build without a GPU comes at a bit of a price. The good news is that a lot of these things can be overcome. I'm a "Gimme the bad news first" kinda guy, so... let's dig in.
The Bad News
Slow Render Times: I tend to start with 384x512 renders that I then upscale 2x to 768x1024, with 20 steps of Euler A and then a 10-step Latent upscale. It takes roughly 10 minutes for these to render (a bit longer if there are LoRas involved). If you run a batch with more images, each following image adds about 5-7 minutes (not the full 10 each, but still... it's a bit of a wait).
The Cancel/Skip Button Doesn't Respond: I'm not sure if this is a Huggingface or a Docker issue - but it's probably the most annoying thing about all of this. When you see a run that you know is going to suck - you have to wait it out. (You could restart the Space, but because it has to rebuild everything from scratch, like Colab and other remote installs, it often takes longer to restart than to wait it out.) I am constantly looking for a fix for this, so I hope this item can be removed from the list at some point.
The Results are Different: If you're trying to replicate an image you've seen and have the generation info for - or if you take an image from the HF Space and recreate it in a GPU environment - it won't be the same. Sometimes the differences are huge; other times they're more subtle. I'm not 100% sure what the deciding factor is, but I know it comes down to the fact that there's just less RAM and overall processing power to work with here, so it trades off certain things. Images rendered with a GPU are generally what I would call "tighter" too. I can get "good" results here - but I still want to run a GPU to get "great" results.
Huggingface No GPU Render:
Identical Settings on T4 GPU:
No Go for Super High Resolution/High Detail Outputs: Once you get much above 640px as a starting resolution with a 2x upscale - the system can simply crash out on you. This can be fine for web work (where you want smaller, optimized images anyway), for making game characters for your RPG Maker/RenPy/other games, or any situation where roughly 1200px resolution is fine.
You Have to Actually Save Outputs: This one is relatively minor, but it's still a part of the bad news. I haven't yet found a (reliable) way to automatically send outputs to Google Drive or some other storage space - so you have to actually save the images you like to your hard drive. I've added an image browser and some bulk-download extensions to assist with that - but you do have to save things. NOTE: If the system crashes (which doesn't happen a lot, but can happen from time to time - especially if you're messing around with things outside of your normal workflow), the files are lost - so you'll want to save the good stuff often.
No Training, No Merging: There just isn't enough processing power here to make those work effectively. I DID actually get a training session to run without crashing and it did produce a result - but it was not a good result.
Saving Configs, Styles, Etc. Takes a Little Extra Work: Since I haven't found a reliable way to mirror saves and various other files (in real time, anyway), when you save your config files, styles, etc., they save on the installed instance, but not in your folder. The good news is that I'll provide quick links to the major files so you can save the settings, grab the file, and re-upload or paste the updated content into the master file on your HF Space. (More on this later.)
The Good News
Under most conditions you can get pretty good results. Some models - especially those with lots of detail going on or ones that are overbaked - can give you lots of problems, but most general-purpose Checkpoints and LoRas work just fine. As mentioned above, the paid GPU time is better - but in a lot of cases the quality difference isn't worth the money you'd pay for it. This is especially true once you home in on and master some helpful tokens that work well with each of your specific models.
Many Extensions Work (and Many Useful Ones Are Already Installed): Unless the extension requires a GPU or uses too much processing power to be handled by the CPU, it should work. You'll need to edit a file to add any new extension that you want reloaded every time you restart, but you can use the extension manager to automatically install and test an extension before doing that. (I'll put more info on that in a later article in this series.)
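To give a rough idea of what that file edit looks like - the actual script name, repo URL, and install path below are all placeholders, since they vary by setup - a persistent extension install usually amounts to one `git clone` line added to whatever script runs when the Space boots:

```shell
# Sketch of a startup-script addition (repo URL and install path are
# placeholders): clone the extension into the WebUI's extensions folder
# so it gets reinstalled every time the Space rebuilds from scratch.
git clone https://github.com/some-user/some-extension \
  /home/user/app/stable-diffusion-webui/extensions/some-extension
```

The extension manager does the same clone for you at runtime; the startup-script line just makes it survive a rebuild.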
Grab Models From Anywhere: You can grab Checkpoints, Embeddings, LoRas, LyCORIS, and all the other models from all sorts of places - either permanently (so it loads every time) by editing a startup file, or for one-off use via the Civitai Helper and/or Filer extensions. (Filer will grab from just about anywhere - the Civitai Helper will grab from there, plus get the image and other info all set up for you.)
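For the "loads every time" route, the idea is to fetch the files into the folders the WebUI scans at startup. A minimal sketch - the URLs and base path are placeholders, though the `models/Stable-diffusion` and `models/Lora` folder names are the WebUI's standard locations:

```shell
# Sketch of startup-script download lines (URLs and base path are
# placeholders): pull models into the folders the WebUI scans on boot.
BASE=/home/user/app/stable-diffusion-webui
wget -q "https://example.com/some-checkpoint.safetensors" \
  -O "$BASE/models/Stable-diffusion/some-checkpoint.safetensors"
wget -q "https://example.com/some-lora.safetensors" \
  -O "$BASE/models/Lora/some-lora.safetensors"
```

Textual Inversion embeddings go in the top-level `embeddings` folder rather than under `models`.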
It's Always Running: With the exception of crashes or not using it for a few days, your space is always running. With many other customizable remote installs, you've got to wait as long as 10-15 minutes for it to boot up and install every time. With this, when an idea hits, open up a tab and put in your prompt, or drag an image into PNG Info to see if there is generation info, or whatever else happens to suit your whimsy.
Full Features, Customize It How You Like It: This, to me, is the main attraction. There are tons of online AI art generation services - many offering free credits to use them. But to really get in there and see the potential of what it can do, you've got to pay up or (in many cases) find another service that isn't keyed in to one specific set of tools for you to use. With the exception of things that absolutely require a GPU (such as training, animations, certain extensions like the BG Remover, and so on), you're free to (fairly easily) download, install, and try everything out there. (And a lot of the cool stuff that I use a lot is already installed and ready to go for you.)
Who Would Want To Use This?
That's a good question. This probably isn't the best choice for any production department - though I suppose certain tools could be handy to have at your fingertips. Let's take a look at the reasons I started to use it and maybe you can find some of the same reasons or reasons of your own.
Seeing What It's All About - For Free: Sure, there are some free services out there with free (often daily) credits - but on most of them you can't just grab any model or LoRa or controlnet model and "kick the tires". With Google cracking down on Colab usage - the whole "unlimited choices" and "free" just don't come together.
Concept Testing: I work from home and, quite often, I'll be plugging away at something and wonder, "Does Stable Diffusion understand that concept (or person, or artist, or other term)?" With this, I can hop over to my HF tab, plug in a prompt, and go back to work. In 15 minutes or so, I'll have a pair of images run and can see if (and how) that concept played into the picture.
Weight Mixing: When playing with different LoRas or Embeddings that I download, it can often take quite a bit of experimentation to get everything playing nicely together. That LoRa that makes cool Hipster Outfits works great by itself, but as soon as I try to mix in that awesome style LoRa at the same time - it all goes to crap. Now, without burning my free credits or paying for GPU time, I can run comparisons over the course of the day, or X/Y/Z grid it and let it run, to find the right balance. It may need a smidge more tweaking once I take it to my GPU production system - but the values I get from this give me excellent starting points and save me as many as 10-20 (paid) test renders just to lock in on a good place to start.
Merging: While actual "training" of models would be impractical - you can merge and mix models with relative ease. Got a great mix of LoRas that creates a balanced recipe you want to use over and over again? Use the (already installed) SuperMerger and combine your LoRas at the specified weights. Then you can use that new LoRa to get that style/effect every time. That LoRa can even be merged into a Checkpoint to teach it that style or to make it render more closely to that style by default.
No-Budget Game Assets: While super-clear realistic photography and highly defined fine art are definitely off the table here, no-GPU renders at low resolution - the kind you might use as character busts in RPG Maker, RenPy visual novels, etc. - can easily be accomplished. Even with the comparatively long generation times, it's still no longer than waiting for a DAZ IRAY render.
Read on to Setup and Install Your own HF Space.