Introducing RunDiffusion XL Beta!
RunDiffusion.com has done it again! One of the world's first SDXL models!
Join our 15k-member Discord, where we help you with your projects, talk about best practices, post exciting updates and news, and share the amazing art we generate.
Why? Isn't SDXL already amazing?
Yes! But we feel this is a step up. SDXL still has issues with people looking plastic, along with eyes, hands, and extra limbs. We couldn't solve all the problems (hence the beta), but we're close! We tested hundreds of SDXL prompts straight from Civitai.com (using ComfyUI so the pipelines were identical) and found that this model produced better images! See for yourself here.
The model tends to favor closeups or "headshots" over full-body shots unless you explicitly prompt for them. We're not sure if that's a desired feature, so we're open to feedback on it.
Improvements:
- More photorealistic animals, cars, and people.
- Different model, different generations: try it with your favorite prompt!
- Better portraits. SDXL likes to push the subject far into the distance; this model brings things closer.
- Eyes and hands are a bit better (from our testing).
- Better prompt control. Make sure to use commas and break apart subjects and styles!

Known issues:
- A few concepts still look "plastic" (food, for one, but please test your prompts and let us know!).
- Can sometimes be too biased toward people.
- Creativity is a little hindered; this could be considered a "photorealistic" model. But again, test!
- Bias towards "sports cars" lol
Notice that RDXL follows the prompt a bit better: when you ask for "upper body", it tends to deliver that.
How to Use
Prompt as usual. Use commas to separate subjects and styles. If you aren't getting what you want, use the token again, e.g.

`iron man made of lava, suit of armor infused with molten lava, ...etc`

If you don't get Iron Man in your generation, put that token somewhere else in the prompt:

`iron man made of lava, suit of armor infused with molten lava, marvel iron man super hero ...etc`
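The repeat-the-token trick above can be sketched as a tiny helper. This is just an illustration of the advice, not part of any RunDiffusion tooling; the name `emphasize_token` is hypothetical:

```python
def emphasize_token(prompt: str, token: str) -> str:
    """Append `token` again if the prompt doesn't already repeat it.

    SDXL-style prompts here are comma-separated subjects and styles;
    repeating a concept later in the list nudges the model toward it.
    """
    parts = [p.strip() for p in prompt.split(",") if p.strip()]
    # Count how many comma-separated chunks already mention the token.
    mentions = sum(token.lower() in p.lower() for p in parts)
    if mentions < 2:
        parts.append(token)
    return ", ".join(parts)

prompt = "iron man made of lava, suit of armor infused with molten lava"
print(emphasize_token(prompt, "iron man"))
# -> iron man made of lava, suit of armor infused with molten lava, iron man
```

If the token already appears twice, the helper leaves the prompt alone, which matches the spirit of the advice: only repeat a concept when the model is ignoring it.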
Use the standard refiner or the 1.0 refiner with the 0.9 VAE: https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/blob/main/sd_xl_refiner_1.0_0.9vae.safetensors
We used a small dataset of 700 images and trained for 14k steps over 20 epochs, manually captioning everything to be as descriptive as possible. It turns out SDXL is pretty receptive to training! The changes we were able to make are extremely exciting. The future is bright for SDXL!
We want to make generations easy, so we're bringing ComfyUI to the platform: simply drop in images with metadata to generate with one click. This is the future of workflow sharing, and we're very grateful to /u/comfyanonymous for his help getting this working on our platform. Also a special thanks to /u/Searge for his amazing work on SeargeSDXL. We love that workflow! (Could we work on getting the generation data to work on Civitai.com?)
We will continue to improve this model based on the feedback we receive. Get ready for a full release in the coming weeks.