Stable Diffusion 3 (SD3) has been released, but the long-standing issue with generating realistic hands persists. In a test of roughly 20 images, the hands were poorly rendered in every single one. The model adheres to prompts reasonably well, but hand quality remains subpar.
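For anyone who wants to run a similar test locally, here is a minimal sketch using the Hugging Face diffusers library and the publicly released SD3 Medium weights. The prompt, step count, and guidance scale below are illustrative example values, not the exact settings used for the images in this post.

```python
# Minimal SD3 generation sketch via Hugging Face diffusers.
# Assumes you have accepted the model license on the Hub and have
# a GPU with enough VRAM for fp16 inference. Prompt and sampler
# settings are example values, not the ones used in this post.
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="a photo of a person waving at the camera, detailed hands",
    negative_prompt="blurry, deformed hands, extra fingers",
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
image.save("sd3_hand_test.png")
```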
In terms of text generation, SD3 doesn't seem to offer significant improvements over its predecessor, SDXL. In the tests conducted so far, text rendered inside images is comparable in quality to SDXL's, with no notable gains.
Another aspect that needs attention is overall image quality. Without upscaling, the raw output looks poor: the example image in this post was not upscaled at all, which highlights how much refinement the base output still needs.
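If you want to try upscaling the raw output yourself, one common option is Stability's x4 upscaler, also available through diffusers. This is a generic sketch of that workflow, not an official SD3 post-processing step; the file names are placeholders carried over from the example above.

```python
# Hedged sketch: post-hoc 4x upscaling of an SD3 output with
# Stability's x4 upscaler via diffusers. One workflow option
# among many; file names here are placeholders.
import torch
from diffusers import StableDiffusionUpscalePipeline
from PIL import Image

upscaler = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler",
    torch_dtype=torch.float16,
).to("cuda")

# The x4 upscaler was trained on small inputs; a full-resolution
# SD3 image may exhaust VRAM, so this demo shrinks it first.
low_res = Image.open("sd3_hand_test.png").convert("RGB").resize((256, 256))

# The upscaler is conditioned on a text prompt describing the image.
upscaled = upscaler(
    prompt="a photo of a person waving at the camera",
    image=low_res,
).images[0]
upscaled.save("sd3_hand_test_4x.png")
```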
In conclusion, while SD3 shows promise in areas such as prompt adherence, significant issues remain. Realistic hands and un-upscaled image quality are the major weaknesses of this iteration, and further optimization is needed before the model delivers consistently appealing results.

If you have any tips or workarounds for improving SD3's results, please share them. I'm sure things will improve greatly over time as community fine-tunes become available.