
How to make an image HD without changing its size.

I don't know English; the following translation is from ChatGPT. If there are any shortcomings, please forgive me.


Preface

Achieving better results at a lower cost has always been my goal, and I'm excited to now have tangible results to share, benefiting everyone, especially users of SD1.5. Those familiar with me know that I've been tirelessly working to enhance the image quality of SD1.5. When SDXL was first introduced, I attempted to collect SDXL's high-definition images to train a lora, aiming to improve the image quality of SD1.5, and named it SDXLrender_v1.0. This method has proven to be effective ((( link here ))) and I have shared an article on how I trained this lora, which you can check out here ((( link here ))). Now, I've tried some new training methods (some call it the 'photocopy alchemy method,' and I can write another article to introduce how to train if needed). I call it SDXLrender_v2.0, and its effectiveness is astounding! It's very similar to the popular upscale feature in magnific.ai, making me eager to share it. ((( link to the lora here )))


Feature Introduction

Firstly, without altering the size of the image, SDXLrender_v2.0 can make it high-definition. You no longer need Hires. fix or any other SD upscale tool to achieve high-quality visuals.


It can even enhance the image quality of LCM. Special thanks to user @949491917901, who tested this feature.


Secondly, you don't need to worry about SDXLrender_v2.0 causing damage to the image. Typically, adding a lora to the prompt, even if you lock the seed value, will inevitably change the image, such as its composition, character poses, or clothing. This change can be critical, especially when we've already generated an image we're satisfied with. However, SDXLrender_v2.0 faithfully reproduces the original composition, character poses, and even clothing of the image.


How to use it:

  1. Just use txt2img as usual. Once you find the image you want, simply add SDXLrender_v2.0 to the positive prompt (no trigger words needed), lock the seed value, and re-render to obtain a high-quality image.

  2. Directly add SDXLrender_v2.0 in img2img.

  3. If you need a larger image and must use Hires. fix, don't worry. SDXLrender_v2.0 still works very well there, giving better results than Hires. fix alone.
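The steps above can be sketched as a tiny helper. This is a hedged illustration only: it assumes the `<lora:name:weight>` prompt syntax used by the AUTOMATIC1111 web UI, and the function name `with_sdxlrender` is invented for this example.

```python
def with_sdxlrender(positive_prompt: str, weight: float = 0.8) -> str:
    """Append the SDXLrender_v2.0 LoRA tag to a positive prompt.

    Assumes the <lora:name:weight> syntax of the AUTOMATIC1111 web UI.
    No trigger words are needed for this LoRA.
    """
    return f"{positive_prompt}, <lora:SDXLrender_v2.0:{weight}>"

# Keep the prompt, seed, sampler, and image size unchanged; only the
# LoRA tag is added before re-rendering.
prompt = "1girl, looking at viewer, city street"
print(with_sdxlrender(prompt, 0.8))
# → 1girl, looking at viewer, city street, <lora:SDXLrender_v2.0:0.8>
```

The same tag works in img2img (step 2) and together with Hires. fix (step 3); only the weight changes from case to case.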


Points to note:

My sampling method (Sampler): DPM++ 2M SDE Karras

(But this isn't actually important, as it can work with all sampling methods, even LCM.)

My lora weights: 0.3~1.2

My prompt guidance coefficient (CFG Scale): 3~6
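As a quick sanity check, the reported ranges can be collected into a small sketch that validates settings before a render. The function name `check_settings` and the dict layout are invented for illustration; the ranges are simply the ones listed above.

```python
def check_settings(lora_weight: float, cfg_scale: float) -> dict:
    """Validate settings against the ranges reported in this section."""
    if not 0.3 <= lora_weight <= 1.2:
        raise ValueError("LoRA weight outside the reported 0.3~1.2 range")
    if not 3 <= cfg_scale <= 6:
        raise ValueError("CFG Scale outside the reported 3~6 range")
    return {
        # The author's sampler; any sampler (even LCM) is said to work.
        "sampler": "DPM++ 2M SDE Karras",
        "lora_weight": lora_weight,
        "cfg_scale": cfg_scale,
    }

print(check_settings(0.8, 4.5))
```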


Final Words:

Recently, more and more people have been asking me why I keep training models for SD1.5 instead of moving to SDXL. Here's my unified answer: I personally feel that SDXL has tremendous potential, but it also demands higher computer performance. Not everyone can afford expensive high-end graphics cards, and many around me fall into this category. I believe AI technology should serve the masses better, enabling more people to experience the charm of AI. Therefore, choosing SD1.5 is a better option than SDXL. Additionally, the training cost for SDXL is high, and so is the cost of trial and error. If one of my works becomes extremely popular and everyone wants an SDXL version, I might consider training an SDXL model. But until that happens, I'd rather focus on exploring the limits of SD1.5, as everyone's time and energy are limited. Thank you again for your interest. I hope my answer satisfies you, and that you find joy in my works. Peace.

I hope this post has been helpful to you. Hope you have fun!

You can view more of my shared content on my homepage →
