ComfyUI
GPU: RTX 3060
Run parameters:
python ./main.py --preview-method none --lowvram
(VRAM: 10 GB consumed with just one batch)
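For reference, here is a tiny launcher sketch that just wraps the command above and documents what each flag does. It assumes you run it from the ComfyUI folder so that ./main.py resolves; it's not part of my workflow, just a convenience.

```python
# launch_comfyui.py - small wrapper around the command above (a sketch, not part of the original workflow)
import subprocess
import sys

# Same flags as above:
#   --preview-method none : no live sampler previews
#   --lowvram             : ComfyUI's low-VRAM model management mode
cmd = [sys.executable, "./main.py", "--preview-method", "none", "--lowvram"]

# Run from the ComfyUI folder so ./main.py resolves correctly.
subprocess.run(cmd, check=True)
```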
Don't mess with the sampler settings unless you know what you're doing (I don't, lol).
The refiner is the same model as the base.
It works nicely for the gecko eye; I haven't tested other prompts yet.
If you figure out what makes this work, tell me in the comments; I was just trying to find a RAM-friendly refiner.
Not tested with other models, but it should work similarly (a rough sketch of the two-pass idea is below).
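My full workflow isn't pasted here, but this is a minimal sketch of the general idea using ComfyUI's HTTP API (default http://127.0.0.1:8188/prompt): one KSampler does the normal generation, then a second KSampler loaded with the same checkpoint re-denoises the resulting latent at a low denoise value and acts as the "refiner". The checkpoint name, prompts, resolution, and sampler settings are placeholders, not my exact values.

```python
# refiner_same_model.py - sketch of a two-pass "same model as refiner" prompt,
# sent to a locally running ComfyUI instance via its API (default port 8188).
# Checkpoint, prompts and sampler settings are placeholders.
import json
import urllib.request

CKPT = "yourModel.safetensors"          # placeholder checkpoint file
POS = "macro photo of a gecko eye"      # placeholder positive prompt
NEG = "blurry, lowres"                  # placeholder negative prompt

prompt = {
    # Load the checkpoint once; both sampling passes reuse it.
    "1": {"class_type": "CheckpointLoaderSimple", "inputs": {"ckpt_name": CKPT}},
    "2": {"class_type": "CLIPTextEncode", "inputs": {"text": POS, "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode", "inputs": {"text": NEG, "clip": ["1", 1]}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 768, "height": 768, "batch_size": 1}},
    # Pass 1: normal generation.
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["4", 0], "seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
    # Pass 2: "refiner" = the same model re-denoising the latent at low denoise.
    "6": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["5", 0], "seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 0.35}},
    "7": {"class_type": "VAEDecode", "inputs": {"samples": ["6", 0], "vae": ["1", 2]}},
    "8": {"class_type": "SaveImage",
          "inputs": {"images": ["7", 0], "filename_prefix": "refined"}},
}

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": prompt}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode("utf-8"))
```

The denoise value on the second pass is the main knob: low values keep the composition from the first pass and only clean up fine detail, while higher values let the "refiner" rewrite more of the image.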
Edit it as you wish, and tell me if you were able to improve it further.
It seems the price for less RAM is more VRAM use.
The process is slow, but effective.
Normal generation:
Refiner generation: