
FIX FP16 Errors SDXL - Lower Memory use! --- sdxl-vae-fp16-fix by madebyollin

Updated: Oct 6, 2024
Tags: base model, vae, base, sdxl
Type: VAE
Format: SafeTensor (verified)
Published: Jan 12, 2024
Base Model: SDXL 1.0
Hash (AutoV2): 235745AF8D

"As good as SDXL VAE but runs twice as fast and uses significantly less memory." https://huggingface.co/madebyollin/sdxl-vae-fp16-fix/discussions/7

"Same license on stable-diffusion-xl-base-1.0

same vae license on sdxl-vae-fp16-fix

Troubleshooting:

Do not use the refiner with a VAE built in.
Try these launch parameters: --medvram --opt-split-attention --xformers

SDXL-VAE-FP16-Fix is the [SDXL VAE](https://huggingface.co/stabilityai/sdxl-vae), but modified to run in fp16 precision without generating NaNs.
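
If you load models through the diffusers library instead of a WebUI, a minimal sketch of using the fixed VAE might look like the following (assuming diffusers, PyTorch, and a CUDA GPU; the model IDs are the public Hugging Face repos linked above):

```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# Load the fp16-fix VAE in half precision; the stock SDXL VAE would
# produce NaNs (black images) at this precision.
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16
)

# Attach it to the SDXL base pipeline, replacing the built-in VAE.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a photo of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```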

Details:

SDXL-VAE generates NaNs in fp16 because the internal activation values are too big.
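
For illustration only (this is not part of the fix itself), one way to see that overflow is to register forward hooks on the stock VAE and flag any layer whose peak activation exceeds fp16's maximum representable value (about 65504); the sketch below assumes diffusers, PyTorch, and a random latent as a stand-in for a real one:

```python
import torch
from diffusers import AutoencoderKL

FP16_MAX = torch.finfo(torch.float16).max  # 65504.0

# Stock SDXL VAE, loaded in fp32 so the decode itself does not blow up.
vae = AutoencoderKL.from_pretrained("stabilityai/sdxl-vae")

def report_overflow(name):
    def hook(module, inputs, output):
        if isinstance(output, torch.Tensor):
            peak = output.abs().max().item()
            if peak > FP16_MAX:
                print(f"{name}: peak activation {peak:.0f} would overflow fp16")
    return hook

for name, module in vae.named_modules():
    module.register_forward_hook(report_overflow(name))

# Any layer flagged here would overflow to inf (and then NaN) if the
# same decode were run in fp16.
with torch.no_grad():
    vae.decode(torch.randn(1, 4, 64, 64))
```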

SDXL-VAE-FP16-Fix was created by finetuning the SDXL-VAE to:

  1. keep the final output the same, but

  2. make the internal activation values smaller, by

  3. scaling down weights and biases within the network

There are slight discrepancies between the output of SDXL-VAE-FP16-Fix and SDXL-VAE, but the decoded images should be close enough for most purposes." - bdsqlsz
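
To get a feel for how small those discrepancies are, one could decode the same latent with both VAEs and compare the results in pixel space; a rough sketch, assuming diffusers, a CUDA GPU, and a random latent in place of a real generation:

```python
import torch
from diffusers import AutoencoderKL

original = AutoencoderKL.from_pretrained("stabilityai/sdxl-vae").to("cuda")
fixed = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16
).to("cuda")

latent = torch.randn(1, 4, 64, 64, device="cuda")

with torch.no_grad():
    img_original = original.decode(latent.float()).sample
    img_fixed = fixed.decode(latent.half()).sample.float()

# Mean absolute pixel difference -- expect a small but nonzero value.
print((img_original - img_fixed).abs().mean().item())
```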


NOT MY WORK - REUPLOADED HERE FOR EASE OF USE


COMMISSIONS NOW ACCEPTED!

I have been away saving up to upgrade my PC; please help fund more work:

https://www.patreon.com/nucleardiffusion

https://ko-fi.com/nucleardiffusion