
Base Set of Models for ONNX

Type: Checkpoint Merge
Published: Aug 9, 2023
Base Model: SD 1.5
Hash (AutoV2): B46E8EA16F
Creator: NeusZ

Due to the absence of ONNX models on Civitai, I'm uploading this one for everyone who wants to test this setup.

The main model is a merge whose exact origins I no longer remember, but it is based on SD 1.5 models.

All images have been generated without later editing; only Conan & Red Sonja needed additional steps in its creation (independent generation of each character, summing the latents, and restoring the faces). No LoRAs, no textual inversion, and no other add-ons or tools.

I've uploaded three versions (a short loading sketch follows the list):

  • Not Optimized: consumes more VRAM but loads the model faster (~10 s); recommended for testing. E.g., 640x640 on a 4 GB card.

  • Mem Optimized: lower VRAM consumption, but a longer initial loading time (up to 1 min per initial model load). E.g., with 4 GB you might reach 1024x768 with no issue.

  • Latent Consistency: needs few steps (6-12), and each step is 40-50% faster than a normal step; it consumes a bit more memory than the Not Optimized model. This model is an fp16 replica of "SimianLuo/LCM_Dreamshaper_v7", not a merge of my own.
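If you'd rather drive these files outside my UI, below is a minimal sketch using the diffusers ONNX pipeline; it assumes diffusers and onnxruntime are installed and that the zip has been extracted to a local folder (the path, prompt and provider here are placeholder examples, not part of this upload):

```python
# Minimal sketch: loading one of the unzipped ONNX models with diffusers.
from diffusers import OnnxStableDiffusionPipeline

pipe = OnnxStableDiffusionPipeline.from_pretrained(
    "./onnx-base-model",              # placeholder: folder the zip was extracted to
    provider="DmlExecutionProvider",  # or "CUDAExecutionProvider" / "CPUExecutionProvider"
)

# "Not Optimized" and "Mem Optimized" load the same way; only VRAM use and
# initial load time differ, as described above.
image = pipe(
    "a warrior on a cliff at sunset",
    width=640, height=640,            # roughly what a 4 GB card handles on the Not Optimized build
    num_inference_steps=25,
    guidance_scale=7.5,
).images[0]
image.save("test.png")
```

For the Latent Consistency version the same call applies, but with num_inference_steps in the 6-12 range and a much lower guidance_scale (LCM models are usually run at around 1-2).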

Also, if you want to test them, you can check out my UI:

https://github.com/NeusZimmer/ONNX-ModularUI-StableDiffusion/

Feedback will be much appreciated.

**LCM: a model following the full description in its papers; models that merely have LCM-LoRA weights merged in were already working.

As SD in ONNX requires a directory structure to work, I've uploaded the main models as zip files, although Civitai only lets me mark them as training data: those zips are the main models.
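For orientation, each zip is meant to unpack into the usual diffusers-style ONNX layout, roughly like this (a sketch of the standard export structure, not an exact listing of these files):

```
onnx-base-model/
├── model_index.json
├── scheduler/scheduler_config.json
├── tokenizer/...
├── text_encoder/model.onnx
├── unet/model.onnx
├── vae_encoder/model.onnx
└── vae_decoder/model.onnx
```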

PS: This works for SD 1.x and SD 2.x models, but not SDXL, as I'm currently unable to convert SDXL models to ONNX fp16.
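For anyone who wants to produce an ONNX export of another SD 1.x / 2.x checkpoint themselves, here is a minimal sketch using Hugging Face optimum; this is not the exact script behind these uploads, the model id and output folder are examples, and the extra fp16 conversion pass mentioned above is not shown:

```python
# Minimal sketch: exporting an SD 1.x / 2.x diffusers model to ONNX with optimum.
# Requires the optimum[onnxruntime] package.
from optimum.onnxruntime import ORTStableDiffusionPipeline

pipe = ORTStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # any SD 1.x / 2.x repo id or local diffusers folder
    export=True,                       # convert the PyTorch weights to ONNX while loading
)
pipe.save_pretrained("./sd15-onnx")    # writes unet/, text_encoder/, vae_*/ ... as .onnx graphs
```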

I've uploaded:

  • 3x main base models

  • VAE encoder & decoder

  • Text encoder (CLIP skip 1)

Pending upload (but available on Hugging Face): ControlNet base models, Danbooru tagging, and text encoders for CLIP skip 2, 3 and 4.