Type |
Stats | 1,438 · 211
Reviews | 253
Published | Aug 2, 2023
Base Model |
Training | Steps: 3,900
Usage Tips | Clip Skip: 1
Trigger Words | yumeko jabami
Hash | AutoV2 C8E6CC1517
I'm back ^.^
P.P.S. New models ogipote (2.5D) and ogipote (2.5D)_v2.0 are on Boosty.
I made a video on how to achieve the same quality or simply reproduce my art.
P.S. If you want to support me, you can subscribe to my Patreon and get builds earlier than they come out here (versions on Patreon will always be one ahead of the ones here). This helps me a lot in upgrading my video card over time so I can make models even better and faster (mainly better).
Eta noise seed delta: 31337
LoRA weight: 0.5-0.6
Use Abyssorangemix3aom3_aom3a3.safetensors with one of these negative prompts:
(worst quality, low quality, extra digits:1.4) or (worst quality, low quality:1.4)
Trigger words: yumeko jabami
Use hires fix at whatever scale your video card can handle without running out of memory. With 8 GB of VRAM that means 512x768 with a hires fix scale of 2-2.15 and the Latent (nearest-exact) upscaler; the picture will be much more detailed. If you want a style close to the original but with more detail, don't set Denoising strength above 0.6, but to avoid artifacts don't go too low either; 0.5 is optimal.
CFG Scale: 5-7
Training was done at a resolution of 512x768, so there is no point in using 512x512.
Can be combined with my other LoRAs.
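For anyone generating outside the webui, here is a minimal sketch of the same settings with the diffusers library. This is not the author's AUTOMATIC1111 workflow, and the LoRA filename and prompt text are placeholders; only the base checkpoint, LoRA weight, resolution, CFG scale, and negative prompt come from the notes above.

```python
# Minimal diffusers sketch of the recommended settings; not the author's
# AUTOMATIC1111 workflow. File names and prompt text are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "Abyssorangemix3aom3_aom3a3.safetensors",  # recommended base checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Load the LoRA file (placeholder name) and apply it at the 0.5-0.6 weight.
pipe.load_lora_weights(".", weight_name="yumeko_jabami_lora.safetensors")

image = pipe(
    prompt="yumeko jabami, masterpiece, best quality",  # trigger word included
    # The webui-style (worst quality, low quality:1.4) emphasis syntax is not
    # parsed by plain diffusers, so the negative prompt is given as plain text.
    negative_prompt="worst quality, low quality, extra digits",
    width=512, height=768,                    # training resolution
    guidance_scale=6.0,                       # CFG Scale 5-7
    cross_attention_kwargs={"scale": 0.6},    # LoRA weight 0.5-0.6
).images[0]
image.save("yumeko_jabami.png")
# Eta noise seed delta (31337) and hires fix are webui features with no direct
# equivalent in this basic call; Clip Skip 1 is already the default behaviour.
```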
How to use LoRAs in auto1111:
- Update the webui (use git pull like here, or redownload it)
- Copy the file to stable-diffusion-webui/models/lora
- Select your LoRA like in this video
- Make sure to change the weight (by default it's :1, which is usually too high)
*Information taken from Lykon
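Once the file is in place, all that's left is to add the LoRA tag and the trigger word to the prompt. An example (the file name inside the tag is just a placeholder; use the actual name of the downloaded file, without the extension):

yumeko jabami, masterpiece, best quality <lora:your_lora_filename:0.6>
Negative prompt: (worst quality, low quality, extra digits:1.4)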
Please post your work, with or without comments; it will help me improve. Thanks!
If you like my work, click on the heart above. I will be pleased :3