Type | LoRA |
Stats | 212 2,052 |
Reviews | 29 |
Published | Jun 23, 2024 |
Base Model | Pony Diffusion V6 XL |
Training | Steps: 8,160 · Epochs: 10 |
Usage Tips | Clip Skip: 2 · Strength: 1 |
Hash | AutoV2: 93503AC96C |
Virtual Diffusion LoRA Model Card
Model Overview
Name: Virtual Diffusion
Activation Tag: None
Data: Self-made data, sourced data, Nijijourney 3D, synthetic data
Purpose: Push images toward a stylized, semi-realistic 3D look (in theory, anyway; we used an unusual training approach and don't fully remember what it was originally meant to do).
Info: The model appears to treat "solo, realistic" as its keywords, because the CGI look gets stronger when you use it at higher strengths. It's interesting because the concept is similar to the NijiCGI one, except that this one does respond to the solo/realistic tags (we're not sure why). We may have accidentally left keep tokens enabled during training.
Use case: Use it on its own, or in conjunction with another LoRA, to add 3D semi-realistic and 2.5D styles to your images (see the loading sketch below).
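If you generate with diffusers instead of A1111, a minimal loading sketch might look like the following. This is not from the card: the checkpoint and LoRA file names are assumed from the training info above, the prompt is only an example, and clip_skip in the pipeline call requires a recent diffusers release.

```python
# Minimal sketch: apply the Virtual Diffusion LoRA at strength 1 with clip skip 2,
# as the card's usage tips suggest. File names below are assumptions; point them
# at wherever you keep the Pony Diffusion V6 XL checkpoint and the LoRA file.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_single_file(
    "ponyDiffusionV6XL.safetensors", torch_dtype=torch.float16
).to("cuda")

# Load the LoRA and set its weight to 1.0 (requires the peft package).
pipe.load_lora_weights("Virtual_3d_Diffusion_Update.safetensors",
                       adapter_name="virtual_diffusion")
pipe.set_adapters(["virtual_diffusion"], adapter_weights=[1.0])

# "solo, realistic" seem to act as de-facto trigger tags; clip_skip=2 matches
# the card's usage tip.
image = pipe("score_9, solo, realistic, 3d, virtual world portrait",
             clip_skip=2, num_inference_steps=30).images[0]
image.save("virtual_diffusion_sample.png")
```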
Training Information
Output Name: Virtual_3d_Diffusion_Update
Model Name: ponyDiffusionV6XL.safetensors
Network Module: networks.lora
Optimizer: Prodigy (decouple=True, weight_decay=0.01, betas=[0.9, 0.999], d_coef=2, use_bias_correction=True, safeguard_warmup=True)
Learning Rate Scheduler: Cosine with restarts
Network Dimensions: 16
Network Alpha: 16
Epochs: 10
Steps: 8160
Learning Rate: 0.75
Text Encoder Learning Rate: 0.75
UNet Learning Rate: 0.75
Noise Offset: 0.0357
Minimum SNR Gamma: 8
Training Started: June 8, 2024, 04:38 UTC
Training Finished: June 8, 2024, 09:21 UTC
Training Time: 4 hours 42 minutes 54 seconds
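The parameter names above (networks.lora, Prodigy, min SNR gamma) look like kohya-ss sd-scripts settings, though the card does not say which trainer was used. A rough, non-authoritative sketch of an equivalent launch command, with placeholder dataset and output paths, could be:

```python
# Sketch only: reproduces the listed hyperparameters as a kohya-ss sd-scripts
# invocation. The dataset/output paths are placeholders, and resolution, captions,
# bucketing, etc. are not specified on the card and would still need to be set.
import subprocess

cmd = [
    "accelerate", "launch", "sdxl_train_network.py",
    "--pretrained_model_name_or_path", "ponyDiffusionV6XL.safetensors",
    "--train_data_dir", "./dataset",          # placeholder
    "--output_dir", "./output",               # placeholder
    "--output_name", "Virtual_3d_Diffusion_Update",
    "--network_module", "networks.lora",
    "--network_dim", "16",
    "--network_alpha", "16",
    "--optimizer_type", "Prodigy",
    "--optimizer_args", "decouple=True", "weight_decay=0.01",
    "betas=0.9,0.999", "d_coef=2", "use_bias_correction=True",
    "safeguard_warmup=True",
    "--learning_rate", "0.75",
    "--unet_lr", "0.75",
    "--text_encoder_lr", "0.75",
    "--lr_scheduler", "cosine_with_restarts",
    "--noise_offset", "0.0357",
    "--min_snr_gamma", "8",
    "--max_train_epochs", "10",
]
subprocess.run(cmd, check=True)
```

With Prodigy, a learning rate of 0.75 acts as a multiplier on the adaptively estimated step size rather than a raw learning rate, which is why it is far larger than typical AdamW values.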
Model Description
Virtual Diffusion is a LoRA trained on a mix of self-made data, curated content from Flickr and Pinterest, Nijijourney 3D resources, and synthetic data. It is designed to push images toward a virtual, CGI-like aesthetic inspired by Second Life and similar virtual environments.
Usage Notes
This model has undergone minimal testing and is primarily experimental. It leans toward images with a virtual, imaginative quality, reflecting training data that spans digital art, 3D renders, and virtual-world aesthetics.
About the Creator
The Virtual Diffusion model is developed by the Duskfall Portal Crew, a DID system with over 300 alters, navigating life with DID, ADHD, Autism, and CPTSD. They explore the potential of AI to break down barriers and enhance mental health through creative expression and identity exploration.
Ethical Use and Licensing
Users should be mindful of ethical considerations when using the Virtual Diffusion model, respecting intellectual property rights and promoting inclusivity in creative expressions.
Join Our Community
Website: End Media
Discord: Join our Discord
Backups: Hugging Face
Support Us: Send a Pizza
Community Groups:
DeviantArt Group: DeviantArt Group
Subreddit: Reddit
Embeddings to Improve Quality
Negative Embeddings: Use scenario-specific embeddings to refine outputs.
Positive Embeddings: Enhance image quality with these embeddings.
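For diffusers users, SDXL-style embeddings ship weights for both text encoders, so each is loaded separately. The sketch below follows the pattern from the diffusers textual-inversion documentation; the file name and the "negXL" token are placeholders, not embeddings shipped with this model.

```python
# Sketch: load a negative textual-inversion embedding into an SDXL pipeline.
# "negative_embedding.safetensors" and the token "negXL" are hypothetical names.
from safetensors.torch import load_file

state = load_file("negative_embedding.safetensors")   # placeholder file
pipe.load_textual_inversion(state["clip_g"], token="negXL",
                            text_encoder=pipe.text_encoder_2,
                            tokenizer=pipe.tokenizer_2)
pipe.load_textual_inversion(state["clip_l"], token="negXL",
                            text_encoder=pipe.text_encoder,
                            tokenizer=pipe.tokenizer)

# Reference the token in the negative prompt to apply it.
image = pipe("score_9, solo, realistic, 3d portrait",
             negative_prompt="negXL").images[0]
```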
Extensions
ADetailer: ADetailer GitHub
Usage: Use this extension to enhance and refine images, but use sparingly to avoid over-processing with SDXL.
Batchlinks: Batchlinks for A1111
Description: Manage multiple links when running A1111 locally or on a server.
Addon: @nocrypt Addon
Additional Extensions: