
Aerial (Animation and img2img)

Verified: PickleTensor
Type: Checkpoint Trained
Published: Mar 18, 2023
Base Model: SD 1.5
Trigger Words: aerialstyle
Hash (AutoV2): 115F259377

Did you know that there was no graphic design, from kinetics to Y2K, in the LAION dataset behind Stable Diffusion? Yet people are still finetuning for portrait and hentai generation!

This is the first SD model trained specifically for motion design and music video production. Here's an example:

You can check out this breakdown if you're interested in more details:

The model is very capable for high-budget MV production, posters, album covers, etc. Still, I thought it would be right to publish all the materials in the public domain, as a tribute to the open-source Stable Diffusion community.

Feel free to ask if anything about the workflow needs clarifying :)

Aerial v1.0

  • Specialized in Deforum and img2img stylization

  • Trained on graphic design and art by me

  • Free to use, no licensing or credits required, forever

Use the trigger word "aerialstyle" to specifically invoke the model's style.
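
As a rough illustration (not my exact settings), here is how the trigger word could be placed in a prompt when loading the checkpoint with the diffusers library. The local file path, prompts, and sampler parameters are assumptions for the sketch.

# Minimal txt2img sketch with diffusers; "aerial_v1.ckpt" is a hypothetical
# local path to this checkpoint, and all parameters are illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "aerial_v1.ckpt", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="aerialstyle, bold graphic design poster, geometric shapes",
    negative_prompt="photo, portrait",
    num_inference_steps=25,
    guidance_scale=7.0,
).images[0]
image.save("aerial_txt2img.png")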

I don't recommend using the model for plain txt2img because of its abstract nature; nonetheless, below are examples of images generated in txt2img. To get really interesting results, generate a base image with another model and then give it a style with Aerial (as in the sketch below), or just use it with ControlNet. This model works really well with ControlNets :)
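
A minimal sketch, assuming the diffusers library, of the two-step workflow described above: produce a base image with another SD 1.5 checkpoint, then restyle it with Aerial via img2img. The model paths, prompts, and denoising strength are illustrative assumptions, not my settings.

import torch
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

# Step 1: base composition from any general-purpose SD 1.5 checkpoint
# ("runwayml/stable-diffusion-v1-5" here is just a stand-in).
base = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
base_image = base("a dancer on a dark stage, dramatic lighting").images[0]

# Step 2: restyle the base image with Aerial, keeping the composition while
# the trigger word pulls the look toward the model's graphic style.
aerial = StableDiffusionImg2ImgPipeline.from_single_file(
    "aerial_v1.ckpt", torch_dtype=torch.float16   # hypothetical local path
).to("cuda")
styled = aerial(
    prompt="aerialstyle, graphic design, bold shapes, kinetic typography",
    image=base_image,
    strength=0.6,        # lower values preserve more of the base image
    guidance_scale=7.0,
).images[0]
styled.save("aerial_img2img.png")

The same idea carries over to ControlNet: load the Aerial weights into a ControlNet-enabled pipeline (for example diffusers' StableDiffusionControlNetPipeline) and feed it your control image instead of going through img2img.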