Velocium

Connection and isolation: each can be a prison or an escape; both are needed to understand one another.

Thank you to CivitAI and competition sponsors for making these tools accessible and enabling independent artists.

When I wrote this in 2023, it was a journal entry I had no intention of turning into a film; it was an emotionally charged vent. When I began to discover the possibilities of creating animation with machine learning tools, I thought this piece of writing would be a good one to turn into an animated film.

It's about connection and isolation: how each can be a prison or an escape, and how both are needed for a balanced understanding of the other.

Films are an expansion of language: we stack images and sounds on top of written words to better convey a feeling, yet what is felt is always lost in translation. Film is a language that gets us closer to the original feeling behind the intended expression, yet no language can do this perfectly. I love that AI gives me more access to this language, and I'm proud of this film's attempt to illustrate the complexities I have felt inside myself.

I love the look of this film, but it's not a demonstration of AI's image capabilities. My approach was a simple 540p retro-anime style; I don't think it's anything aesthetically groundbreaking, and I find the value comes from the story. Creating Velocium was a practice of narrative structure told through sequential imagery, without having to jump through the usual hoops of production, which I still love dearly and hope to gain more access to by creating a body of work with generative AI.

I'm a proponent of open source; more accessible creative tools are a net positive. Creators who work with AI have the unique opportunity not just to make works of art, but also to build technology for others to use and expand upon. In this way, we are reverse gatekeeping, giving opportunities to those who otherwise face barriers to entry.

"The only things we keep are the ones we give away" -Waite Phillips

Check out the distilled LoRA that was trained on the final imagery for the film:

VLCM (Velocium) LoRA

Credits

Written, Animated, Edited, and Directed by Calvin Herbst @calvin_herbst

Sound Studio: ADY.AUDIO @ady.audio

Sound Design and Mix by Daryn Ady @ydanyrad

Audio Producer: Kamilla Azh @kamillazh

Song "Falaise" by Floating Points

PROCESS

-This film was derived from an excerpt of a 2023 journal entry.

-Images were generated using text-to-image with the Black Forest Labs Flux Dev base model via ComfyUI, with tweaks in Adobe Photoshop (a minimal sketch of a comparable pipeline appears after this list).

-Images were animated using various image-to-video tools, including Kling 1.6, Kling 1.5 start/end frame, Hailuo, Haiper, Runway, and Luma Dream Machine.

-Additional assets were generated from Gaussian splats using Luma and processed in ComfyUI with AnimateDiff, IPAdapters, and ControlNets (a sketch of a comparable video restyling step also follows this list).

-Texture, grain, and color were added with Dehancer Pro in DaVinci Resolve and Adobe After Effects.

-Voiceover was created with ElevenLabs.

-A mix of open-source and custom-trained LoRAs was used in the image generation pipeline (the first sketch below shows how LoRA weights attach to a base model).

-The highest volume of training data used to create the images was encoded into the Flux Dev base model, which draws on billions of images across billions of parameters.

-The most strongly influential training data used to create the images was encoded into LoRAs.

-The LoRAs in the pipeline included training data from past and modern animation, live-action, photography, concept art, and experimental digital imagery.

-Some of the LoRAs I trained were distilled: trained on images generated via a separate model pipeline.

-The software, workflows, and custom nodes used in the generative image process were built entirely on open-source tools, created by people willing to give away their work for free, via repositories like Civitai, Hugging Face, and GitHub.
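As a rough illustration of the text-to-image step above: the film's frames came out of a ComfyUI graph, but here is a minimal sketch of a comparable Flux Dev pipeline with stacked style LoRAs, written against the Hugging Face diffusers library instead. The LoRA repository paths, adapter names, weights, and prompt are hypothetical placeholders, not the film's actual assets or workflow.

```python
import torch
from diffusers import FluxPipeline

# Load the Flux Dev base model (gated on Hugging Face; requires accepting the license).
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Attach style LoRAs on top of the base model. The repo paths and
# adapter names here are hypothetical stand-ins for the film's LoRAs.
pipe.load_lora_weights("your-user/retro-anime-lora", adapter_name="retro_anime")
pipe.load_lora_weights("your-user/vlcm-lora", adapter_name="vlcm")
pipe.set_adapters(["retro_anime", "vlcm"], adapter_weights=[0.8, 0.6])

# 960x544 approximates the film's 540p target (Flux wants multiples of 16).
image = pipe(
    prompt="retro anime still, lone figure in a neon-lit corridor, 1990s cel shading",
    height=544,
    width=960,
    guidance_scale=3.5,
    num_inference_steps=28,
    generator=torch.Generator("cuda").manual_seed(42),
).images[0]
image.save("frame_001.png")
```

Stacking multiple adapters with `set_adapters` is what lets one base model carry several sources of influence at once, which mirrors the mix of open-source and custom-trained LoRAs described above.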
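And for the splat-derived footage: the actual restyling happened in ComfyUI, but as a rough stand-in, this sketch runs diffusers' AnimateDiff video-to-video pipeline over rendered frames. The checkpoint, motion adapter, file paths, and prompt are assumptions, and the IPAdapter/ControlNet conditioning mentioned in the list is omitted for brevity.

```python
import torch
from diffusers import AnimateDiffVideoToVideoPipeline, DDIMScheduler, MotionAdapter
from diffusers.utils import export_to_gif, load_video

# Motion module that adds temporal layers to a Stable Diffusion 1.5 base model.
adapter = MotionAdapter.from_pretrained(
    "guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16
)

pipe = AnimateDiffVideoToVideoPipeline.from_pretrained(
    "emilianJR/epiCRealism",  # any SD 1.5 checkpoint can stand in here
    motion_adapter=adapter,
    torch_dtype=torch.float16,
).to("cuda")
pipe.scheduler = DDIMScheduler.from_config(
    pipe.scheduler.config, beta_schedule="linear", clip_sample=False
)

# Frames rendered from a Gaussian splat fly-through (hypothetical file).
frames = load_video("splat_flythrough.mp4")

# Re-render the footage toward the target style; strength controls how
# far the output may drift from the source frames.
result = pipe(
    video=frames,
    prompt="retro anime cityscape, 1990s cel shading, film grain",
    strength=0.6,
    guidance_scale=7.5,
    num_inference_steps=25,
).frames[0]

export_to_gif(result, "restyled_flythrough.gif")
```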