Velocium

Connection and isolation: each can be a prison or an escape, and both are needed to understand one another.

This project was derived from a 2023 journal entry and made for the Project Odyssey season 2 competition.

Thank you to CivitAI and competition sponsors for making these tools accessible and enabling independent artists.

The most interesting part of this journey has been reckoning with the feelings that inspired the story long after they have changed. In making something that originated from a personal place, it's tempting to identify with the art, and to put the story on the shelf because I don't feel it fully reflects something I would currently write. However, the soul of the project, and its ultimate value to me, does not come from the narrative itself, but from seeing it in retrospect as a feeling, once present, that is now a story.

Check out the LoRA that was trained on the final imagery for the film:

VLCM (Velocium) LoRA

PROCESS

-This film was derived from an excerpt of a 2023 journal entry written with pen and paper.

-Image generation was done using the FluxDev base model via ComfyUI, with tweaks in Adobe Photoshop.

-100 shots were selected out of thousands of generated images.

-Videos were animated with Kling 1.6 i2v, Kling 1.5 start-frame-end-frame, Luma Dream Machine i2v & start-frame-end-frame, Hailuo img2video, Luma NeRF 3D scans and camera moves, Runway ML, and AnimateDiff with IP-Adapter and ControlNets.

-Texture and grain added with Dehancer Pro in DaVinci Resolve.

-Voice-over created with ElevenLabs.

-Images were created at 540p, upscaled to 2K, and downsampled to 720p during the animation process. The final outputs were left at 720p because I felt the images of this story looked best with a little impressionistic softness; it helps wash away the uncanny nature of AI for this particular aesthetic.
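The 540p-to-2K-to-720p resizing step can be sketched as simple resolution math. This is a minimal illustration, not the actual pipeline code: the 1440- and 720-pixel target heights are assumptions based on common 16:9 deliverables, and the `pipeline_resolutions` helper is hypothetical.

```python
def pipeline_resolutions(gen_w: int, gen_h: int) -> dict:
    """Frame sizes for the 540p -> 2K -> 720p workflow described above."""

    def scale_to_height(w: int, h: int, target_h: int) -> tuple:
        # keep the aspect ratio; round width to an even number,
        # as video codecs generally expect even dimensions
        return (round(w * target_h / h / 2) * 2, target_h)

    return {
        "generated": (gen_w, gen_h),                      # raw generation
        "upscaled": scale_to_height(gen_w, gen_h, 1440),  # detail pass at "2K"
        "final": scale_to_height(gen_w, gen_h, 720),      # soft delivery res
    }
```

For example, a 960x540 generation maps to 2560x1440 for the upscale and 1280x720 for the final output.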

~

-A mix of open-source and custom-trained LoRAs were used in the model pipeline.

-The highest volume of training data behind the images is encoded into the FluxDev base model itself, which was trained on billions of images across billions of parameters.

-The strongest influencing training data used to make the images is encoded into LoRAs, both open source and custom-trained, with various network dimensions, learning rates, and sampling steps.

-The LoRAs in the pipeline included training data from past and modern animation, live-action, photography, concept art, and experimental imagery.

-Some of the LoRAs were trained on generated images created with a separate model pipeline.

-The node trees, software, pipelines, and tools used in the generative image process are based on open-source tools, built by those who were willing to give away what they created for free, using repositories like CivitAI, Hugging Face, and GitHub.
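The network dimensions, learning rates, and sampling steps mentioned above are the kind of knobs exposed by kohya-style LoRA trainers. A sketch of what such a config can look like (field names follow the kohya_ss / sd-scripts convention; the values are illustrative, not the ones used for this film):

```toml
# Illustrative LoRA training settings (kohya_ss-style); not the film's actual values
network_dim = 32         # LoRA rank: higher captures more detail, yields a larger file
network_alpha = 16       # scaling factor applied to the learned LoRA weights
learning_rate = 1e-4     # optimizer step size
max_train_steps = 2000   # total training steps
train_batch_size = 2
resolution = "1024,1024" # training image size
```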

The act of conceptualizing, tuning, curating, and editing this film draws on my time in independent and commercial production, and on years of photographing, sketching, drawing, painting, reading, telling stories, and observing the world.