„VR we are“ is immersive productivity and creativity software that brings familiar multimedia content (videos and images) into a Virtual Reality (VR) environment. It runs Artificial Intelligence (AI) models on the local computer, so most of the processing can be done offline.
Its key feature is the conversion of 2D images and videos into full stereo side-by-side (SBS) left-right (LR) format, which can be viewed in VR headsets or with glasses on 3D-capable TV displays.
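The idea behind such a 2D-to-SBS conversion can be sketched in a few lines: a depth map drives a horizontal pixel shift that synthesizes a left and a right view, which are then placed side by side. This is only a minimal illustration under assumed names (`to_sbs`, `max_shift`) and a deliberately simple per-row shift; it is not the actual converter code.

```python
import numpy as np

def to_sbs(image: np.ndarray, depth: np.ndarray, max_shift: int = 8) -> np.ndarray:
    """Synthesize a side-by-side (LR) stereo pair from a single 2D image.

    image: (H, W, 3) uint8 frame; depth: (H, W) floats in [0, 1],
    where 1.0 means "near". Nearer pixels get a larger disparity.
    """
    h, w, _ = image.shape
    xs = np.arange(w)
    left = np.empty_like(image)
    right = np.empty_like(image)
    for y in range(h):
        # per-pixel horizontal disparity derived from the depth map
        shift = (depth[y] * max_shift).astype(int)
        left[y] = image[y, np.clip(xs + shift, 0, w - 1)]
        right[y] = image[y, np.clip(xs - shift, 0, w - 1)]
    # left and right view concatenated horizontally: (H, 2*W, 3)
    return np.concatenate([left, right], axis=1)
```

With a flat (all-zero) depth map both views equal the input, so the output is simply the frame duplicated side by side; real depth maps produce the parallax that the VR viewer fuses into 3D.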
Other features can be used even without targeting a VR device, such as capturing metadata, upscaling, frame interpolation, various FFmpeg tasks, dubbing, or creating slideshow videos from images.
Since version 3.0, a user manual (PDF) is shipped; it is also included in the zip archive under the docs folder. For details, consult the manual or ask a question here.
Short overview
The following picture illustrates the building blocks of „VR we are“:
„VR we are“ builds on other software:
ComfyUI provides foundational open source software for the visual AI space. „VR we are“ uses it as its distribution and execution platform.
Stereoscopic is a custom node package for ComfyUI containing the „VR we are“ software. For the custom node I got help from iablunoshka, who is responsible for the high performance of the SBS converter. Our first tests were made with the nodes of SamSeen.
FFmpeg is a command line tool providing a multimedia framework for video and image manipulation.
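FFmpeg is driven through its command line, so pipeline stages can invoke it as a subprocess. The sketch below builds a rescaling command from Python; the file names are illustrative and this is an assumed pattern, not how „VR we are“ invokes FFmpeg internally.

```python
import subprocess

def ffmpeg_scale_cmd(src: str, dst: str, width: int, height: int) -> list[str]:
    """Build an ffmpeg command that rescales a video.

    -vf scale=W:H applies the scale filter; -y overwrites dst if present.
    """
    return ["ffmpeg", "-y", "-i", src, "-vf", f"scale={width}:{height}", dst]

def run(cmd: list[str]) -> int:
    # Actually invoke ffmpeg (requires it on PATH); returns the exit code.
    return subprocess.run(cmd).returncode
```

Building the argument list separately from running it keeps the stage testable without FFmpeg installed.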
Exiftool is a command line tool for reading and editing multimedia meta information.
Google Trans is an optional service, which requires being online, to translate text (into your own locale).
Topaz Video AI (TVAI) is an optional professional product that is used when available. It offers a massive speed and quality boost for scaling and video interpolation (frame rate increase).
Git Bash is an application package for Microsoft Windows environments that provides an emulation layer for a Git command line experience; it is required to execute „VR we are“.
„VR we are“ waits for multimedia files to be placed in input funnels (file folders) for processing. By default, the files are processed in a non-linear pipeline and land in output baskets (file folders). Pipelining through the stages can be customized or even completely turned off.
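The funnel-and-basket model described above can be sketched as a single pipeline pass: every file waiting in an input folder is handed to a stage and its result lands in an output folder. The function name, the stage callable, and the one-shot loop are assumptions for illustration, not the actual implementation.

```python
from pathlib import Path

def process_funnel(funnel: Path, basket: Path, stage) -> int:
    """Apply `stage(src, dst)` to every file in the input funnel and
    place the result in the output basket; returns the file count."""
    basket.mkdir(parents=True, exist_ok=True)
    done = 0
    for src in sorted(funnel.iterdir()):
        if src.is_file():
            dst = basket / src.name
            stage(src, dst)   # e.g. an SBS conversion or an FFmpeg task
            src.unlink()      # consume the input so it is not reprocessed
            done += 1
    return done
```

A real watcher would run such a pass in a loop (or react to filesystem events) and chain several funnel/basket pairs to form the multi-stage pipeline.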
You can extend the tool to your own needs, since some stages can be created by users based on predefined blueprints. This is very handy for straightforward tasks that you have to do over and over again and may want to integrate at some place in your pipeline. In 3.0, two video-to-video blueprints exist for simple FFmpeg tasks. ComfyUI workflow tasks will follow in 3.1.
Stages in 3.0:
(c) 2025 Fortuna Cournot, https://www.3d-gallery.org