You shot a video. The content is good. But the camera angle is wrong, too static, too flat, or just not the shot you needed.
A reshoot isn't an option. This workflow is.
Existing video in. Same scene, new camera angle out.
Run it now on Floyo!

Why This Workflow Is Different
Most video editing tools can crop, stabilize, or reframe. None of them can actually move the camera to a new position that wasn't in the original footage.
RecamMaster can. It understands the 3D structure of your scene and synthesizes what the video would look like from a completely different camera position. Wan2.1 handles the video generation, preserving content, motion, and detail while applying the new camera path.
no reshooting required
scene content and motion preserved throughout
camera type selectable from a dropdown, no manual keyframing
generates in about 4 minutes
How It Works
Wan2.1 is a video generation model built for video-to-video transformation. It processes your input video frame by frame, maintaining temporal consistency while applying changes guided by the camera control.
RecamMaster provides the camera control layer. It analyzes the depth and structure of your input video, then re-renders it from a new virtual camera position. The ReCamMasterDefaultCamera and CameraEmbed nodes inject the new camera path directly into the generation process.
The detailz-wan LoRA runs alongside to preserve fine detail and sharpness across the transformed frames.
Together they give you:
accurate perspective shift based on scene geometry
consistent subject identity and motion across frames
cinematic camera movements without manual animation
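Conceptually, the camera control layer reduces to a per-frame pose sequence that is injected into generation. The sketch below is illustrative only: the actual encoding used by the ReCamMasterDefaultCamera and CameraEmbed nodes is internal to the workflow, but a push-in, for example, can be thought of as one extrinsic matrix per frame, translating the camera toward the scene:

```python
import numpy as np

def push_in_path(num_frames: int, distance: float = 0.5) -> np.ndarray:
    """Illustrative camera path: a linear dolly toward the scene.

    Returns one 4x4 extrinsic matrix per frame. This only sketches
    the idea of a per-frame pose sequence; it is not the RecamMaster
    nodes' actual internal representation.
    """
    poses = []
    for t in np.linspace(0.0, 1.0, num_frames):
        pose = np.eye(4)
        pose[2, 3] = -t * distance  # move camera along z, toward the scene
        poses.append(pose)
    return np.stack(poses)

path = push_in_path(65)
print(path.shape)  # (65, 4, 4)
```

The first frame is the identity pose (the original camera), and each later frame moves a little further along the dolly, which is what keeps the motion smooth across the clip.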
Key Inputs
Your Video
Upload any MP4. The cleaner and more stable the source footage, the better the camera transformation result.
Works well with:
scenes with clear depth and spatial structure
talking head or portrait footage
product or environment shots
AI-generated video clips you want to reframe
Works less well with:
very fast-moving or highly chaotic footage
extremely low-resolution or heavily compressed source video
scenes with no clear depth separation between subject and background
Camera Type
Select your camera movement from the dropdown in the Set Camera node. Options include pan, tilt, push-in, pull-out, orbit, and bird's-eye shifts. No manual keyframing needed — pick the movement and run.
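Each dropdown option corresponds to a parameterized motion curve. Here is a hedged sketch of two such presets, a pan and an orbit; the function names and math are illustrative, not the Set Camera node's internals:

```python
import numpy as np

def pan_path(num_frames: int, degrees: float = 30.0) -> np.ndarray:
    """Per-frame yaw angles for a horizontal pan (illustrative)."""
    return np.linspace(0.0, degrees, num_frames)

def orbit_path(num_frames: int, radius: float = 2.0,
               degrees: float = 45.0) -> np.ndarray:
    """Per-frame camera positions circling the subject in the xz-plane,
    keeping a constant distance from the origin (illustrative)."""
    angles = np.radians(np.linspace(0.0, degrees, num_frames))
    return np.stack([radius * np.sin(angles),
                     np.zeros(num_frames),
                     radius * np.cos(angles)], axis=1)

positions = orbit_path(33)
print(positions.shape)  # (33, 3)
```

The practical point: a pan rotates the camera in place, while an orbit translates it around the subject, which is why orbits depend much more on the scene having clear depth separation.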
Frame Load Cap
How many frames to process in the output. Keep it reasonable for faster generation. More frames = longer processing time.
Skip First Frames
Start the transformation from a specific point in the video timeline. Useful if you want to process a specific segment rather than the full clip.
Select Every Nth Frame
Process every second, third, or Nth frame to speed up generation or create a stylized frame-rate effect.
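The three frame parameters above compose into one selection step. A minimal sketch of how they interact (the actual loader node may clamp or order these slightly differently):

```python
def select_frames(total_frames: int,
                  frame_load_cap: int = 0,
                  skip_first_frames: int = 0,
                  select_every_nth: int = 1) -> list[int]:
    """Pick which frame indices get processed.

    - skip_first_frames: where in the clip to start
    - select_every_nth: stride through the remaining frames
    - frame_load_cap: maximum number of frames returned (0 = no cap)
    """
    indices = list(range(skip_first_frames, total_frames, select_every_nth))
    if frame_load_cap > 0:
        indices = indices[:frame_load_cap]
    return indices

# 120-frame clip: start at frame 30, take every 2nd frame, cap at 10
print(select_frames(120, frame_load_cap=10,
                    skip_first_frames=30, select_every_nth=2))
# [30, 32, 34, 36, 38, 40, 42, 44, 46, 48]
```

This makes the cost model explicit: skipping and striding shrink the candidate set first, and the cap bounds whatever remains, so generation time scales with the final list length, not the clip length.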
What This Is Great For
Film and video production: Fix camera angles in post without reshooting. Turn a static locked-off shot into a slow push-in or orbit. Add cinematic movement to footage that didn't have it.
AI video refinement: Re-render AI-generated clips from a new camera perspective. Change a flat front-facing generation into a more dynamic angle.
Content creation: Reframe existing footage for different aspect ratios and platform formats while adding intentional camera movement.
Previsualization: Test how a scene reads from different camera positions before committing to a real shoot or final render.
What to Watch Out For
Complex, fast motion makes the transformation harder. RecamMaster reads scene geometry to reposition the camera, and fast, chaotic movement gives it less stable geometry to work from. Slower, more controlled footage produces cleaner results.
Large camera angle shifts in one pass can introduce artifacts. Moving 45° or less per generation produces more reliable results than attempting a full 180° repositioning in one step.
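If you do need a large repositioning, one practical approach is to chain passes, feeding each output back in as the next input. A small sketch of that planning step (the 45° figure is the rule of thumb above, not a hard limit):

```python
import math

def plan_passes(total_degrees: float, max_per_pass: float = 45.0) -> list[float]:
    """Split a large camera rotation into equal smaller passes.

    Each pass re-renders the previous pass's output, so artifacts
    from one oversized jump are avoided at the cost of extra runs.
    """
    n = math.ceil(abs(total_degrees) / max_per_pass)
    step = total_degrees / n
    return [step] * n

print(plan_passes(180))  # [45.0, 45.0, 45.0, 45.0]
```

Note the trade-off: each extra pass is another full generation (roughly 4 minutes each in this workflow), and quality can degrade slightly with every re-render, so use the fewest passes that stay within the per-pass limit.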
Frame count affects generation time directly. Start with a shorter clip (30–65 frames) to test your camera settings before committing to a full-length transformation.
The depth LoRA (wan2.1-1.3b-control-lora-depth) is what enables accurate perspective reconstruction. It must be loaded correctly or the camera movement will apply without proper spatial understanding.


