Current VR video technology uses two stationary cameras with wide-angle fisheye lenses to capture stereo video. Projecting the recorded images onto a VR headset display comes with several challenges, which HereSphere seeks to overcome.
First, the fisheye lenses produce optical distortions. These distortions are typically handled in post-production by converting the fisheye images to an equirectangular format. However, the wrong lens profile may be applied during the conversion, introducing unwanted distortions into the final image. HereSphere addresses this with a wide range of lens profiles with adjustable FOVs and the ability to convert between them. It even offers custom lens distortion curves, allowing it to handle any raw fisheye footage or equirectangular conversion.
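To illustrate the kind of mapping involved, here is a minimal sketch of converting an equirectangular pixel back to coordinates in an equidistant ("f-theta") fisheye image. This is not HereSphere's implementation; the function name and the choice of the equidistant lens model are assumptions for illustration, and a real converter would also apply a measured distortion curve.

```python
import math

def equirect_to_fisheye(u, v, out_w, out_h, fish_w, fish_h, fish_fov_deg):
    """Map an equirectangular pixel (u, v) to source coordinates in an
    equidistant fisheye image.  Returns None for directions outside the
    fisheye's field of view."""
    # Equirectangular pixel -> longitude/latitude in radians.
    lon = (u / out_w - 0.5) * 2.0 * math.pi      # -pi .. pi
    lat = (0.5 - v / out_h) * math.pi            # -pi/2 .. pi/2
    # Longitude/latitude -> 3D direction (z is the optical axis).
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    # Angle away from the optical axis.
    theta = math.acos(max(-1.0, min(1.0, z)))
    max_theta = math.radians(fish_fov_deg) / 2.0
    if theta > max_theta:
        return None
    # Equidistant model: image radius grows linearly with theta.
    r = theta / max_theta * (min(fish_w, fish_h) / 2.0)
    phi = math.atan2(y, x)
    fx = fish_w / 2.0 + r * math.cos(phi)
    fy = fish_h / 2.0 - r * math.sin(phi)
    return fx, fy
```

Applying the wrong model at this step (e.g. treating an equisolid lens as equidistant) warps every off-axis pixel, which is why selectable and custom lens profiles matter.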
Second, the positions of the stereo cameras are fixed at the time of recording. To perceive depth and scale properly, the human visual system depends on convergence (the angle through which the eyes rotate to fixate on an object). If the distance between the stereo cameras doesn't match the viewer's interpupillary distance (IPD), the convergence angle required to fixate on an object will not match reality. In addition, if the viewer rotates or moves their head, their eyes will no longer align with the position and orientation of the stationary cameras. This leads to double vision, eyestrain, and an inaccurate perception of depth and scale. HereSphere solves this with a mathematical model driven by real-time depth detection that adjusts the projection to match different IPDs and head orientations, and it even tracks the viewer's position to emulate motion with six degrees of freedom (6DOF).
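The scale error from a baseline/IPD mismatch can be quantified with simple geometry. The sketch below (function names are illustrative, not HereSphere's) assumes the convergence angle recorded by the cameras is replayed unchanged to the viewer's eyes, so perceived distance scales with the ratio of viewer IPD to camera baseline.

```python
import math

def convergence_angle(ipd_mm, distance_mm):
    """Angle (radians) through which two eyes separated by ipd_mm
    converge to fixate a point distance_mm straight ahead."""
    return 2.0 * math.atan2(ipd_mm / 2.0, distance_mm)

def perceived_distance(true_distance_mm, camera_baseline_mm, viewer_ipd_mm):
    """Distance a viewer perceives when footage shot with one stereo
    baseline is replayed to eyes with a different IPD, assuming the
    convergence angle is preserved."""
    angle = convergence_angle(camera_baseline_mm, true_distance_mm)
    # Invert the convergence relation for the viewer's own IPD.
    return (viewer_ipd_mm / 2.0) / math.tan(angle / 2.0)
```

Under this model the relation reduces to perceived = true × IPD / baseline: a 60 mm camera rig viewed by someone with a 66 mm IPD makes a 2 m object appear about 2.2 m away, which is the kind of mismatch an IPD-aware projection corrects.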
Third, there will be errors in the alignment and orientation of the stereo cameras relative to each other, as well as manufacturing errors in the placement of the camera photosensors and post-production stitching errors. Our eyes are remarkably perceptive: a fraction of a degree of rotational misalignment, a fraction of a millimeter of positional misalignment, or a few pixels of error during stitching can each cause eyestrain and distort the perception of depth and scale. HereSphere provides a comprehensive set of stitching and alignment settings to correct these errors. Its depth detection algorithm outputs distance readings and can be combined with an anaglyph view mode, letting the viewer easily see and adjust the images until they are perfectly aligned with convincing depth.
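The standard stereo relation makes it easy to see why a few pixels matter. The sketch below (an illustrative helper, not HereSphere's code) uses depth = focal × baseline / disparity: a stitching error of just two pixels shifts the implied distance of an object by well over a meter at typical ranges.

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Classic pinhole stereo relation: depth = f * B / d.
    focal_px is the focal length in pixels, baseline_mm the camera
    separation, disparity_px the horizontal offset between views."""
    if disparity_px <= 0:
        return float('inf')   # zero disparity means "at infinity"
    return focal_px * baseline_mm / disparity_px

# A 2-pixel stitching error on a 10-pixel disparity (f = 1000 px,
# B = 65 mm) moves the implied depth from 6.5 m to over 8 m.
true_depth = depth_from_disparity(1000, 65, 10)   # 6500 mm
shifted    = depth_from_disparity(1000, 65, 8)    # 8125 mm
```

Because depth varies inversely with disparity, small pixel errors matter most for distant objects, where disparities are already tiny.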
When all of these issues are handled, the resulting image becomes incredibly clear and free from eyestrain, and the sense of immersion is greatly enhanced.