Software IPD adjustment Pros and Cons

CV2:

The type of software IPD adjustment mentioned in the article above is different from the IPD you're thinking of. On an Oculus, scenes are rendered in real time, so in theory the software could work out where your eyes are actually located with respect to the screens and project accordingly. You could probably even correct chromatic aberration by predicting how the image will render through the lenses and generating frames with the RGB channels shifted to compensate. It doesn't really apply in our case, because the videos we're rendering were fixed when the scenes were recorded.
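Roughly, the RGB-shift idea looks like this (just a sketch; the scale factors are made up, and a real implementation would derive the per-channel distortion from the actual lens profile):

```python
import numpy as np
import cv2  # OpenCV, only used here for the remap

def shift_channels(img_bgr, r_scale=1.002, b_scale=0.998):
    """Radially rescale the red and blue channels around the image centre.

    Pre-distorting the channels in opposite directions is the basic idea
    behind software chromatic aberration correction; real scale factors
    would come from the headset's lens model, not constants.
    """
    h, w = img_bgr.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)

    def remap(channel, scale):
        # sample the channel at radially scaled coordinates
        map_x = (cx + (xs - cx) / scale).astype(np.float32)
        map_y = (cy + (ys - cy) / scale).astype(np.float32)
        return cv2.remap(channel, map_x, map_y, cv2.INTER_LINEAR)

    b, g, r = cv2.split(img_bgr)      # OpenCV images are BGR
    return cv2.merge([remap(b, b_scale), g, remap(r, r_scale)])
```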

There's also the IPD setting that Alex is using for the focusing solution. That just defines the geometry used to calculate how far your eyes turn inwards when looking at nearby objects. It's important for focusing, but it isn't the same IPD this article is talking about.
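The geometry behind that is simple (a sketch with illustrative numbers, not the values Alex's solution actually uses):

```python
import math

def convergence_angle_deg(ipd_mm, distance_mm):
    """Total inward rotation of the eyes when fixating a point straight ahead.

    With symmetric vergence each eye turns by atan((IPD/2) / distance),
    so the total convergence angle is twice that.
    """
    return math.degrees(2 * math.atan((ipd_mm / 2) / distance_mm))

# A 64 mm IPD looking at something 500 mm away converges ~7.3 degrees,
# while a 58 mm IPD converges ~6.6 degrees for the same object.
print(convergence_angle_deg(64, 500))
print(convergence_angle_deg(58, 500))
```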

Lastly, with depth-map reprojection you could re-render videos with a different IPD from the one they were originally recorded with, but the result will probably have artifacts and occasional holes.
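Conceptually it's something like this (a toy sketch that assumes a per-pixel depth map in metres and a pinhole focal length in pixels; the function and parameter names are made up):

```python
import numpy as np

def reproject_with_new_baseline(image, depth_m, baseline_shift_m, focal_px):
    """Shift each pixel by the disparity a sideways-moved virtual camera would see.

    disparity (px) = focal_px * baseline_shift_m / depth_m.  Pixels that
    nothing maps to stay black; those are the holes mentioned above, and
    a real renderer would have to fill them in somehow.
    """
    h, w = depth_m.shape
    out = np.zeros_like(image)
    disparity = np.round(focal_px * baseline_shift_m / depth_m).astype(int)
    xs = np.arange(w)
    for y in range(h):
        new_x = xs + disparity[y]
        valid = (new_x >= 0) & (new_x < w)
        out[y, new_x[valid]] = image[y, xs[valid]]
    return out
```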

Sorry, this is not related to the post, but may I ask whether you guys are still hiring script writers for any of the sites' videos?

    doublevr Maybe I'm stating the obvious here (or maybe I'm wrong), but I get the feeling you're talking about two different things. The CV engineer guy seems to be talking about correction because the video was shot with an IPD that doesn't match the viewer's eye distance, but you and the Unity guy seem to be talking about correction because the headset lens distance doesn't match the viewer's eye distance. I believe those are very different problems? I think it's only the former that affects perceived scale, for example. The two headsets I own (Oculus Rift and HP Reverb G2) both have sliders for lens distance, and while they do change how stuff looks, it does absolutely nothing to change the perceived scale of a scene.

    I'm worried. Is this something the average user around here needs to understand to fully enjoy SLR? In other words, am I missing something cool because I don't know what the heck you're talking about?

    If not, all you tech nerds enjoy yourselves and carry on.

    But if I'm missing out on some cool feature, please explain.

    thank you.

      My take on software IPD with the Rift S:
      I don't notice any difference with the software IPD changes in the Oculus app.

      Am I doing it wrong somehow?

      From the HeresphereVR product description on Steam:

      Current VR video technology uses two stationary cameras with wide-angle fisheye lenses to capture stereo video. Projecting the recorded images onto a VR headset display comes with several challenges, which HereSphere seeks to overcome.

      First, the fisheye lenses produce optical distortions. These distortions are typically handled in post-production by converting the fisheye images to an equirectangular format. However, the wrong lens profile may be applied during the conversion process, leading to unwanted distortions in the final image. HereSphere handles this issue with a wide range of lens profiles with adjustable FOVs, and it can convert between lens profiles. It even has the option to use custom lens distortion curves, allowing it to handle any raw fisheye lens or equirectangular conversion.

      Second, the positions of the stereo cameras are locked in at the time of recording. To properly perceive depth and scale, the human visual system depends on the convergence of our eyes (the angle by which our eyes need to rotate to focus on an object). If the distance between the stereo cameras doesn't match the distance between the viewer's eyes, then the convergence angle required to focus on an object will not match reality for the viewer. In addition, if the viewer rotates or moves their head, their eyes will no longer be in alignment with the position and orientation of the stationary cameras. This leads to double vision, eyestrain, and an inaccurate perception of depth and scale. HereSphere solves this issue by using a mathematical model with real-time depth detection to adjust the projection to match different IPDs and head orientations, and it even tracks the viewer's position to emulate motion with 6DOF.

      Third, there will be errors in the alignment and orientation of the stereo cameras relative to each other, as well as manufacturing errors in the placement of the camera photosensors and post-production stitching errors. Our eyes can be incredibly perceptive, so a fraction of a degree off in rotational alignment, a fraction of a millimeter off in positional alignment, or a few pixels off during the stitching process can all lead to eyestrain and to problems with depth perception and scale. HereSphere uses a comprehensive set of stitching and alignment settings to correct these errors. The depth detection algorithm outputs distance readings and can be used with an anaglyph view mode that allows the viewer to easily see and adjust the images until they are perfectly aligned with convincing depth.

      When all of these issues are handled, the resulting image becomes incredibly clear and free from eyestrain, and the sense of immersion is greatly enhanced.
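      On the first point (lens profiles), the core of a fisheye-to-equirectangular conversion is just deciding how the angle from the optical axis maps to image radius. Here's a minimal sketch assuming the common equidistant ("r = f * theta") profile; it's only an illustration of the general idea, not HereSphere's actual code, and picking a curve that doesn't match the real lens gives exactly the distortions described above:

      ```python
      import numpy as np

      def equirect_to_fisheye_coords(lon, lat, fisheye_fov_deg, image_size):
          """Map an equirectangular direction (lon/lat in radians) to pixel
          coordinates in an equidistant ("f-theta") fisheye image.

          Equidistant is only one possible lens profile; a real converter
          would use the curve that matches the actual lens (or a custom one).
          """
          # direction vector for the given longitude/latitude
          x = np.cos(lat) * np.sin(lon)
          y = np.sin(lat)
          z = np.cos(lat) * np.cos(lon)
          theta = np.arccos(np.clip(z, -1.0, 1.0))   # angle from the optical axis
          phi = np.arctan2(y, x)                     # angle around the axis
          r_max = image_size / 2.0
          r = r_max * theta / np.radians(fisheye_fov_deg / 2.0)  # r = f * theta
          u = image_size / 2.0 + r * np.cos(phi)
          v = image_size / 2.0 + r * np.sin(phi)
          return u, v
      ```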
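      And on the third point, the anaglyph alignment check is the classic red/cyan trick; a sketch of the generic version (not HereSphere's depth-assisted view):

      ```python
      import numpy as np

      def anaglyph(left_rgb, right_rgb):
          """Red/cyan composite of a stereo pair for checking alignment.

          Red comes from the left-eye image, green/blue from the right-eye
          image.  Any vertical offset or rotation between the two views
          shows up as coloured fringes, which makes small alignment and
          stitching errors easy to spot by eye.
          """
          out = right_rgb.copy()
          out[..., 0] = left_rgb[..., 0]   # red channel from the left eye
          return out
      ```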

      Maybe that's the reason why your 8k scenes look pretty good to me with my setup...