Software IPD adjustment Pros and Cons

Just posting our internal discussion here.

Me: >Can you guys make sense of software IPD correction?

I get the point of physical lens correction on Quest. What would be the logic for a software one? Why doesn't Quest offer this feature while some players do?

CV engineer:

In general, the perceived distance to a point depends on the angular difference (vergence) to that point and the person's IPD. With a scene rendered on the fly, the virtual cameras can be placed at any desired separation, so the scene can be rendered for any IPD (Unity, for example, supports this). But with video streaming there is no scene representing the real positions of objects - only spheres to project the video onto.
If the person's IPD is the same as the camera lens separation (IOD), everything is perceived normally.
If the person's IPD is larger than the camera IOD, every point is perceived as farther away than it really is. But the angular size of objects stays the same, so the brain interprets it as a "far giant object". So some people with a high IPD compared to the camera IOD will feel that objects are giant.
If the IPD is smaller than the camera IOD, objects will seem small.
To fix that, it is possible to rotate the projection spheres. In general, accurate correction requires information about the real depth and the viewer's IPD, but in practice a fixed angle may work OK (it is a bit more complicated near the sides, where there is no stereoscopy at all).
That kind of correction is only needed for stereoscopic content captured by photo/video cameras; for generated content there is no such problem, which is why Quest doesn't offer it. Not sure whether eye tracking could help with this particular problem.
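
To make the "far giant object" effect concrete, here is a toy triangulation sketch in Python (my own illustration, not the engineer's actual correction code): the perceived distance and size both scale by viewer IPD divided by camera IOD.

```python
import math

def perceived_geometry(true_distance_m, true_size_m, camera_iod_mm, viewer_ipd_mm):
    """Toy model: the footage encodes a vergence angle set by the camera
    baseline (IOD); the viewer triangulates depth with their own IPD, so the
    same angle reads as a different distance. Angular size is unchanged, so
    perceived physical size scales by the same factor."""
    half_angle = math.atan((camera_iod_mm / 1000.0) / 2.0 / true_distance_m)
    perceived_distance = (viewer_ipd_mm / 1000.0) / 2.0 / math.tan(half_angle)
    scale = perceived_distance / true_distance_m  # equals viewer_ipd / camera_iod
    return perceived_distance, true_size_m * scale

# A 0.5 m object 1 m away, shot with a 63 mm IOD, viewed with a 70 mm IPD:
dist, size = perceived_geometry(1.0, 0.5, 63.0, 70.0)
print(f"perceived: {dist:.2f} m away, {size:.2f} m tall")  # ~1.11 m and ~0.56 m -> "far giant object"
```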

CTO:

Software IPD would help with the perceived size of UI elements for different people. I don't think we need to adjust it.

Unity:

I think software IPD might ease a little bit of eye strain for people whose IPD is larger or smaller than the 58/63/68 mm settings Oculus provides. For me, when I have it on the first setting (58 mm) I get a little discomfort watching videos. There is also a slight "hack": you can get an IPD of about 60 or 65 with the Oculus hardware adjustment (move the lenses so they sit between the numbered positions). There is also a huge thread where people beg Oculus to add software IPD (no response from Oculus): https://forums.oculusvr.com/t5/Ideas/Software-IPD-Adjust-on-Quest-2/idi-p/909704

    CV2:

    The software IPD mentioned in the above article is different from the IPD you're thinking of. In an Oculus, scenes are rendered in real time, so in theory someone could work out where the eyes are actually located with respect to the screens and project accordingly. You could probably even correct chromatic aberration by predicting how the image passes through the lenses and generating frames with the RGB channels shifted to compensate. It doesn't really apply in our case, because the videos we're rendering were fixed when the scenes were recorded.
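
    As a rough illustration of that "RGB shifted" idea (a Python/NumPy sketch with made-up scale factors, just to show the principle of pre-warping the colour channels in opposite radial directions so the lens smears them back into alignment):

    ```python
    import numpy as np

    def prewarp_chromatic(image, red_scale=1.002, blue_scale=0.998):
        """Crude lateral chromatic-aberration pre-correction: radially rescale
        the red and blue channels before display. The scale factors here are
        placeholders, not real lens data."""
        h, w, _ = image.shape
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)

        def resample(channel, scale):
            # Sample each output pixel from a radially scaled source position.
            src_y = np.clip(np.round(cy + (ys - cy) / scale).astype(int), 0, h - 1)
            src_x = np.clip(np.round(cx + (xs - cx) / scale).astype(int), 0, w - 1)
            return channel[src_y, src_x]

        out = image.copy()
        out[..., 0] = resample(image[..., 0], red_scale)   # red
        out[..., 2] = resample(image[..., 2], blue_scale)  # blue
        return out
    ```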

    There's also the IPD setting that Alex is using for the focusing solution. That just defines the geometry used to calculate how far your eyes turn inwards when looking at close-by objects. It's important for focusing, but it isn't the same IPD that this article is talking about.
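
    For context, the geometry being described there is just per-eye convergence; a minimal sketch (my wording and parameter names, not the actual focusing code):

    ```python
    import math

    def convergence_angle_deg(ipd_mm, object_distance_m):
        """How far each eye rotates inward (degrees) to fixate an object
        straight ahead at the given distance, for a given IPD."""
        half_ipd_m = (ipd_mm / 1000.0) / 2.0
        return math.degrees(math.atan(half_ipd_m / object_distance_m))

    # A 64 mm IPD looking at something 0.5 m away: each eye turns in by ~3.7 degrees.
    print(round(convergence_angle_deg(64, 0.5), 1))
    ```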

    Lastly, with depth-map reprojection you could re-render videos with a different IPD than the one they were originally recorded with, but the result would probably have artifacts and occasional holes.
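
    A naive sketch of that reprojection (a forward warp of one eye's image to a new baseline; parameter names are mine), which also shows where the holes come from:

    ```python
    import numpy as np

    def reproject_eye(image, depth_m, focal_px, old_half_baseline_m, new_half_baseline_m):
        """Shift each pixel horizontally by the change in disparity caused by
        moving the virtual camera sideways. Pixels that nothing maps to stay
        empty, which is where the artifacts and holes come from."""
        h, w = depth_m.shape
        out = np.zeros_like(image)
        shift_px = np.round(focal_px * (new_half_baseline_m - old_half_baseline_m) / depth_m).astype(int)
        xs = np.arange(w)[None, :] + shift_px                 # new column for every pixel
        ys = np.repeat(np.arange(h)[:, None], w, axis=1)
        valid = (xs >= 0) & (xs < w)
        out[ys[valid], xs[valid]] = image[valid]              # unfilled pixels remain holes
        return out
    ```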

    Sorry, this is not related to the post, but may I ask if you guys are still hiring script makers for any porn site videos?

      doublevr Maybe I'm stating the obvious here (or maybe I'm wrong), but I get the feeling you're talking about two different things. The CV engineer guy seems to be talking about correction because the video was shot with an IPD that doesn't match the viewer's eye distance, but you and the Unity guy seem to be talking about correction because the headset lens distance doesn't match the viewer's eye distance. I believe they are very different problems? I think it's only the former that affects perceived scale, for example. The two headsets I own (Oculus Rift and HP Reverb G2) both have sliders for lens distance, and while they do change how stuff looks, it does absolutely nothing to change the perceived scale of a scene.

      I'm worried. Is this something the average user around here needs to understand to fully enjoy SLR? In other words, am I missing something cool because I don't know what the heck you're talking about?

      If not, all you tech nerds enjoy yourselves and carry on.

      But if I'm missing out on some cool feature, please explain.

      thank you.

        My take on software IPD with the Rift S:
        I don't notice any difference with the software IPD changes in the Oculus app.

        Am I doing it wrong somehow?

        From the HeresphereVR product description on Steam:

        Current VR video technology uses two stationary cameras with wide-angle fisheye lenses to capture stereo video. Projecting the recorded images onto a VR headset display comes with several challenges, which HereSphere seeks to overcome.

        First, the fisheye lenses produce optical distortions. These distortions are typically handled in post-production by converting the fisheye images to an equirectangular format. However, the wrong lens profile may be applied during the conversion process, leading to unwanted distortions in the final image. HereSphere handles this issue with its wide range of lens profiles with adjustable FOVs, and it can convert between lens profiles. It even has the option to use custom lens distortion curves, allowing it to handle any raw fisheye lens or equirectangular conversion.

        Second, the positions of the stereo cameras are locked in at the time of recording. To properly perceive depth and scale, the human visual system depends on the convergence of our eyes (the angle at which our eyes need to rotate to focus on an object). If the distance between the stereo cameras doesn't match the distance between the eyes of the viewer, then the convergence angle required to focus on an object will not match reality for the viewer. In addition, if the viewer rotates or moves their head, their eyes will no longer be in alignment with the position and orientation of the stationary cameras. This leads to double vision, eyestrain, and an inaccurate perception of depth and scale. HereSphere solves this issue by using a mathematical model with real-time depth detection to adjust the projection to match different IPDs and head orientations, and it even tracks the viewer's position to emulate motion with 6DOF.

        Third, there will be errors in the alignment and orientation of the stereo cameras relative to each other, as well as manufacturing errors in the placement of the camera photosensors, and post-production stitching errors. Our eyes can be incredibly perceptive, so a fraction of a degree off in rotational alignment, a fraction of a millimeter off in positional alignment, or a few pixels off during the stitching process can all lead to eyestrain, depth-perception issues, and scale issues. HereSphere uses a comprehensive list of stitching and alignment settings to correct these errors. The depth detection algorithm outputs distance readings and can be used with an anaglyph view mode that allows the viewer to easily see and adjust the images to be perfectly aligned with convincing depth.

        When all of these issues are handled, the resulting image becomes incredibly clear, free from eye-strain, and the sense of immersion is greatly enhanced.

        Maybe that's the reason why your 8k scenes look pretty good to me with my setup...
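
        Side note on the "lens profiles" mentioned in the quote above: a lens profile is just the curve that maps the angle from the optical axis to a radius on the sensor, and converting to equirectangular with the wrong curve warps the image. A tiny comparison of the standard fisheye models (my own illustration in Python, not HereSphere's internals):

        ```python
        import math

        # Common fisheye projection models: each maps the angle theta from the
        # optical axis (radians) to a radius on the sensor for focal length f.
        LENS_PROFILES = {
            "equidistant":   lambda theta, f: f * theta,
            "equisolid":     lambda theta, f: 2 * f * math.sin(theta / 2),
            "stereographic": lambda theta, f: 2 * f * math.tan(theta / 2),
            "orthographic":  lambda theta, f: f * math.sin(theta),
        }

        f = 1.0  # normalized focal length, just for comparison
        for name, model in LENS_PROFILES.items():
            print(f"{name:14s} r(45deg)={model(math.radians(45), f):.3f}  r(90deg)={model(math.radians(90), f):.3f}")
        ```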