• SLR
  • [Updated videos] SLR Depthmaps in the making

Smilyy This is because HereSphere gauges the depth of whatever is in the center of your view. Then, based on the lens distortion profile, it knows how to uncross your eyes.
This works really well and makes close objects, and objects at the far left/right, perfectly watchable. It also corrects the cross-eye effect you get from tilting your head, and I noticed that it often makes the scene look sharper / better focused overall.
You do get the occasional glitch, but that is due to the lack of eye tracking (it estimates the depth based on the center of your view).

The above is what made me switch over to that player too (that, and it's way faster at seeking).

That being said, I'm really curious how the depth maps will work out and how they compare to the HereSphere method. Very cool to see all the initiatives and innovation going on at SLR!

(By the way, please don't make this a streaming-only option. That would be very disappointing.)
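The re-convergence idea described above can be sketched in a few lines. To be clear, this is not HereSphere's actual implementation (the baseline, focal length, and the goal of zeroing out center disparity are all illustrative assumptions); it just shows how an estimated depth at the view center maps to a per-eye image shift:

```python
def convergence_shift_px(depth_m, baseline_m=0.064, focal_px=700.0):
    """Per-eye horizontal shift (in pixels) that zeroes out the screen
    disparity of an object at depth_m, re-converging the view on it.

    Uses the standard stereo relation disparity = focal * baseline / depth;
    shifting each eye's image by half of that cancels the cross-eye effect.
    """
    disparity_px = focal_px * baseline_m / depth_m
    return disparity_px / 2.0

# An object 0.5 m from the camera needs a much larger correction
# than one 5 m away, which is why close-ups are the hard case.
near = convergence_shift_px(0.5)   # 44.8 px per eye
far = convergence_shift_px(5.0)    # 4.48 px per eye
```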

ibins For sure, directly measured depth can be useful, but not all studios have that equipment. As for DMs from pure 2D: that is not very reliable, though there has been some success in that field. In the case of stereoscopic depth estimation, however, the problem boils down to stereo matching: finding the regions in one image that correspond to regions in the other image. With stereo matching the precision is quite high. Some big self-driving-car companies have even shifted from separate depth sensors to estimating depth from several cameras, due to higher reliability and lower cost.
As for non-standard head positions, corrections with DMs can help fix some of the inconsistency, but evidently the stereoscopic effect will be lost. The problem of free camera motion could possibly be solved by on-device rendering of a (partially) reconstructed scene, and a DM could be helpful for that.
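As a minimal sketch of the stereo matching idea mentioned above: brute-force SAD (sum of absolute differences) block matching slides a small patch from one image along the same row of the other image and keeps the offset that matches best. The block size, search range, and synthetic image pair here are illustrative assumptions; real pipelines use far more refined algorithms.

```python
import numpy as np

def block_match_disparity(left, right, block=5, max_disp=16):
    """For each pixel in the left image, find the horizontal offset
    into the right image whose surrounding block has the lowest SAD.
    Returns an integer disparity map (border pixels left at 0)."""
    h, w = left.shape
    r = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            patch = left[y-r:y+r+1, x-r:x+r+1]
            best, best_d = np.inf, 0
            for d in range(max_disp):
                cand = right[y-r:y+r+1, x-d-r:x-d+r+1]
                sad = np.abs(patch - cand).sum()
                if sad < best:
                    best, best_d = sad, d
            disp[y, x] = best_d
    return disp

# Synthetic rectified pair: the right view is the left view
# shifted 4 px, so interior disparities should come out as 4.
rng = np.random.default_rng(0)
left = rng.random((32, 64))
right = np.roll(left, -4, axis=1)
d = block_match_disparity(left, right)
```

With a known disparity, depth follows from the same stereo relation (depth = focal * baseline / disparity), which is how a disparity map becomes a depth map.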


    AlexSLRLabs Some big self-driving-car companies have even shifted from separate depth sensors to estimating depth from several cameras, due to higher reliability and lower cost.

    True, but they do it only for the lower cost; they still can't match the reliability of dedicated LiDAR sensors.
    But I guess if it is good enough for self-driving cars, it should be good enough for porn too

    doublevr changed the title to [Updated videos] SLR Depthmaps in the making.

    Thx AlexSLRLabs
    Great job on that 🔥🔥🔥

    Currently the depth might not be immediately obvious, but

    1. it makes it easier for the eyes to focus. Many videos were so bad you couldn't watch them. That should be fixed
    2. it removes distortions where things are doubling.

    Once again, it's a huge ongoing process. We just started. There's a lot more to come.

    I'm really impressed with the new videos.

    What we should do is an automated switch-to-mono feature, where users can set a value for when an object gets too close to the camera. It's going to be a beta, just to explore how things work.

    We can definitely bring things to the next level. It's only the beginning.

      doublevr

      I am not sure about that; at least for me, looking at mono becomes very stressful at close range, way worse than the cross-eyed thing you get with stereo, which seems much more natural.

      Btw, the best way to see the difference is to pause a video where a girl gets right up to the camera and turn auto focus on and off. We will add a binding for it to the controller mapping with the next release.

      I don't know if this is depth-map related, but if you zoom all the way in or all the way out, it becomes apparent that the video image is being distorted on the inner surface of a sphere to simulate a "zoom", rather than actually translating the image closer or farther. Would this effort also address that type of distortion from zooming? If not, I have a request to be able to zoom in and out without such distortion.

      Vrsumo2017 Idk, my Quest cannot see this file, and installing from SideQuest gives an error: "A task failed. Check the tasks screen for more info." Help 🙂

      Is this a beta version of the SLR app, or just for testing the depth-map effect? Can this function be applied to other videos?

        RTHK It's a depth maps demo.
        Currently we generate a DM for each video; we're working to automate it in the player with real-time rendering.

          RTHK Right now it's for the download videos posted above. We will feature a few streaming videos with the next update.
          The focus is to make the player create depth maps on the fly for every video, both downloads and streaming.

            When I run "adb install -t test10_tilt_scale_cross.apk", the adb daemon crashes and I get "adb: failed to install test10_tilt_scale_cross.apk", with or without "-t".
            I can install F-Droid, for example, just fine.

            doublevr How do you generate the DM for each video? What software and settings? Could you make a tutorial?