Seems like 8K is the max now, and videos are starting to look worse: more blurry, out of focus, and less realistic. Is this because the Quest 2 can't handle 8K the way it needs to, and these videos are really being shot for the Quest Pro/Quest 3? Just curious. And will we ever see 9K or 10K on the Quest Pro/Quest 3? Is that possible? I'm just curious how much better it can get, because while it's SO much better than regular porn (can't even go back now), I feel like I'm getting bored with the quality lately. There were huge jumps in quality from 2017 to now, like MASSIVE, but it seems to have kind of peaked, and it's going to be difficult to improve much beyond this until battery and lens technology catch up.
Have we reached the limit for the Quest 2?
That's why I'm thinking about getting a Pico 4: better FOV, higher resolution, and color passthrough.
PCVR is the answer if you don't care about money and want the best quality, like a Pimax + 4090.
We'll be stuck at the max decoding dimensions of 8192x8192 for a while though (even on a 4090). Maybe the next generation will have support for H.266, which could mean support for up to 16K.
I don't think so. I also watch content from other producers, and recently started watching POVR's stuff, and holy cow, the video quality is WAY better from POVR than SLR. Like mind-blowingly better. I assume SLR has lowered the recording bitrate as well as the streaming bitrate or something. Seriously, get your hands on a POVR video and tell me it's not worth it. Granted, SLR has a way better platform (POVR used to work with DeoVR until SLR killed their access), but POVR's production quality looks way better on my Quest 2. Take a look at the latest SLR taboo release: blown-out faces and colors make me think the bitrate is down.
Hurto11 And will we ever see 9K or 10K on the Quest Pro/Quest 3? Is that possible?
It is possible, but it is a more complex way of doing things.
It can be done through viewport, like how we managed to get 5K on the Oculus Go.
Or you use hardware that can manage two 8K decoding jobs simultaneously.
Or you develop new codecs and new hardware decoders.
But there is actually still a lot of room for improvement within 8K:
- Better hardware: HDR, 4:4:4 chroma, improved color gamut, higher display pixel density [1] (MicroLED, then NanoLED).
- Better encoding techniques: AV1 & H.266.
- AI: no really, not making it up. 'Parasitic prediction', for example.
- Higher bitrates.
- Improved lenses.
[1] Mind that 8K in VR is really two 4K images, one per eye. The FOV of those images also plays a big role: a 4K image with a 90° FOV is better quality than a 4K image with a 180° FOV. (I'm talking about the FOV of the actual image, not the headset's FOV.)
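A quick back-of-envelope illustration in Python (the numbers are purely illustrative, and real lens distortion isn't linear across the FOV):

```python
# Rough pixels-per-degree comparison for a 4K-wide (3840 px) image
# spread over different image FOVs. Linear approximation only.
width_px = 3840
for fov_deg in (90, 180):
    print(fov_deg, width_px / fov_deg)  # 90° -> ~42.7 ppd, 180° -> ~21.3 ppd
```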
@Hurto11 I remember 1080p and even 720p being the sought-after resolutions several HMD generations back. Beyond a doubt, this conundrum of ours is cyclical in nature; hardware advancement definitely plays a key role. With Apple jumping into the fray with its dual 4K micro-OLED displays, the demand for higher resolution will definitely grow and raise the resolution bar.
Hope our wallets can withstand it, though...
I think something like Stable Diffusion will make some breakthroughs in VR in terms of improving resolution and quality of details. There are already some papers on predictive raytracing, for example. Instead of having the GPU calculate the light bounces, light direction, color bleed, and all the other GPU-intensive processes, the AI can just predict how the frame is going to be ray traced in a couple of seconds. All you need to do is tweak the seed for the intended results.
bobbytables POVR/Wankz has the best rig IMHO. I don't know what their secret sauce is, but they nailed image clarity. Their lighting / make-up / styling / camera positions / locations and cast are not always the best, though. There are very few studios with consistent production value; it seems like as soon as they figure shit out, they expand, hire noobs, and quality suffers.
damson Realistically, Stable Diffusion will do nothing for VR tech. It uses a process called diffusion, which is basically learning what a picture of an apple is; it won't help with any of this. The best piece of tech, and likely the biggest jump until we get better processors, is something called "frame reprojection", but that's already implemented (and is why we can watch 8K content or play crazy games on the cheap processor of the Quest 2).
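For the curious, here's a loose sketch of the reprojection idea in Python; the buffer size and pixels-per-degree figure are made up, and real timewarp warps per-pixel on the GPU rather than shifting whole rows:

```python
import numpy as np

PPD = 20.0  # assumed pixels per degree of the eye buffer (made-up figure)

def reproject(frame: np.ndarray, yaw_delta_deg: float) -> np.ndarray:
    """Re-display the last rendered frame shifted by the head-yaw delta."""
    shift = int(round(yaw_delta_deg * PPD))
    return np.roll(frame, -shift, axis=1)  # crude wrap-around approximation

last_frame = np.zeros((1600, 1440, 3), dtype=np.uint8)  # placeholder eye buffer
warped = reproject(last_frame, yaw_delta_deg=0.5)
```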
vrpicasso If POVR ever figures out how to make an actually decent platform with 80% of the feature set from DeoVR, I'm jumping ship in a heartbeat. Those studios are at like 80 Mbps while SLR is at like 40-50 Mbps, and meanwhile they've got lighting and color grading down to a science. It's baffling that SLR is spending money on passthrough and even making their own Handy competitor when they can't even get their regular content figured out.
bobbytables If POVR ever figures out how to make an actually decent platform with 80% of the feature set from DeoVR
It plays VR porn pretty nicely.
It can be used with HeresphereVR and XBVR which is better than DeoVR anyway.
What else do you really need?
Yeah.
Hurto11 Yes, 8K is the maximum for the Quest 2: https://www.sexlikereal.com/blog/67-vr-videos-explained-4k5k6k-fov-fps-decoding
It's also true that higher-resolution footage downscaled to a lower resolution will be better quality than footage shot at that lower resolution. That's why we are building a 10K camera.
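As a rough illustration of that downscaling point, a minimal Pillow sketch (file names and sizes are hypothetical):

```python
# Supersampling sketch: a 10K capture downscaled with a high-quality
# filter keeps detail a native 8K capture never records.
from PIL import Image

frame_10k = Image.open("frame_10k.png")                   # hypothetical 10000x5000 frame
frame_8k = frame_10k.resize((8192, 4096), Image.LANCZOS)  # supersampled 8K master
frame_8k.save("frame_8k_supersampled.png")
```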
Rakly3 Interesting. I'll have to test playing two 8K files at the same time sometime.
BTW, have you ever considered the following?
So currently the max decoding dimensions for an H.265 video file are 8192x8192, a 1:1 aspect ratio. However, since VR is shot at a 2:1 aspect (meaning max 8K would be 8192x4096), you only utilize half of the pixels. So what if you had a custom lens profile with double the pixels vertically (e.g., double the vertical resolution by squashing the image vertically onto a larger texture)?
An alternative would be to have a different type of projection altogether, or at least "cut it up" in such a way that the full 1:1 8192x8192 is utilized (not sure how that last one would work, though).
Seems to me that could potentially allow you guys to DOUBLE the resolution, which is pretty huge I think.
Anyways, I'm pretty curious whether you've considered this and what your thoughts are.
Rakly3 Your remark about playing two files at the same time got me thinking. One would probably be able to decode at least two 5790x5790 streams, or 11580x5790 total, as that's the same amount of megapixels as 8192x8192.
That seems very interesting to me too, especially considering you have a 10K camera coming up.
I'm sure you've thought about these kinds of things as well, but I'm just really curious about what will be possible in the next few years. It would be amazing if we could surpass that pesky 8K limit in the next two years (or maybe four, if Nvidia brings up the argument again that higher-than-8K resolution has no practical purpose).
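A quick Python sanity check on that two-stream megapixel math:

```python
# Megapixel budget: two 5790x5790 streams vs one 8192x8192 frame.
print(8192 * 8192 / 1e6)      # ~67.1 MP single-stream decode ceiling
print(2 * 5790 * 5790 / 1e6)  # ~67.0 MP across two parallel streams
```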
fenderwq However, since VR is shot at a 2:1 aspect (meaning max 8K would be 8192x4096), you only utilize half of the pixels. So what if you had a custom lens profile with double the pixels vertically (e.g., double the vertical resolution by squashing the image vertically onto a larger texture)?
Lenses don't have pixels. But I understand what you mean. You're describing, for a large part, fisheye lenses. The reason SLR Originals use the fisheye profile is that the closer to the center you look, the higher the pixel density of the image is.
What you are proposing would be like a barrel lens, but horizontal. Or maybe better, fisheye + barrel would give some sort of oval-shaped lens. I see it as technically possible, but I don't think headset manufacturers would go for it, as it would mean completely redesigning their software and lenses too. It's a good idea, though!
fenderwq An alternative would be to have a different type of projection altogether, or at least "cut it up" in such a way that the full 1:1 8192x8192 is utilized (not sure how that last one would work, though).
Yes, that is Viewport. Still would need new hardware though, or more than one decoder. Viewport 5K doesn't have the hardware limitation.
fenderwq (or maybe four, if Nvidia brings up the argument again that higher-than-8K resolution has no practical purpose)
Technically, Nvidia is right. It's a lot easier to do parallel decoding of two 8K streams, since that doesn't need any R&D; they can already do it.
Consumer-grade Nvidia GPUs have 2 decoding streams (artificially limited).
Their professional GPU cards have 4 streams, or 'unlimited'. The unlimited case is a whole other topic, though; they can make clusters of thousands of GPUs.
Blockchain can also make this possible with distributed computing (though that's pointless for decoding). But there are already projects in the works that do this for encoding.
One of our devs even already ran a test project with one of my crypto mining rigs to upscale and encode to 16K by splitting the work over multiple GPUs. (This doesn't mean we will be having 16K video; it's for something else. SLR is more than just a studio.)
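A minimal sketch of the two-parallel-streams idea, assuming PyAV is installed and the two halves were encoded as separate files (file names are hypothetical, and whether both sessions actually run on the hardware decoder depends on the GPU and driver):

```python
import threading
import av  # PyAV: pip install av

def count_frames(path: str) -> None:
    """Decode one stream to completion (one decode session)."""
    with av.open(path) as container:
        frames = sum(1 for _ in container.decode(video=0))
    print(path, frames)

# Hypothetical: an 11580x5790 master split into two 5790x5790 files.
paths = ["left_half_5790.mp4", "right_half_5790.mp4"]
threads = [threading.Thread(target=count_frames, args=(p,)) for p in paths]
for t in threads:
    t.start()
for t in threads:
    t.join()
```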
Rakly3 Lenses don't have pixels. But I understand what you mean. You're describing, for a large part, fisheye lenses. The reason SLR Originals use the fisheye profile is that the closer to the center you look, the higher the pixel density of the image is.
True, although if someone wanted to be cheeky, they could bring up that lenses do have a maximum resolving power, and that's usually measured in pixels. But that doesn't really have anything to do with what you're talking about.
Rakly3 Cool, thanks for the reply; can't help but love this kind of tech talk. Some really cool ideas there; I didn't know about the parallel decoding either. Gives me hope that we don't have to wait years to get past 8K!
Guess I skipped a few steps in explaining what I meant by a "lens profile", though. To clarify: you convert the footage of two higher-than-4K cameras (one for each eye) into a single elongated (fisheye) side-by-side output video, and then tell the software to interpret it as if it were shot with a lens that has twice the pixels vertically, similar to the way DeoVR has to interpret fisheye content differently from equirectangular content (this is what I call a lens profile, by the way, but maybe that's incorrect). This way you would still be inside the resolution limits of the decoder, but also have twice the vertical data for rendering the texture (theoretically at least, and with two 8K cameras).
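A crude Pillow sketch of that packing idea (file names and sizes are hypothetical; the real work would be in the camera profile and the player, not this step):

```python
# Pack two tall per-eye masters side by side into the full 8192x8192
# decode budget; each eye keeps its 180° FOV but stores 8192 rows
# instead of 4096, i.e. a 2:1 vertical "squash" in angular terms.
from PIL import Image

left = Image.open("left_eye_master.png").resize((4096, 8192), Image.LANCZOS)
right = Image.open("right_eye_master.png").resize((4096, 8192), Image.LANCZOS)

packed = Image.new("RGB", (8192, 8192))
packed.paste(left, (0, 0))
packed.paste(right, (4096, 0))
packed.save("sbs_anamorphic_8192x8192.png")
# The player's lens profile would then map all 8192 rows over the same
# 180° vertical FOV, doubling vertical pixel density per eye.
```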
The second one was indeed what Sandi_SLR showed (thanks for the cool visualization, btw). Correct me if I'm wrong, but once you decode this 8192x8192 image, it's up to different parts of the GPU to map it to a texture and display it in the headset, right? So in theory you should then be limited by the "overall power" of your GPU and no longer by the hardware decoder?
Anyways, however much I like to go on about this kind of tech talk: do you think it's realistic that we could see something like 10K/12K during this generation of video cards? What are your thoughts?
I certainly hope we don't have to wait for the RTX 5000 series (or worse) in 2024/2025 to at least start seeing some experimental 10K/12K files, file size be damned. 50 GB? Load me up. As most in this thread probably already know, the current mid-to-high-end headsets have already maxed these files out, and it's only going to get worse soon.
2023
Quest 3 - Rumored by SadlyItsBradley to have 30% higher resolution, which is consistent with past increases
Pimax Crystal - Insanely high 2880x2880 per-eye resolution with QLED, a quantum-dot layer, and local dimming
Apple - late 2023, maybe? 4K per eye rumored
2024
Quest Pro 2 - per Meta's usual cadence (year unconfirmed but something either 2024 or 2025 is certain)
Other high-end headsets - particulars are unconfirmed, but by the law of averages we have to see something from at least some of the following: Pimax (12K), Valve, HP, Samsung, etc.
So for the love of all that is enthusiast VR porn, SLR or whoever, please get us some 10K content. Even 10K at approximately 10,000x5,000 (50 MP) would blow 8K out of the water, so that would hold us over for a few years.
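For scale, a quick Python comparison (frame sizes as discussed above):

```python
# Megapixels per frame: proposed "10k" vs the current 8K ceiling.
print(10_000 * 5_000 / 1e6)  # 50.0 MP
print(8_192 * 4_096 / 1e6)   # ~33.6 MP, so "10k" is ~49% more pixels
```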
fenderwq One would probably be able to decode at least two 5790x5790 streams, or 11580x5790 total, as that's the same amount of megapixels as 8192x8192.
There are dimensional limitations too, not just the megapixels. I don't know them off the top of my head, as this also depends on the codec. But let's assume we are talking about the MPEG-4 group: H.264 & H.265
(Don't confuse these with the .mp4 container file format.)
Slicing up the image as proposed by @Sandi_SLR shouldn't cause issues with displaying the image correctly for a computer; there is no transformation of the image, just relocation of the pixels. It's no more difficult than playing video mirrored or upside-down, or as equirect, fisheye, or cubemap. Textures on 3D models and cubemaps in a game are way more complex than that, and done in real time.
VR on YouTube already does this. Image search 'youtube cube wrap vr'.
The dimensional limitation comes from how many slices can be used and their maximum size, known as macroblocks.
In H.264/AVC they are straightforward (which doesn't mean the material is easy), but they become a lot more complex in H.265/HEVC.
https://en.wikipedia.org/wiki/Macroblock
Ever seen blocks in an image, or wondered why artifacts are always square-shaped? (Rhetorical.)
https://en.wikipedia.org/wiki/Compression_artifact
These macroblocks are also where the max resolution comes from.
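To make the macroblock limit concrete, a back-of-envelope Python check (the MaxFS values are from the H.264 level table; the resolutions are just examples, and H.265 counts larger, variable-size CTUs instead):

```python
import math

# Max frame size (MaxFS) in 16x16 macroblocks for a few H.264 levels.
MAX_FS = {"5.1": 36_864, "5.2": 36_864, "6.0": 139_264, "6.2": 139_264}

def macroblocks(width: int, height: int) -> int:
    """Frame size in 16x16 macroblocks, rounded up per dimension."""
    return math.ceil(width / 16) * math.ceil(height / 16)

print(macroblocks(4096, 2160))  # 34,560 -> fits level 5.1
print(macroblocks(8192, 4096))  # 131,072 -> needs a level-6.x decoder
```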