Hairsational I completely agree with you that the comparison is pointless because the HEVC master will obviously look better. But as I said in the last post, the opposite claim (i.e. that the quality difference is marginal because AV1 is more efficient) is SLR/doublevr's whole argument (see the bitrate comparison and AV1 announcement forum threads). So in that sense it's the only comparison that at least addresses people's complaints, even if the outcome is already obvious from the outset.
h265/HEVC v AV1 masters fight
It's decided guys - People saying AV1 Low Bit Rate is better than or equal to HEVC HBR are like Flat Earthers. Boom!
petex67 LOL! What?! Who said that?! Like seriously?
HEVC with a high bitrate will beat AV1 with a low bitrate at the same resolution/FPS in terms of visual quality. Yes, AV1 is SUPERIOR to HEVC in almost everything. But that doesn't mean we should treat it like a magical codec or something.
Like @Hairsational said, it still needs to be encoded with appropriate parameters, like a reasonable bitrate.
And that depends on many factors like Resolution/ Frame rate/ Color depth/ the nature of the environment or project/ the number of objects/ movements/ elements/ particles/ changing shadows or lighting...etc...
The more of them there are, and the more intense they are, the higher the bitrate should be, regardless of the codec, and that's still true in the case of AV1. I encoded an 8192x4096, 5-minute video at 8-bit @60 FPS with the same color grade in both HEVC and AV1 at 200 Mbps using CPU encoding (because it provides higher quality than GPU encoding, despite the longer time it takes).
And both of them were IDENTICAL in video quality. I just couldn't see any difference no matter what I tried, zooming in or looking anywhere. I just couldn't locate any difference in visual quality!
The only advantage of AV1 in my situation was that the file was slightly smaller, that's all. Which is not bad, of course, but is it really worth the encoding time and the hardware-compatibility sacrifice when it comes to 8K @60 FPS?
It's up to you guys to answer this.
I believe AV1 will really shine at future 16K resolutions; at that point it's game over for HEVC, since it cannot support that resolution, and I highly doubt even MV-HEVC or future high-resolution codecs can compete with AV1, especially because it's royalty-free and will keep advancing and developing.
Best Regards.
You got the same/similar file size precisely because you used a constant bitrate. In your test the AV1 file is very likely higher quality, but not perceptibly so to the human eye, because you used such a high bitrate. If you lowered the bitrate, the higher quality of the AV1 file would start to become noticeable as HEVC reaches a point where it can't compress the video enough to match AV1's quality.
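The "same file size" part is easy to sanity-check: under constant bitrate, file size is just bitrate × duration, completely independent of the codec. A minimal sketch using the 200 Mbps / 5-minute figures from the test described above:

```python
# Under constant bitrate (CBR), file size depends only on bitrate and
# duration, not on the codec -- which is why the HEVC and AV1 encodes
# came out at essentially the same size.
def cbr_file_size_gb(bitrate_mbps: float, duration_s: float) -> float:
    bits = bitrate_mbps * 1_000_000 * duration_s
    return bits / 8 / 1_000_000_000  # decimal gigabytes

# 200 Mbps CBR, 5-minute clip (identical for HEVC and AV1):
print(cbr_file_size_gb(200, 5 * 60))  # 7.5 (GB)
```

So any size difference left over comes from container overhead or the encoder not quite hitting the target rate, not from codec efficiency.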
I think we can almost all agree that the bitrate of SLR's videos is too low.
Hairsational Yes, and you're absolutely right.
My tests were an attempt to simulate the "SLR Magic Encoding" since you know, this is what we have to deal with here.
They either use a very low VBR for streaming or a very high CBR for originals, without any middle ground, "wisely enough".
And as you know, I'm biased towards visual quality, so I didn't bother to simulate the low VBR and ran my tests with the high CBR instead, with the results I mentioned previously.
But otherwise, you're very right, and AV1 will beat the heck out of HEVC if it's encoded with a reasonable bitrate.
PS: When I talk about visual quality in most of my statements, I always refer to the Original/ Studio files.
But I got mostly misunderstood, maybe because I don't use the phrase "High bitrate" to refer to them.
I really hate to describe them as "high bitrate", since that's just the reasonable bitrate for such high resolutions where constant movement is part of their core, you know what I mean.
But I think I'm forced to describe them as "high bitrate" from now on to eliminate any future misunderstanding, I guess.
Respectfully.
VRXVR This is a bit off-topic, but I'm not sure about the future of AV1 for 16K VR. I don't know if Wikipedia is outdated, but while the highest level (6.3) does support 16000x8000 as a resolution, "MaxDisplayRate" and "MaxDecoderRate" (not sure about the difference, tbh) are only 4,278,190,080 and 4,706,009,088 respectively. 12000x6000 @60 FPS is already 4,320,000,000 samples per second, which is at least outside the MaxDisplayRate spec. So even if you can encode 16K 60 FPS VR videos with AV1, it seems to be outside the specifications, which basically means the chance of having a hardware decoder for this content should be really slim? And good luck using a software decoder for that! That's also why the two VRBangers 12K releases use AVC/H.264 at the High 4.1 profile instead of HEVC, because 12K @60 FPS is outside HEVC's official specs.
Like I said, maybe I'm missing something here and people are planning some kind of specification extension, but I feel like VR content (2:1 aspect ratio, high resolution, high framerate) is not really on the codec committees' minds when they make these specs.
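The arithmetic above is easy to verify; the two constants are the level 6.3 limits quoted from the spec:

```python
# AV1 level 6.3 limits in luma samples/second, as quoted above
# (AV1 spec, Annex A).
MAX_DISPLAY_RATE = 4_278_190_080
MAX_DECODE_RATE = 4_706_009_088

def luma_rate(width: int, height: int, fps: int) -> int:
    """Luma samples per second for a given resolution and frame rate."""
    return width * height * fps

for w, h in [(12000, 6000), (16384, 8192)]:
    rate = luma_rate(w, h, 60)
    print(f"{w}x{h}@60: {rate:,} samples/s, "
          f"within level 6.3 display rate: {rate <= MAX_DISPLAY_RATE}")
```

Both 12K and 16K 2:1 frames at 60 FPS blow past the limit, so neither fits in any currently defined AV1 level.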
phiber Not off topic at all! Yes, you're very right: the AV1 specs describe "mostly" flat videos, at least currently. But the more popular VR becomes, the more codecs will start to prioritize it in their specs.
And 16K @60 FPS is not actually outside the AV1 specifications, because AV1's maximum tile width is 4096 pixels. So 16K @60 FPS should be possible with 4x4 tiles for 180°+ FOV VR videos, for example.
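A quick check of that tile geometry (4096 is the spec's MAX_TILE_WIDTH; the spec also caps tile area at 4096x2304, which a 4x4 grid over a 16K 2:1 frame stays under):

```python
# AV1 tile limits from the spec: max tile width 4096 luma samples,
# max tile area 4096 * 2304 luma samples.
MAX_TILE_WIDTH = 4096
MAX_TILE_AREA = 4096 * 2304

frame_w, frame_h = 16384, 8192  # 16K 2:1 equirectangular frame
cols = frame_w // MAX_TILE_WIDTH  # minimum tile columns needed
rows = 4  # the 4x4 grid suggested above
tile_w, tile_h = frame_w // cols, frame_h // rows  # per-tile dimensions

assert tile_w <= MAX_TILE_WIDTH
assert tile_w * tile_h <= MAX_TILE_AREA
print(f"{cols}x{rows} tiles of {tile_w}x{tile_h} each")  # 4x4 tiles of 4096x2048 each
```

Note this only shows the frame can be *tiled* legally; it doesn't lift the per-level sample-rate limits, which is the separate issue raised above.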
As for 16K @60 FPS AV1 hardware decoding, the Nvidia 50 series, more specifically the RTX 5080 and RTX 5090, has multiple 9th-gen encoders (NVENC) and decoders (NVDEC).
I'm still not sure about the encoding capabilities at the moment, but their NVDEC can easily decode AV1 8K @60 FPS per decoder, which would mean 16K @60 FPS hardware decoding with both decoders used in parallel.
https://en.wikipedia.org/wiki/GeForce_RTX_50_series#Media_Engine_and_I/O
Also, the Intel 11th-gen CPUs and newer support AV1 16K @60 FPS natively, along with the Intel Iris Xe MAX mobile GPUs and the Intel Arc A-series desktop GPUs, according to their media capabilities.
As for 16K @60 FPS AV1 hardware encoding, we're still in the early years for that. So software encoding is the way to go, and yes, as you said, we need a freaking high-end PC with at least 64GB of RAM to do that, for example with "SVT-AV1":
https://github.com/psy-ex/svt-av1-psy/blob/master/README.md
Damn, I just searched for "VRBangers 12K Videos". They are 12288x6144 resolution @60 FPS @90 Mbps bitrate in the AVC/H.264 codec.
LOL!!! WTH were they thinking?! This is almost unplayable even on powerful PCs! Thanks for the heads-up, man! I really laughed when I read about this.
Anyway, I think they could "remaster" them now in AV1 16K @60 FPS for RTX 5080/5090 GPU owners, all three of them! lol!
Best Regards!
boudaba DuoVR
That a new vr player bro? Link?
Vrsumo2017 A typo, I believe; he means DeoVR.
petex67 As a flat earther, I take offense at that statement; even we have brain cells haha
So is all this debating just about download quality or is this affecting streaming quality too?
VRXVR lol, oh
VRXVR I have to be honest, I'm not sure how the tile width actually plays into this. Like I said, looking at https://aomediacodec.github.io/av1-spec/#annex-a-profiles-and-levels it seems that the currently highest defined level (6.3) does not support 2:1 16K @60 FPS, simply based on the number of luma samples that would need to be decoded per second. But that link also shows they already planned for a 7th level, so I guess that if AOMedia sees the need, they will define it. That said, 16K @60 FPS has 4x as many samples as 8K @60 because both width and height double, so I'm not sure about current hardware decoder capabilities even when looking at 5080s and 5090s.
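That 4x figure checks out with a one-liner, which is the catch in the "two decoders in parallel" idea, since two 8K-capable decoders only cover half of it:

```python
# Doubling both width and height quadruples the luma samples per second,
# so 16K@60 needs 4x the decode throughput of 8K@60.
rate_8k = 8192 * 4096 * 60   # 2:1 8K frame at 60 FPS
rate_16k = 16384 * 8192 * 60  # 2:1 16K frame at 60 FPS
print(rate_16k // rate_8k)  # 4
```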
And yeah, I agree that the VRBangers 12K videos are a joke, although it's also incredibly lazy of AMD/Nvidia/Intel not to support High10 AVC hardware decoding, as that would completely solve the issue. Apple does, although I don't know if their hardware would be powerful enough for 12K @60 FPS.
phiber You're right; researching more into this, it seems the combination of resolution and aspect ratio is the issue here.
As you said, to get native 16K resolution without any cropping or upscaling, suitable for the standard side-by-side equirectangular 180° VR video, we need a 2:1 aspect ratio, which is currently not achievable according to AV1's latest level, 6.3. Hopefully they will address this in future levels.
As for the AV1 tile width, I was talking from an encoding standpoint, about how we can achieve AV1 16K in the most reliable and fastest way with software encoders like SVT-AV1.
Because currently, as of writing this, the most reliable and fastest way to encode and decode 16K AV1 video is parallelism, mostly because of the limitations of current software encoders and hardware decoders.
Since the two sides of a stereoscopic video are logically separate, we could also encode/decode them separately. AV1 pretty much supports this, and here is where AV1 tiles come into play when encoding 16K resolution. This way it can be "fast" to encode and "light" to decode, and thus smooth to play back.
Because AV1 tiles are not fixed but "flexible", which means they can be encoded with uniform or non-uniform tile spacing, and they can be grouped into tile groups, with each group encoded/decoded independently.
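As a rough sketch of why splitting the eyes helps: each half of a 16384x8192 side-by-side frame is 8192x8192, so decoding the eyes independently halves the per-decoder sample rate, bringing each half back under the level 6.3 MaxDisplayRate quoted earlier in the thread:

```python
# Side-by-side stereoscopic 16K frame: each eye view is an 8192x8192 half.
# Decoding the two eyes independently halves the per-decoder luma rate.
full_w, full_h, fps = 16384, 8192, 60
eye_rate = (full_w // 2) * full_h * fps  # one eye's samples per second

print(f"{eye_rate:,}")  # 4,026,531,840
# Compare against the level 6.3 MaxDisplayRate figure from earlier:
print(eye_rate <= 4_278_190_080)  # True
```

This is only the throughput arithmetic, of course; whether a given decoder or player can actually split the work this way is a separate question.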
This is the reason why the RTX 5080/5090 have 2 decoders: mostly to deal with "multiview videos" at very high resolutions, like MV-HEVC for example:
https://blogs.nvidia.com/blog/generative-ai-studio-ces-geforce-rtx-50-series/
Although AV1 does not officially mention the same approach, it can actually be done through SVT-AV1 and FFmpeg, according to some 16K AV1 encoders I've read about.
Personally, I can't confirm any of this, as I've never encoded videos higher than 8192x4096 @60 FPS @10-bit @400 Mbps CBR, in both AV1 and HEVC. That's the maximum my PC can encode easily and my RTX 4080 GPU can decode and play smoothly.
16K VR videos are pretty much still undiscovered territory that most of us haven't been able to explore yet. When it comes to any VR video resolution higher than 8K, I'm still learning myself.
Now that we'll finally have 16K VR cameras like the "Blackmagic URSA Cine Immersive" (which, ironically, also uses dual 8K sensors to achieve this), everyone across various industries will be pushed to invest in, develop, and advance 16K @60 FPS VR encoding and decoding, in both software and hardware, for sure. That will make the encoding/decoding process less complex for the user and less taxing on the hardware.
Best Regards.
The AVP and, recently, the RTX 5000 line added hardware support for MV-HEVC, a format basically MADE for VR. My guess is that that's the future, not AV1. It might take some time before enough devices support it, though, so I get why we have AV1 at the moment. But I think it would have been a more interesting development if they had added MV-HEVC instead (even though I can't even play it yet). Bet the next Quest etc. will support this too.
"MV-HEVC is an extension of the High Efficiency Video Coding (HEVC) standard, designed to efficiently compress multiple video views of the same scene captured from different vantage points. It addresses the limitations of traditional video-coding methods such as simulcast encoding, which often resulted in high bitrates for multi-view content and lacked efficient inter-view prediction."