I recall a discussion about this topic from some months ago.
AI upscaling for VR has potential. However, from what I've been told (and it seems logical), any visible improvement would require improving the underlying upscaling models themselves, trained specifically on VR content.
One would have to begin with the studio master or the original recording at the highest available resolution, so the footage doesn't suffer generational degradation from repeated passes through the various encoding formats, most of which are lossy rather than lossless.
A company like Topaz would then have to re-train its AI on images captured with a fisheye lens, so the model learns the distortion it will actually encounter in VR footage.
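Just to make the retraining idea concrete, here's a rough sketch (not Topaz's actual pipeline, and the paths, scale factor, and JPEG quality are made-up placeholders) of how one might build low-res/high-res training pairs from lossless master frames of fisheye VR footage, so an upscaler could be fine-tuned on that specific distortion:

```python
# Sketch: generate LR/HR training pairs from lossless fisheye master frames.
# Assumes frames were already extracted from the studio master as PNGs.
import glob
import os

import cv2

SCALE = 2  # hypothetical upscaling factor the model is being tuned for

os.makedirs("pairs/lr", exist_ok=True)
os.makedirs("pairs/hr", exist_ok=True)

for path in glob.glob("master_frames/*.png"):   # lossless frames, fisheye projection intact
    hr = cv2.imread(path)                       # high-res target
    h, w = hr.shape[:2]

    # Simulate the degraded input the model will see in the wild:
    # downscale, then round-trip through a lossy codec (JPEG as a stand-in)
    # so the model learns to undo both the resolution loss and the compression.
    lr = cv2.resize(hr, (w // SCALE, h // SCALE), interpolation=cv2.INTER_AREA)
    ok, buf = cv2.imencode(".jpg", lr, [cv2.IMWRITE_JPEG_QUALITY, 75])
    lr = cv2.imdecode(buf, cv2.IMREAD_COLOR)

    name = os.path.basename(path)
    cv2.imwrite(f"pairs/lr/{name}", lr)
    cv2.imwrite(f"pairs/hr/{name}", hr)
```

The point is simply that the degradation is simulated on fisheye imagery rather than flat footage, which is what (as I understand it) would make the retrained model worthwhile for VR.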