Netflix and Oculus announced yesterday that they had been collaborating to build a VR movie-watching application. The app, written by 3D programming legend John Carmack, places viewers in a virtual living room and accounts for the viewing angles a head-mounted display requires.

At Oculus Connect, a developer event hosted by Oculus, Carmack, the company’s CTO and co-creator of Doom and Quake, took the stage to demonstrate his work on the app. He later detailed his development process in a blog entry on Netflix’s site.

(Related: Carmack develops Scheme scripting language for Oculus)

Rather than simply stick video right in front of users’ noses, Carmack opted to build out a virtual 3D space and to place a big TV at the center of it.
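
To make that concrete, here is a small sketch (not from the app itself; the 60-degree angular size mentioned later in Carmack’s quote and the 2.5 m viewing distance are assumptions for illustration) of how large a virtual screen has to be to fill a given slice of the viewer’s field of view:

```c
#include <math.h>
#include <stdio.h>

/* Illustrative only: how large a virtual TV quad must be to cover a given
 * horizontal angle at a given viewing distance, assuming a 16:9 screen.
 * The 60-degree angle and 2.5 m distance are assumptions for the example,
 * not values taken from the app. */
int main(void)
{
    const double PI       = 3.14159265358979323846;
    const double fov_deg  = 60.0;   /* horizontal angle the screen should cover */
    const double distance = 2.5;    /* meters from the viewer to the screen     */

    double half_angle = (fov_deg / 2.0) * PI / 180.0;
    double width  = 2.0 * distance * tan(half_angle);
    double height = width * 9.0 / 16.0;

    printf("A %.0f-degree screen at %.1f m is %.2f m x %.2f m\n",
           fov_deg, distance, width, height);
    return 0;
}
```

At those numbers the screen works out to nearly three meters wide, far larger than a physical living-room TV.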

The app is aimed at users of the newly introduced US$99 VR goggle attachment for Samsung phones, the Gear VR. This device turns a user’s phone into a VR headset, complete with head strap.

Carmack explained in his blog post that the actual job of building a virtual TV set for VR users was not exactly cut and dried. “The Netflix UI is built around a 1280×720 resolution image. If that was rendered to a giant virtual TV covering 60 degrees of your field of view in the 1024×1024 eye buffer, you would have a very poor-quality image as you would only be seeing a quarter of the pixels. If you had MIP maps, it would be a blurry mess, otherwise all the text would be aliased fizzing in and out as your head made tiny movements each frame,” he wrote.
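
The arithmetic behind the “quarter of the pixels” figure is worth spelling out. Assuming the 1024×1024 eye buffer covers roughly 90 degrees (an approximation; the exact field of view isn’t stated here), a 60-degree screen only spans about 680 of those pixels across, against 1,280 pixels of UI:

```c
#include <stdio.h>

/* Back-of-the-envelope version of Carmack's "quarter of the pixels" point.
 * The ~90-degree eye-buffer field of view is an approximation for Gear VR;
 * the other numbers come from the quote above. */
int main(void)
{
    const double ui_width       = 1280.0;  /* Netflix UI resolution (width)  */
    const double eye_buffer_px  = 1024.0;  /* eye buffer is 1024x1024        */
    const double eye_buffer_fov = 90.0;    /* approximate degrees covered    */
    const double screen_fov     = 60.0;    /* degrees the virtual TV covers  */

    /* Roughly how many eye-buffer pixels the TV spans horizontally. */
    double screen_px = eye_buffer_px * (screen_fov / eye_buffer_fov);
    double scale     = screen_px / ui_width;   /* per-axis downscale          */

    printf("TV spans ~%.0f eye-buffer pixels for a %.0f-pixel-wide UI\n",
           screen_px, ui_width);
    printf("That keeps ~%.0f%% of pixels per axis, ~%.0f%% overall\n",
           scale * 100.0, scale * scale * 100.0);
    return 0;
}
```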

“The technique we use to get around this is to have special code for just the screen part of the view that can directly sample a single textured rectangle after the necessary distortion calculations have been done, and blend that with the conventional eye buffers. These are our ‘Time Warp Layers.’ This has limited flexibility, but it gives us the best-possible quality for virtual screens (and also the panoramic cube maps in Oculus 360 Photos). If you have a joypad bound to the phone, you can toggle this feature on and off by pressing the start button. It makes an enormous difference for the UI, and is a solid improvement for the video content.”
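
The following is not the SDK’s actual Time Warp Layer code, but a simplified sketch of the single-resampling idea it describes: map each output pixel through the distortion, then fetch the screen texture once, rather than reading an eye buffer that already holds a resampled copy of it. All names and types here are illustrative stand-ins:

```c
#include <stdio.h>

/* Simplified sketch of the "sample the screen directly" idea behind the
 * Time Warp Layers Carmack describes: instead of reading the eye buffer,
 * which already holds a resampled copy of the UI, the compositor maps the
 * output pixel through the lens distortion and, if it lands on the virtual
 * screen rectangle, fetches the UI texture in a single sampling step.
 * Everything here is an illustrative stand-in, not Oculus SDK API. */

typedef struct { float u, v; } UV;

/* Stand-in for the lens-distortion mapping from an output pixel to
 * normalized coordinates on the virtual screen plane (identity here). */
static UV distort_to_screen_plane(int x, int y, int out_w, int out_h)
{
    UV uv = { (float)x / (float)out_w, (float)y / (float)out_h };
    return uv;
}

/* Single fetch from the 1280x720 UI texture (nearest-neighbor to keep the
 * sketch short; the real path would filter). */
static float sample_ui(const float *ui, int ui_w, int ui_h, UV uv)
{
    int x = (int)(uv.u * (ui_w - 1));
    int y = (int)(uv.v * (ui_h - 1));
    return ui[y * ui_w + x];
}

int main(void)
{
    enum { UI_W = 1280, UI_H = 720, OUT_W = 1024, OUT_H = 1024 };
    static float ui[UI_W * UI_H];      /* pretend UI luminance texture   */
    static float out[OUT_W * OUT_H];   /* composited output for one eye  */

    for (int i = 0; i < UI_W * UI_H; i++) ui[i] = (float)(i % 255) / 255.0f;

    for (int y = 0; y < OUT_H; y++) {
        for (int x = 0; x < OUT_W; x++) {
            UV uv = distort_to_screen_plane(x, y, OUT_W, OUT_H);
            /* The conventional path would read the eye buffer here; the
             * layer path samples the UI texture exactly once instead. */
            out[y * OUT_W + x] = sample_ui(ui, UI_W, UI_H, uv);
        }
    }

    printf("center output sample: %.3f\n", out[(OUT_H / 2) * OUT_W + OUT_W / 2]);
    return 0;
}
```

The quality win comes from eliminating the second resampling step: the UI text is filtered once, at display resolution, rather than being filtered into the eye buffer and filtered again by the distortion pass.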

That was not the end of the display problems, either. Hollywood essentially mandates that streaming video be decoded into a protected region of memory so viewers cannot capture the stream and save the film. It’s copy protection.

“The problem for us is that to draw a virtual TV screen in VR, the GPU fundamentally needs to be able to read the movie surface as a texture,” wrote Carmack. “On some of the more recent phone models, we have extensions to allow us to move the entire GPU frame buffer into protected memory and then get the ability to read a protected texture, but because we can’t write anywhere else, we can’t generate MIP maps for it. We could get the higher resolution for the center of the screen, but then the periphery would be aliasing, and we lose the dynamic environment lighting effect, which is based on building a MIP map of the screen down to 1×1. To top it all off, the user timing queue to get the audio synced up wouldn’t be possible.”
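
What the MIP chain down to 1×1 provides is, in effect, the average color of the current movie frame, which the app uses to light the virtual room. The sketch below shows that idea on the CPU (the frame data is invented for the example; on the GPU the same result falls out of generating the mip chain, which is exactly the write that protected memory forbids):

```c
#include <stdio.h>

/* Sketch of what the "MIP map of the screen down to 1x1" buys you: the
 * bottom mip level is just the average color of the movie frame, which the
 * app can use to tint the virtual room so the lighting follows the film.
 * On the CPU that is a simple box filter; the frame contents here are
 * made up for the example. */

typedef struct { float r, g, b; } Color;

static Color average_color(const Color *frame, int w, int h)
{
    Color sum = { 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < w * h; i++) {
        sum.r += frame[i].r;
        sum.g += frame[i].g;
        sum.b += frame[i].b;
    }
    float n = (float)(w * h);
    sum.r /= n; sum.g /= n; sum.b /= n;
    return sum;
}

int main(void)
{
    enum { W = 64, H = 36 };            /* stand-in for a decoded frame */
    static Color frame[W * H];

    for (int i = 0; i < W * H; i++) {   /* fake a mostly-blue frame     */
        frame[i].r = 0.1f; frame[i].g = 0.2f; frame[i].b = 0.7f;
    }

    Color env = average_color(frame, W, H);   /* the "1x1 mip" color    */
    printf("room lighting tint: %.2f %.2f %.2f\n", env.r, env.g, env.b);
    return 0;
}
```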