March 2017

Surveying the Virtual World

Industry experts share their insights into virtual reality’s evolving production landscape. 




Virtual Reality as a technology and an industry has enjoyed a renaissance over the past few years as digital processing power and sensor technology are finally catching up with sci-fi-driven aspirations. The tools of the trade — software and hardware alike — remain in constant flux, with yesterday’s breakthroughs quickly becoming today’s standards. In separate interviews with a cross section of VR experts, AC asked about the technology, its challenges, audience demand, and where things may ultimately be headed.

The assembled experts were: 

Tim Alexander, lead visual-effects supervisor for new media at ILMxLab. Recent projects include a Star Wars collaboration with augmented-reality (AR) startup Magic Leap; he is currently working on a VR short directed by Alejandro Iñárritu and produced by Legendary Pictures.

Michael Mansouri, an ASC associate and co-founder of Radiant Images, a Los Angeles-based production house and developer of custom VR camera rigs such as the Mobius POV VR 360 camera system.

Sean Safreed, co-founder of Pixvana, a Seattle software startup that recently released Spin, a comprehensive cloud-processing postproduction solution for VR ingest, image stitching and distribution.

Ian Spohr, co-founder, and Evan Pesses, partner, at The Astronauts Guild. Recent VR projects include the Golden State Warriors’ 360-degree fan experience and virtual motorcycling experiences for Kawasaki.

Robert Stromberg, co-founder and chief creative officer at The Virtual Reality Co. He is a veteran visual-effects supervisor who helped develop groundbreaking 3D technology for Avatar.

David Stump, ASC, a longtime proponent of next-generation production technology. Recent projects include Life, the first short film shot with the Lytro Cinema light-field volumetric capture system (AC Aug. ’16).

Malte Wagener, head of Deluxe VR. In addition to providing a variety of creative support services, Deluxe VR produces original content including Remembering Pearl Harbor, a collaboration with Time’s Life VR and HTC Vive. 

American Cinematographer: What do you think about the technology of VR as a medium, including recent advancements, current limitations and the exciting potential you foresee in the near and distant future?

Tim Alexander: Real-time VR technology is very functional now but also a little frustrating creatively. In traditional visual effects, a frame might take 20 hours to render, which is not great, but you can spend those 20 hours if you need to. We aim for 90 frames per second for our real-time VR pieces, which gives us 11 milliseconds to render each frame — and there’s no budging from that. 

VR has been around for a really long time, but we’re finally at the point where the hardware is available to most people and can render creative, interesting experiences within those 11 milliseconds. Coming out of visual effects at ILM, xLab is a storytelling division that’s going for high fidelity. We want everything to look as good as possible within that timeframe.
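
The arithmetic behind that frame budget is straightforward. The minimal Python sketch below is purely an illustration, not anything drawn from ILMxLab’s pipeline; it works out the per-frame time at a 90-fps target and contrasts it with a hypothetical 20-hour offline render.

```python
# Illustrative only: the per-frame time budget implied by a real-time
# VR target of 90 frames per second, versus an offline film-VFX frame
# that might take 20 hours to render.

TARGET_FPS = 90
frame_budget_ms = 1000.0 / TARGET_FPS      # roughly 11.1 ms per frame

offline_hours = 20                          # hypothetical offline render time
offline_ms = offline_hours * 60 * 60 * 1000

print(f"Real-time budget per frame: {frame_budget_ms:.1f} ms")
print(f"Offline render per frame:   {offline_hours} hours "
      f"(about {offline_ms / frame_budget_ms:,.0f} times the real-time budget)")
```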

Sean Safreed: Part of it is resolution and showing just what you need in the current field of view. If you want a real sense of immersion, it needs to look at least as good as HD. We’re trying to solve that with our adaptive streaming — and in more than 4K. Soon we’ll move from mobile-derived displays to next-generation displays. Instead of 1K or 1,200 pixels per eye, you’ll have 2K resolution per eye. You really need 8K of source resolution to fill the headset display.

My sense down the road is we’ll see something spherical like the Nokia Ozo [camera system], but instead of having only a few cameras, you’ll have a lot of cameras on a spherical surface, and you’ll computationally stitch the final image back together from all of these tiny sensors, which will give you tremendously higher resolution. You’ll no longer be stuck with just a set of 2K sensors each with a giant fisheye field of view, because now you’ll have dozens of these sensors in a sphere. You’ll be able to compensate for the stitching errors and parallax problems because there’s so much data overlapping in that field of view. That’s my prediction of what we’ll see in three to five years: a sphere encrusted with little image sensors all getting stitched within the device into a perfect image with color and depth.
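
A rough back-of-envelope check on that 8K figure, assuming a horizontal field of view of about 100 degrees per eye and a full 360-degree equirectangular source frame (both assumptions made for illustration, not figures from the interview):

```python
# Why a 2K-per-eye headset wants roughly 8K of source resolution.
# Assumed values: ~100-degree horizontal field of view per eye and a
# full 360-degree equirectangular master frame.

display_px_per_eye = 2000        # ~2K horizontal pixels per eye
fov_degrees = 100                # assumed horizontal FOV per eye

pixels_per_degree = display_px_per_eye / fov_degrees   # ~20 px per degree
source_width_px = pixels_per_degree * 360              # full sphere, horizontally

print(f"Display density:     {pixels_per_degree:.0f} pixels per degree")
print(f"Source width needed: {source_width_px:.0f} pixels (roughly 8K)")
```

Under those assumptions the math lands at about 7,200 pixels across the full sphere, which in practice means mastering at an 8K width.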

Ian Spohr: With VR, you’re changing rigs like you used to change lenses, from studio modes to smaller, more modular cameras when you need the mobility. You’re also fighting the laws of physics. Digital cinema is catching up to film, where a bigger sensor means better image quality, but in VR we need smaller systems that can achieve that same image quality. Just to be able to look around with a good composition and dramatic elements means we’re shooting VR with camera systems that range from the size of a softball to the size of a go-kart.

Michael Mansouri: A major limitation comes not just from the camera side but from the headset technology. You actually feel the softness, or ‘screen-door effect,’ in the image, which comes from looking through a diopter at a mobile-device screen that reveals only around 9 pixels per degree of resolution. A motion-picture film scan is up to 50 pixels per degree, while the best VR headset on the market today is about 14 pixels per degree. To have the same visual sharpness and clarity, we need 24 to 27 pixels per degree. We’re not there yet, but we’re going to get there; it’s just a matter of time.

Postproduction is also a massive limitation. That’s improving with a lot of amazing technology to speed up and automate that process, so not every frame requires a visual-effects fix. New technology will also enable filmmakers to upload VR content into a cloud to computationally stitch without a lot of manual labor. 
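
For context on the pixels-per-degree figures Mansouri cites, angular resolution is simply the number of pixels one eye sees divided by the field of view those pixels cover. The sketch below uses assumed headset numbers, not specifications quoted in the interview, to show how the calculation works.

```python
# Angular resolution in pixels per degree: horizontal pixels devoted to
# one eye divided by the horizontal field of view those pixels span.
# The example figures below are assumptions for illustration only.

def pixels_per_degree(px_per_eye: float, fov_degrees: float) -> float:
    """Pixels per degree of field of view for a single eye."""
    return px_per_eye / fov_degrees

examples = {
    "phone screen in a viewer (assumed 960 px over 105 degrees)": (960, 105),
    "current high-end headset (assumed 1,440 px over 100 degrees)": (1440, 100),
}

for label, (px, fov) in examples.items():
    print(f"{label}: {pixels_per_degree(px, fov):.1f} px/degree")

# Mansouri's reference points: roughly 9 px/degree for mobile screens,
# about 14 for the best current headset, 24 to 27 for parity with normal
# visual sharpness, and up to 50 for a motion-picture film scan.
```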

Malte Wagener: The challenge on the interactive VR side is technological, and with 360-degree video it’s more intellectual. For 360, how do you keep the audience engaged after their first, second and third 360 experiences? You’re showing people a certain perspective for two or three minutes for most 360 experiences; how you tell a story is an interesting problem to solve. There’s a lot of collaboration between the new school of VR filmmakers and the traditional filmmaking crowd to evolve this medium into something interesting.

For interactive VR, the challenge is more, ‘How do you make something photorealistic for, say, 90 seconds or more?’ You also have to find effective ways to introduce interactivity and adapt that experience to the ways humans are used to interacting. Another challenge is delivering visual appeal with a high degree of polish to an audience using today’s hardware and headsets.

 
