Since we began showing lots of demonstrations of our content in the Samsung Gear VR, we’ve noticed that a lot of people don’t really understand the difference between what’s stereoscopic 3D and what isn’t. Because of the nature of our content, everything in our library to date is monoscopic 360 video. In many situations, stereoscopic 3D in VR can add a lot of coolness factor to a production, but for some use cases it doesn’t work very well.
iZugar camera mount for stereo pairs of GoPros (photo from iZugar)
What is the Difference?
A standard 360 video is just a flat equirectangular video displayed on a sphere. Think of it like the face of a world map on a globe, but with VR your head is on the inside of the globe looking at the inner surface. As you move, the head tracking on your device moves with you, giving you that feeling like you are inside the scene.
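The sphere mapping described above can be sketched in a few lines. This is an illustrative example, not code from any particular player: it maps a normalized pixel coordinate on the flat equirectangular frame to the direction on the viewing sphere that the headset would sample when your head points that way.

```python
import math

def equirect_to_direction(u, v):
    """Map normalized equirectangular coords (u, v in [0, 1]) to a
    unit direction vector (x, y, z) on the viewing sphere.
    u sweeps 360 degrees of longitude, v sweeps 180 degrees of latitude."""
    lon = (u - 0.5) * 2.0 * math.pi   # -pi (left) .. +pi (right)
    lat = (0.5 - v) * math.pi         # +pi/2 (up) .. -pi/2 (down)
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return (x, y, z)

# The center of the frame maps to the forward viewing direction:
print(equirect_to_direction(0.5, 0.5))  # → (0.0, 0.0, 1.0)
```

Head tracking then amounts to choosing which of these directions fills the display at any instant, which is why a single flat video can feel like a scene surrounding you.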
Stereoscopic 3D can add another level of immersion by adding depth data between the foreground and background. Your favorite 3D blockbuster films are typically shot with two lenses side by side, giving each eye a slightly different vantage point. Like any production technique, this can look strange if poorly implemented, or absolutely amazing if done right.
The Challenge of Stereoscopic 3D in VR
With stereoscopic 3D in VR, that depth information has to be overlaid and mapped onto a sphere. Because of parallax between cameras, this can be especially challenging. Any minor flaws or “stitch seams” in the footage are magnified in 3D, and sometimes anomalies occur in different places per eye - which makes it uncomfortable to watch.
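A quick back-of-the-envelope calculation shows why nearby subjects are the problem. The angular disparity between two cameras separated by a baseline shrinks rapidly with distance, so distant backgrounds stitch cleanly while close subjects crossing a seam produce mismatches the stitcher can't hide. The numbers below are illustrative only (a roughly eye-distance baseline of 64 mm), not measurements from any specific rig.

```python
import math

def angular_disparity_deg(baseline_m, distance_m):
    """Angular disparity between two cameras separated by baseline_m,
    both converging on a point distance_m away (simple triangulation)."""
    return math.degrees(2.0 * math.atan((baseline_m / 2.0) / distance_m))

ipd = 0.064  # ~64 mm baseline, roughly a typical interpupillary distance
for d in (0.5, 2.0, 10.0, 100.0):
    # Nearby subjects: several degrees of disparity; far backgrounds: near zero.
    print(f"{d:6.1f} m -> {angular_disparity_deg(ipd, d):.3f} deg")
```

At half a meter the two cameras disagree by several degrees; at a hundred meters the disagreement is a few hundredths of a degree, which is why stitch errors concentrate on whatever walks close to the rig.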
Poorly implemented 3Dx360 video footage can cause a great deal of discomfort to the viewer, including headaches, eye strain, or nausea. Beyond the physical, there can also be production quality issues. Objects and people in poorly implemented stereoscopic 3D can look like crude cardboard cutouts, and chromatic aberration becomes very apparent. Chromatic aberration is that magenta or green “fringe” you see on the edges of objects through some lenses; it’s an optical flaw that we find is much more noticeable in stereoscopic 3D.
Perfecting 3D for VR by Limiting Variables
Flaws in stereoscopic 3Dx360 video can be avoided by shooting in a controlled environment. If your actors or subjects are instructed to remain a specific distance from the camera and stay stationary within the same quadrant of the shot, you avoid having them cross the stitching seams. By keeping the camera locked down, a static scene can serve as the backdrop for the 360 itself, while 3D subjects are composited into the footage. And of course, you could rotoscope every single frame to remove anomalies as well.
So typically the best 3D video content we see for VR is very controlled and static. Our favorite stereoscopic 3D for VR came with the release of the Samsung Gear VR, from Felix & Paul Studios - it’s the most compelling and comfortable 3Dx360 we’ve seen in VR, though the majority of those shots feel much the same. Could new camera technology help make shots like these dynamic, with motion?
Because of these limitations, in my opinion 3Dx360 is not an ideal fit for news gathering, live events, sports, extreme sports, or any situation with lots of variables and moving parts. You’re going to make people uncomfortable trying to watch content like that in 3D. Some producers simply limit the FOV by shooting a 180x120 shot with what appear to be two forward-facing cameras with 180° lenses. In this case you have some limited head tracking to look around, but you can very easily find the boundaries.
Example of a Red stereoscopic setup (photo from MTBS)
A lot of the content in 360 Labs’ video library would not be possible in stereoscopic 3D, so we focus on making our monoscopic 360 shots as high quality as possible - at least until we can perfect a shot with motion and variables in quality 3D. But perhaps even that won’t be good enough until the next generation of displays comes out.
The Case for Monoscopic
With a monoscopic video, you get more resolution out of the device because you don’t have to stack the left and right channels (or top and bottom channels) within the phone’s limited resolution. Oculus CTO John Carmack explains, “People that are resolution-picky will probably prefer monoscopic videos, which can have twice the resolution of stereo videos. The stereo effect may not be worth anything to you if you can't get past the blurring.”
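Carmack’s point is easy to make concrete. Within a fixed video canvas the device can decode, packing two eyes into one frame halves the pixels each eye gets. The 3840x1920 canvas below is a hypothetical example size, not a spec for any particular phone:

```python
# Hypothetical example: a phone that can decode a 3840x1920 video frame.
CANVAS_W, CANVAS_H = 3840, 1920

def per_eye_resolution(canvas_w, canvas_h, layout):
    """Pixels available to each eye for a given frame packing layout."""
    if layout == "mono":          # whole frame, shared by both eyes
        return canvas_w, canvas_h
    if layout == "top-bottom":    # stereo, eyes stacked vertically
        return canvas_w, canvas_h // 2
    if layout == "side-by-side":  # stereo, eyes stacked horizontally
        return canvas_w // 2, canvas_h
    raise ValueError(f"unknown layout: {layout}")

print(per_eye_resolution(CANVAS_W, CANVAS_H, "mono"))        # → (3840, 1920)
print(per_eye_resolution(CANVAS_W, CANVAS_H, "top-bottom"))  # → (3840, 960)
```

Either stereo packing costs each eye half its pixels in one dimension, which is the blur Carmack is describing.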
As I’ve already mentioned, a lot of our viewers get confused about the difference between true stereoscopic 3D and monoscopic footage. Oftentimes they’re fooled into thinking they’re looking at a 3D video. In a way it is 3D, because it’s projected onto a sphere, but it has no depth information between background and foreground. After hundreds of tests and demos, I don’t think the general public even cares.
Most of the time, our goal at 360 Labs with virtual reality is to capture real life experiences. But how real is the experience when every shot is completely static and you have to instruct your subjects on where they can stand and where they can move? This doesn’t happen in the middle of a raft on the Colorado River in the Grand Canyon. This doesn’t happen on the back of a kiteboarder, or speed flying down a mountain.
While I’m sure that we’ll continue to research and test stereoscopic live action 3D footage for VR, for now it really doesn’t make sense for the majority of our projects. We want to see the beautiful places we go to in as much resolution as possible. But who knows what the future holds; displays, resolutions, processing power and bandwidth will only get better. We look forward to testing the future of VR!