The Samsung Developer Conference was a first-time experience for me this year. We normally don't attend developer conferences, since 360 Labs is a production company focused on the creative side, but we were very interested in Samsung's advancements in cameras and the VR ecosystem. After all, the Samsung Gear VR is our favorite HMD for showing our content. I learned a lot about what Samsung is doing with Milk VR and how they intend to help both professional and consumer contributors distribute content, and got a great look at footage and samples from the Project Beyond camera. Here are a few highlights from my trip.
DJ Koh, head of mobile at Samsung, kicked off the event with some incredible stats. Samsung predicts more than 21 billion smart devices by the year 2020, and there are currently 3.4 billion smartphones activated, 80% of them running Android. This is a huge audience ready for viewing 360 video and mobile VR content. More than 2 million hours of video content have already been consumed within Samsung Gear VR.
A Full Ecosystem for VR Production
Samsung announced a full ecosystem for VR including content creation, distribution and consumption. The one huge step they seem to have overlooked is editing. I know that editing 360 video content is difficult for consumers, but we can probably all agree that without video stabilization and at least some basic editing, we’re going to have a lot of absolutely terrible content that will make a lot of people motion sick.
However, the concept hasn’t been entirely overlooked. Samsung was promoting a new app called Unclip, which would allow you to easily edit footage from your phone. A 360 video editing feature will be offered in the future. Release dates were not announced. With any luck, maybe they’ll look at providing some stabilization as well.
GoPro seems to have Samsung beat in the editing department, with its recent purchase of Kolor, a pioneer in video stitching and stabilization. To be fair, though, GoPro's 360 camera solutions may still be daunting and unaffordable for the consumer market.
The Gear 360
Everyone at the conference who paid full ticket price received a free Gear 360 camera. With the release date in some countries this Friday, April 29th, 2016, there are likely already thousands of these cameras out there capturing shots right now!
The camera boasts 30 MP stills, f/2.0 lenses, and shoots both 360 video and photos. Photos offer an HDR mode plus adjustable white balance and ISO. You can also live-preview footage directly on your S7 (but not an S6). The camera can shoot with one lens at a time as well, a great feature if you want to shoot panoramic photos while keeping yourself out of the image. Video content can be published directly to Samsung's Milk VR platform. We expect the public-facing portions of Milk VR will still be curated, but your own videos will be instantly accessible and sharable via the platform.
The Gear 360's price is right for consumers, at around $400. But the stitching is very rough, if you can even call it stitching; it seems to be crudely slapped together. That won't be a concern for most consumer use cases, however.
The Milk VR SDK
One of the most exciting discoveries for me at SDC was the set of features coming to Milk VR. This may be old news to those of you who have partnered with Samsung and worked on special projects with the Milk VR team, but all of these features are finally becoming available to all producers.
The big one is interactive hot spots, allowing the viewer to navigate to different parts of the story simply by gazing at a hot spot during a predetermined time window. This feature was seen in the series "Gone," but it will soon be made available to all content creators and configurable through the upload interface. Milk VR will be the first to introduce this feature, beating both YouTube and Facebook to the punch.
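Samsung hasn't published the mechanics of its hot spot system, but gaze-activated hot spots in VR players generally come down to a simple check: is the viewer's forward vector within some angular threshold of the hot spot's direction, during the hot spot's active window, for a long enough dwell time? Here's a minimal sketch of that idea; the class name, thresholds, and update loop are all my own assumptions, not Milk VR's API.

```python
import math
import time

class GazeHotspot:
    """Hypothetical gaze-activated hot spot: triggers once the viewer's
    (unit) gaze vector stays within an angular threshold of the hot
    spot's direction, during its active time window, for a dwell period."""

    def __init__(self, direction, start_s, end_s,
                 angle_deg=10.0, dwell_s=1.5):
        mag = math.sqrt(sum(c * c for c in direction))
        self.direction = tuple(c / mag for c in direction)  # normalize
        self.start_s, self.end_s = start_s, end_s  # active window in the video
        self.cos_threshold = math.cos(math.radians(angle_deg))
        self.dwell_s = dwell_s
        self._gaze_since = None  # wall-clock time gaze first landed

    def update(self, gaze_dir, video_time_s, now=None):
        """Call every frame; returns True once when the hot spot fires."""
        now = time.monotonic() if now is None else now
        in_window = self.start_s <= video_time_s <= self.end_s
        # cosine of the angle between the gaze and the hot spot direction
        cos_angle = sum(g * d for g, d in zip(gaze_dir, self.direction))
        if in_window and cos_angle >= self.cos_threshold:
            if self._gaze_since is None:
                self._gaze_since = now          # start the dwell timer
            elif now - self._gaze_since >= self.dwell_s:
                self._gaze_since = None
                return True                     # navigate to the branch
        else:
            self._gaze_since = None             # gaze left; reset
        return False
```

The dwell timer is the important part: without it, a viewer merely glancing across a hot spot would be yanked into a different story branch.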
Live streaming 360 content directly to the platform will also be made available soon, but even now you can play your own hosted live streams through the Milk VR interface; you just give Milk VR a URL for your streaming content. In the future you will also be able to live stream directly to the platform with Samsung's cameras.
The Milk VR Upload SDK will allow camera manufacturers to create a Milk VR publishing account on the fly and make use of the same direct upload features the Samsung Gear 360 will use. This means any 360 camera could have easy access to upload content directly from the camera to the platform.
For those of you who demo content in Milk VR, you'll be excited to hear that you will be able to brand the app's landing page by supplying your own cube faces for a panoramic 360 background, something already seen for partners such as CNN. A retail mode will also allow Milk VR to start on your own channel page, skipping the other categories and showing a view limited to only your content. This will be perfect for installations, retail demos, and content creators.
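If you've only ever worked with equirectangular panoramas, "supplying cube faces" may sound mysterious. Each of the six faces is just a resampling of the sphere: for every face pixel you compute its direction vector, convert that to latitude/longitude, and look up the matching equirectangular pixel. Below is a sketch of that mapping for the front (+Z) face; Milk VR's actual face layout, naming, and resolution requirements aren't documented here, so treat the details as assumptions.

```python
import math

def cube_face_to_equirect(face_size, eq_w, eq_h):
    """For the front (+Z) face of a cubemap, compute which pixel of an
    equirectangular panorama each cube-face pixel should sample.
    Returns a face_size x face_size grid of (x, y) source coordinates.
    (Sketch only; Milk VR's actual cube layout is an assumption here.)"""
    mapping = []
    for j in range(face_size):
        row = []
        for i in range(face_size):
            # cube-face pixel center -> direction vector on the +Z face
            x = 2.0 * (i + 0.5) / face_size - 1.0
            y = 2.0 * (j + 0.5) / face_size - 1.0
            z = 1.0
            # direction -> spherical angles
            lon = math.atan2(x, z)                  # -pi .. pi
            lat = math.atan2(-y, math.hypot(x, z))  # -pi/2 .. pi/2
            # spherical angles -> equirectangular pixel coordinates
            ex = int((lon / math.pi + 1.0) * 0.5 * (eq_w - 1))
            ey = int((0.5 - lat / math.pi) * (eq_h - 1))
            row.append((ex, ey))
        mapping.append(row)
    return mapping
```

The other five faces work the same way with the direction vector rotated; tools like Kolor's or ffmpeg's v360 filter do this resampling (with proper interpolation) for you.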
For the first time publicly, they also released details on how to do synchronized playback to multiple Gear VR headsets running Milk VR. This allows you to start content at the same exact time on multiple headsets, much like Samsung does in the rollercoaster simulator they have been bringing to trade shows.
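Samsung didn't share implementation details beyond the fact that it's possible, but multi-headset sync schemes typically work by agreeing on a master clock, estimating each device's clock offset with an NTP-style round trip, and scheduling a start time in the master's clock. The sketch below shows that logic; the function names and the transport stand-in are my own assumptions, not Milk VR's API.

```python
import time

def measure_offset(request_master_time, clock=time.monotonic):
    """Estimate the offset between this device's clock and the master's,
    NTP-style: assume the master's timestamp was taken halfway through
    the round trip. `request_master_time` is a stand-in for whatever
    transport (e.g. a socket on the local network) the app would use."""
    t0 = clock()
    master_t = request_master_time()  # master's clock reading
    t1 = clock()
    midpoint = (t0 + t1) / 2.0
    return master_t - midpoint        # add to local time -> master time

def seconds_until_start(start_master_time, offset, clock=time.monotonic):
    """How long this headset should wait before hitting play, given a
    start time expressed in the master's clock."""
    local_now_in_master = clock() + offset
    return max(0.0, start_master_time - local_now_in_master)
```

Each headset measures its offset, receives the agreed start time, sleeps for `seconds_until_start(...)`, and begins playback; everyone's rollercoaster drops at the same moment.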
Inputs, Inputs, Inputs
There was lots of great discussion around input devices for VR and how we will control and navigate within content. Many of the controls previously seen only on an Oculus CV1 or an HTC Vive are coming to mobile VR very soon. But they might not all look like what you'd expect.
New startups are pioneering advancements in gesture and hand recognition. One such company is Presto, which is developing solutions to detect gestures from your smartwatch and integrate them with the mobile VR display in your headset.
The Gear VR is also getting an open-source framework for application development (GearVRf). By this May or June, they'll open-source 3D cursors and gesture controls that will allow users to interact with content in Gear VR using input controllers, much like the HTC Vive and others. Find out more on their website.
But none was as impressive as Leap Motion. David Holz, CTO and founder, gave an amazing talk about where Leap Motion is now and where it's going. I first looked into Leap Motion and bought a dev kit when the demos were quite rudimentary, but the basic concept was there. They've come a long way since then: the hand recognition is now very accurate and extremely fast. The potential here is that you don't need an additional physical input device to navigate in VR. Holz showed a demo in which a menu could literally pop out from the palm of your hand, with the fingers becoming natural "tabs" in the menu interface. Even a wrist can be recognized as an anchor point for a menu UI.
Content Creators talk Storytelling and Workflow
I have to give kudos to Eric Darnell (director of DreamWorks' Antz and Madagascar) for an incredible talk. He showed a new trailer for his animated VR short "Invasion," which I'd highly recommend checking out on the Milk VR platform if you haven't. Eric started his presentation with a slide that almost made me laugh out loud, entitled "VR is not film." It's a brand new medium. Nobody knows anything. Don't believe them if they say they do.
I couldn’t agree more. Eric essentially went through a list of “Don’ts” that have been thrown around in the VR content creator communities and debunked a whole lot of them. We’re all just figuring this out. All we can do is try new things, test them, and try again. He shared a lot of his experience in working on “Invasion.”
First, we all talk about losing control of our audience: we can't control where they look. Or can we? If our subject matter is compelling and our characters are interesting, viewers will actually want to follow them rather than look elsewhere. The power of eye contact is huge, especially in VR. It makes us feel connected with the characters, but it also leads us to look at other things: when a character turns to look away, we want to know what they're seeing over there.
Pacing, too, is a function of the viewer's interest. We've all heard people say you should avoid quick cuts in VR; I've even said it myself in criticism of some music video content. But we're learning that if the elements of the story support it, cuts can enhance the experience and are not jarring if done right. This reminded me of a recent test published by Nick Bicanic, which is definitely worth a read if you are interested in editing VR content.
Finally, he had some great notes on audio. The most interesting one was about the volume of audio cues: when they were exaggerated, many more viewers followed the story rather than looking away. They also experimented with spatialized music, but found it too distracting. In circumstances where you are actually listening to musicians it might make sense, but for a narrative with characters it felt more natural to just use a stereo track.
I also caught a panel with several folks from news organizations, including Ed Thomas (CNN), Ted Schilowitz (Fox), Sam Dolnick (New York Times), Niko Chauls (USA Today), and moderator Tom Standage (The Economist). The most interesting part of this talk for me was the Q&A session and questions about monetization. I was actually a little bit surprised by how dated some of the ideas were around this topic. Pre-roll ads were mentioned, even pay per view. So far the New York Times seems to have the right idea, featuring full VR experiences for their sponsors within their app that viewers actually choose to watch.
For me, VR video is about experiencing something and having fun. Viewers will seek out VR experiences from the brands they love and actually opt in to watching the content. They don't even know they're being advertised to, or if they do, they don't care! And what about product placement or in-app purchases? Neither was mentioned as a viable option for monetization.
This talk also made me realize just how difficult the daily struggle of publishing this content can be for news organizations. It seems almost naive that just about everyone on the panel agreed that sometime in the very near future (we're talking months away), a camera that stitches automatically will solve all of their problems. They seem to have a lot of faith in camera manufacturers. No doubt new technology will make this job much easier in time, but we have a lot of wasted investor money and Kickstarter failures ahead of us before that happens.
The Future Seems Bright
Since I said I would talk about Project Beyond, I'll end on that note. I was finally able to watch footage samples that, as explained to me, were "straight off the camera." The stereoscopic 3D from Beyond seemed more natural than what I've seen from the OZO, but it had some obvious double vision happening in the middle. Being straight off the camera, no adjustments had been made and scenes hadn't been optimized for distance to subject. Scale was accurate: people and subjects looked the right size. Most importantly, I felt comfortable watching it.
There was noticeable pinching and seams at the nadir and zenith, but in my opinion that's much better than not capturing them at all (like the GoPro Odyssey). I would also gladly trade a funky nadir and zenith for that big ugly back stitch on the Nokia OZO any day. Price and availability are still unknown, but I've been told to expect it to be very reasonable in comparison to the OZO. I'm hopeful that Project Beyond will be a good solution for VR content creators looking to make the jump to stereoscopic 3Dx360.
Overall, I was seriously impressed with Samsung’s love for its developer community and the solid roadmap they have ahead that includes VR every step along the way. I’ll definitely be back next year.