You didn’t need a crystal ball a few years ago to predict that VR is meant to be mobile and free of wires; it is exactly where the industry is heading. We’ve been promoting immersive mobile VR all along and have been driving the industry to solve the extreme challenges of making it possible. A previous blog post explained how important precise, low-latency 6 degrees of freedom (6-DoF) motion tracking is for intuitive interactions with the virtual world: it is essential to the feeling of presence. This blog post will catch you up on the tremendous progress we’ve made in motion tracking and explain the advances we foresee coming soon.
So that we are all on the same page, 3-DoF detects rotational head movements and determines what direction you’re looking, while 6-DoF detects rotational and translational movements, meaning it determines what direction you’re looking as well as where you are in a VR world.
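To make the distinction concrete, here is a minimal sketch of the extra state that 6-DoF adds over 3-DoF. The class names and fields are illustrative only, not taken from any particular SDK:

```python
from dataclasses import dataclass

@dataclass
class Pose3DoF:
    # Rotation only: what direction the user is looking (radians).
    yaw: float
    pitch: float
    roll: float

@dataclass
class Pose6DoF(Pose3DoF):
    # Adds translation: where the user is in the virtual world (meters).
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0
```

A 3-DoF pose is just three rotation angles; a 6-DoF pose carries three additional translation coordinates, which is what lets a user lean, crouch, and walk through the scene.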
The difference between 3-DoF and 6-DoF is huge in terms of the user experience. With 3-DoF, users feel like they are watching a story from the outside. With 6-DoF, users become part of the story, allowing them to interact and change the story for a much more immersive experience.
Enhancing 6-DoF motion tracking
The VR industry has made a lot of progress toward improved on-device 6-DoF motion tracking. Initial outside-in motion tracking in 2014 used external sensors, markers, cameras, or lasers placed throughout a room to track 6-DoF head movements. Not only was the setup unwieldy, but a head-mounted display (HMD) tethered by wire to a fixed location is also not immersive.
Outside-in 6-DoF motion tracking evolved to an on-device inside-out solution that requires no room setup. Our initial inside-out solution used a monocular fisheye camera along with inertial sensors as inputs to our visual-inertial odometry (VIO) algorithm to generate a 6-DoF head pose. At CES 2017, we showcased our monocular 6-DoF tracking demo entitled “ZORDS RISING: Power Rangers VR Experience,” where users explored a cinematic 6-DoF environment by making rotational and translational movements without any external tracking mechanisms.
Stereo fisheye cameras are the next step toward better motion tracking. Relative to monocular 6-DoF, stereo 6-DoF provides instant accurate scene depth, faster initialization, better performance with quick and rotational motions, and improved tolerance to camera occlusion. Intuitively this makes sense since two fisheye cameras can see more of the environment, track more feature points, and are less likely to both be blocked by something, such as your hands. Looking forward, room-scale VR will require great motion tracking as well as object detection and boundary augmentation for users to safely move around a large-room environment. The ultimate goal will be world VR, where users will have limitless movement and robust safety in any environment.
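For intuition on why stereo cameras provide instant scene depth: with two cameras mounted a fixed baseline apart, the depth of a tracked feature follows directly from its disparity between the two images via standard pinhole triangulation. A short sketch (the function name and example numbers are illustrative):

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Triangulate scene depth from a calibrated stereo pair.

    Standard pinhole relation: depth = focal_length * baseline / disparity.
    A monocular system has no baseline, so it must infer scale indirectly
    (e.g., from inertial data), which is why initialization is slower.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# e.g., 400 px focal length, 64 mm baseline, 8 px disparity -> 3.2 m depth
```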
No matter the 6-DoF implementation, power efficiency is essential to creating sleek and comfortable headsets. Due to the challenging thermal and power constraints of immersive mobile VR, efficiently running all the parallel workloads is a requirement. Rather than running our VIO algorithm on the CPU, our optimized implementation runs on the Qualcomm Hexagon DSP, which not only saves power and reduces thermal emission, but allows the CPU to process other workloads.
Bringing 6-DoF HMDs and experiences to consumers
Our role in making mobile VR possible does not end with the silicon. We are driving mobile VR traction forward on both the device and content side. Our HMD Accelerator Program (HAP) helps VR device manufacturers quickly develop premium standalone VR HMDs. Our VR HMD reference design rigorously selects hardware components, such as the cameras and inertial sensors for 6-DoF, that are tuned and validated to support truly immersive VR experiences. Our Qualcomm Snapdragon 835 VR HMD uses stereo cameras for 6-DoF.
Testing to ensure a high-quality VR experience is also part of HAP. When it comes to 6-DoF, we have several key performance indicators that are important to track. For example, we track jitter, which measures the frame-to-frame change in the VIO pose when the headset is stationary; absolute error, which measures the total instantaneous translation and rotation error in the VIO pose over the video sequence; and relative error, which measures the frame-to-frame rotation and translation error in the VIO pose. It’s also important to measure these KPIs under a variety of real-world conditions, such as seated or standing use, soft-to-bright lighting, low-to-high counts of environmental feature points, and slow-to-fast head movements.
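As an illustration of how such KPIs might be computed, here is a hedged sketch using translation-only pose sequences. The function names and exact error definitions are simplified assumptions for clarity, not our actual test methodology:

```python
import numpy as np

def jitter(poses: np.ndarray) -> float:
    """RMS frame-to-frame translation change for a stationary sequence.

    poses: (N, 3) array of estimated positions while the headset is still.
    Ideally zero; any nonzero value is perceived as shake by the user.
    """
    deltas = np.diff(poses, axis=0)
    return float(np.sqrt(np.mean(np.sum(deltas ** 2, axis=1))))

def absolute_error(est: np.ndarray, gt: np.ndarray) -> np.ndarray:
    """Per-frame magnitude of the translation error vs. ground truth."""
    return np.linalg.norm(est - gt, axis=1)

def relative_error(est: np.ndarray, gt: np.ndarray) -> np.ndarray:
    """Per-frame error of the frame-to-frame motion (drift per frame)."""
    return np.linalg.norm(np.diff(est, axis=0) - np.diff(gt, axis=0), axis=1)
```

Note how the three metrics answer different questions: jitter catches shaking at rest, absolute error catches accumulated drift, and relative error catches per-frame tracking slip even when the absolute offset is constant.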
To support the development of premium VR content, we also launched the Qualcomm Snapdragon 835 VR Development Kit (VRDK). The VRDK gives developers early access to a Snapdragon 835 VR HMD and an upgraded Snapdragon VR Software Development Kit (SDK). The SDK is designed to provide developers with access to optimized, advanced VR features on Snapdragon VR devices, while abstracting the complexity to create immersive VR experiences. For 6-DoF experiences, the SDK has an API to give access to both the current and future (predicted) head pose.
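The SDK’s actual API is not reproduced here, but the idea behind exposing a future (predicted) head pose can be sketched with a simple constant-velocity extrapolation. The function name and prediction model below are assumptions for illustration only:

```python
import numpy as np

def predict_position(position: np.ndarray, velocity: np.ndarray, latency_s: float) -> np.ndarray:
    """Extrapolate head position to the time the frame will be displayed.

    A renderer asks for the pose at photon time (often tens of milliseconds
    in the future) so the rendered image matches where the head will be,
    not where it was when rendering started. Real predictors also handle
    rotation and use richer motion models; this is the simplest case.
    """
    return position + velocity * latency_s
```

Rendering against the predicted pose rather than the last measured one is one of the key ways a VR pipeline hides motion-to-photon latency.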
6-DoF is a game changer when it comes to the VR experience, so developers need to think a bit differently when creating their 6-DoF content. Here are some tips:
- Scale and space: Since users can have limitless 6-DoF movement in the virtual world but are confined to a small area (e.g., 4 ft. by 4 ft.) in the real world, design for variable space constraints.
- Tracking: Since tracking will sometimes be lost (though this is less likely with multiple cameras), fade to black or to a fixed image rather than letting the scene jump.
- Storytelling: Since users may not be looking in the right direction or be positioned in the desired location to advance the story, provide audio and visual cues to guide their focus.
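As an illustration of the tracking tip above, a fade-to-black (or fixed-image) overlay can ramp in smoothly once tracking is lost. A minimal sketch, with an assumed fade duration:

```python
def overlay_alpha(time_since_loss_s: float, fade_duration_s: float = 0.25) -> float:
    """Opacity of the black/fixed-image overlay after tracking loss.

    Ramps linearly from fully transparent (0.0) to fully opaque (1.0)
    over fade_duration_s, so the user never sees the world snap or drift
    while the tracker reinitializes. The 0.25 s duration is an assumption.
    """
    return min(1.0, max(0.0, time_since_loss_s / fade_duration_s))
```

The same ramp can be run in reverse once tracking is reacquired, so the scene fades back in instead of popping.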
We’re really excited about the inside-out 6-DoF progress we’ve made to date and look forward to seeing all the amazing 6-DoF VR experiences that developers come up with. Please join our upcoming 6-DoF webinar to learn more.