Qualcomm Immersive Experiences
10 Nov 2015
Immersive experiences stimulate our senses—they draw us in, transport us to another place, and keep us in the moment. Immersion enhances everyday experiences, making them more realistic, engaging, and satisfying, on all our devices—whether we are playing a video game on our smartphone, video conferencing on our tablet, or watching sports on our virtual reality headset. Our goal is to provide the appropriate level of immersion based on the device form factor, activity, and context.
The three pillars of immersive experiences are visual quality, sound quality, and intuitive interactions. Full immersion can only be achieved by simultaneously focusing on the broader dimensions of these pillars.
Visual quality isn’t only about the quantity of pixels, such as the resolution and frame rate. It’s also about the quality of pixels—color accuracy, contrast, and just the right amount of brightness are equally critical to making experiences more immersive.
Increased definition and sharpness
Reduced blurring and latency
More realistic colors through an expanded color gamut, depth, and temperature
Increased detail through a larger dynamic range and lighting enhancements
Sampling rate: Increased sampling rates to match human hearing
Precision: Increased bits-per-sample for improved audio fidelity
3D surround sound: Accurate 3D capture and playback of audio
Clear audio: Zoom and focus on the important sound while filtering out the noise
Seamlessly interact with devices through intuitive interfaces, such as gestures and voice.
Devices intelligently interact with us and provide personalized experiences based on context.
Enabling immersive experiences within the power, thermal, and performance constraints of mobile devices is challenging. The optimal way to enhance the broader dimensions of immersion requires an end-to-end approach, heterogeneous computing, and utilizing cognitive technologies.
Taking an end-to-end approach means thinking holistically at the system level, understanding all the challenges, and working with other companies in the ecosystem to develop comprehensive solutions. For example, the end-to-end approach is essential for maintaining color accuracy—one of the key aspects of pixel quality—from camera to display.
Heterogeneous computing uses specialized engines across the SoC to address the processing requirements of immersive experiences at low power and thermals. For example, image processing tasks like computational photography use the majority of the processing engines.
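The idea of routing each workload to the engine best suited to it can be sketched as a simple dispatch table. The engine names and task names below are illustrative placeholders, not actual Snapdragon components or APIs:

```python
# Minimal sketch of heterogeneous task dispatch: send each workload to the
# engine best suited to it, rather than running everything on the CPU.
# Engine and task names here are hypothetical, for illustration only.

ENGINE_AFFINITY = {
    "camera_isp":   "ISP",   # image signal processing
    "render":       "GPU",   # graphics and parallel pixel work
    "voice_detect": "DSP",   # low-power, always-on signal processing
    "app_logic":    "CPU",   # general-purpose control code
}

def dispatch(tasks):
    """Group tasks by the engine that should run them."""
    plan = {}
    for task in tasks:
        engine = ENGINE_AFFINITY.get(task, "CPU")  # unknown work falls back to CPU
        plan.setdefault(engine, []).append(task)
    return plan

plan = dispatch(["camera_isp", "render", "voice_detect", "app_logic"])
```

The payoff of this pattern is that the general-purpose CPU only handles what no specialized engine can, which is what keeps power and thermals low.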
Cognitive technologies, like machine learning and computer vision, can make experiences more immersive. They enable devices to perceive, reason, and take intuitive actions so that devices can learn our preferences, personalize our experiences, and enable intuitive interactions. For example, a cognitive camera is designed to improve visual experiences by automatically capturing better pixels. By understanding the scene through machine learning and computer vision, your device has the ability to automatically configure the camera settings, such as exposure time and white balance.
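As a concrete (if much simpler) illustration of automatic white balance, here is the classic gray-world heuristic: assume the scene averages to gray and compute per-channel gains accordingly. This is only the textbook baseline, not Qualcomm's cognitive-camera method, which the text says is driven by machine learning and computer vision:

```python
# Gray-world automatic white balance: correct a color cast without user input
# by assuming the scene's average color should be neutral gray.

def gray_world_gains(pixels):
    """pixels: list of (r, g, b) values. Returns per-channel gains that pull
    each channel's average toward the overall gray level."""
    n = len(pixels)
    avg = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(avg) / 3.0
    return tuple(gray / a for a in avg)

def apply_gains(pixels, gains):
    """Scale every pixel by the per-channel gains, clipping at 255."""
    return [tuple(min(255.0, p[c] * gains[c]) for c in range(3)) for p in pixels]

scene = [(200, 150, 100), (180, 140, 90), (220, 160, 110)]  # warm (reddish) cast
gains = gray_world_gains(scene)        # boosts blue, tames red
balanced = apply_gains(scene, gains)   # channel averages now equal
```

A cognitive camera improves on heuristics like this precisely because it understands the scene: a sunset should keep its warm cast, which a blind gray-world pass would remove.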
To be truly immersive, virtual reality must stimulate our senses with realistic feedback. It can make everyday experiences, like playing games, watching movies and sports, video conferencing, and virtual travel, even more immersive.
Virtual reality places extreme requirements on several dimensions of visual quality, sound quality, and intuitive interactions. For example, we need tremendous pixel quality and quantity because the screen is so close to the eyes. We need realistic 3D positional audio so the sound is accurate to the real world. And we need interfaces so intuitive and responsive that you don’t realize you are even dealing with an interface. By focusing on all of these dimensions, a fully immersive experience can be achieved.
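One small ingredient of positional audio can be sketched directly: constant-power stereo panning derived from a source's azimuth. Real 3D audio goes much further (per-ear HRTF filters, elevation, distance cues); this toy only captures the level difference between ears:

```python
import math

# Constant-power stereo panning: map a sound source's horizontal angle to
# left/right gains whose combined power stays constant. A toy fragment of
# positional audio, not a full 3D audio pipeline.

def pan_gains(azimuth_deg):
    """azimuth_deg: 0 = straight ahead, -90 = hard left, +90 = hard right.
    Returns (left_gain, right_gain) with left**2 + right**2 == 1."""
    theta = (azimuth_deg + 90.0) / 180.0 * (math.pi / 2.0)
    return math.cos(theta), math.sin(theta)

left, right = pan_gains(0.0)  # centered source: equal gains, about 0.707 each
```

Constant power (rather than constant amplitude) is used because perceived loudness tracks power; a linear crossfade would sound quieter in the middle.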
Qualcomm Technologies is uniquely positioned to enhance the broader dimensions of immersive experiences by custom designing specialized engines across the SoC and offering comprehensive ecosystem support. Snapdragon processors are designed to provide an optimal heterogeneous computing solution by taking a system approach. We provide development and optimization tools to the ecosystem, designed to enable content creation and optimized devices.
Webinar - Making Immersive Virtual Reality Possible in Mobile
5 Apr 2016
Webinar - The New Era of Immersive Experiences - What’s next
24 Aug 2015
Webinar: The Next-Gen Technologies Driving Immersion
13 Feb 2017
Virtual Reality has been touted for the past several years as the next big thing – and its history goes back even further than many of us realize (the first prediction of VR goes back to a science fiction story from the 1930s!) – but now we may have reached an inflection point for VR.
With the advances in technology fueled by the mobile industry, much of what was considered sci-fi (even in the previous incarnations of VR) is now becoming reality. Life-like visual and audio processing, movement and positional tracking, and haptic and integrated sensory feedback are realities today, making VR immersive in ways only imagined before.
Creating immersive VR experiences involves bringing together these interactive technologies that are intuitive for the user – so it feels like you are there, practically reaching out and grabbing the controls of that virtual vehicle. And perhaps not surprisingly, we expect that the best VR experiences will be built on mobile technologies to offer people a truly untethered experience. This means that the devices we are using become a part of the world we’re immersed in instead of distracting from it.
During CES Qualcomm Technologies, Inc. demonstrated “Power Rangers: Zords Rising”, an immersive mobile VR experience that allowed users to gear up and experience what it’s like to be part of the Power Rangers team. This demo highlighted the power of the new Qualcomm Snapdragon 835 processor, which is designed to deliver immersive VR and augmented reality (AR) experiences.
For instance, the new Snapdragon 835 processor is engineered to support six degrees of freedom (6DoF) movement: the ability to translate through the virtual environment forward/backward, up/down, and left/right, and to rotate in pitch, roll, and yaw, which is crucial for creating a realistic sense of being inside the virtual world. And with real-world movement, one needs life-like visual processing to deliver the smooth, visually rich experiences similar to our own natural vision. This is why we built sub-18-millisecond latency and 4K display support at 60 frames per second into the Snapdragon 835. Likewise, in the real world, sound has a three-dimensional profile that we use to orient ourselves. Therefore, the Snapdragon 835 supports 3D audio; VR has never sounded so good.
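The two ideas in that paragraph can be made concrete in a few lines: a 6DoF pose is three translation axes plus pitch, roll, and yaw, and the 60 fps figure implies a per-frame budget of 1000/60 ≈ 16.7 ms, which is what lets the pipeline stay under the sub-18 ms latency figure. Field names below are generic placeholders, not from any Snapdragon API:

```python
import math
from dataclasses import dataclass

@dataclass
class Pose6DoF:
    x: float      # translation: right
    y: float      # translation: up
    z: float      # translation: forward
    pitch: float  # rotation about the x axis, radians
    roll: float   # rotation about the z axis, radians
    yaw: float    # rotation about the y axis, radians

    def step_forward(self, dist):
        """Translate along the current heading (flat-ground case: yaw only)."""
        return Pose6DoF(self.x + dist * math.sin(self.yaw), self.y,
                        self.z + dist * math.cos(self.yaw),
                        self.pitch, self.roll, self.yaw)

def frame_budget_ms(fps):
    """Time available to produce each frame."""
    return 1000.0 / fps

pose = Pose6DoF(0, 0, 0, 0, 0, 0).step_forward(1.0)  # one unit "forward"
budget = frame_budget_ms(60)  # about 16.7 ms, inside a sub-18 ms target
```

Rotation-only (3DoF) headsets track just pitch, roll, and yaw; the x/y/z translation fields are exactly what 6DoF adds, and they are what lets you lean or walk inside the scene.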
These are no small computational tasks. You might imagine (or have seen elsewhere) that achieving this degree of processing performance needs large, power-hungry processors. The new Snapdragon 835 is designed to deliver superior GPU and CPU performance per watt while also being 35% smaller than its predecessor. The smaller size and better performance mean better immersion inside the virtual world and less distraction caused by the VR hardware in the real world—both are important when delivering compelling virtual experiences.
All of this is good news for developers, and even better news for those people who will be trying VR for the first time, as it means they won’t experience distractions like jerky movement, lag, or low resolution. And when built into an untethered, mobile experience—such as a headset—you can minimize the potential for device discomfort that would come from excess weight, heat, or protruding wires.
Snapdragon processors and toolsets are designed to provide multi-processor computation coordination and energy management so that you can offer both an engaging VR performance as well as optimal device comfort.
When we put on our VR headsets, we gaze out at a world of opportunity for you to start developing your own VR experiences using the right immersive technologies to draw in your users and leave them craving more.
Are you ready to dive in and develop your own VR experience? Download our white paper, "Making Immersive Virtual Reality Possible in Mobile" to learn more; and be sure to have a look at the Snapdragon VR SDK.
For more ideas, take a look at Qualcomm’s announcements from CES.
An assistant that comes to life in a mixed reality headset and works with you to build out your schedule. A self-driving car that can calculate the speed and distance needed to safely drive through a yellow light.
Sound futuristic? Well, these things might not be so far off.
These ongoing projects were among an array of game-changing technologies revealed at today’s Wired Business Conference, where luminaries from Facebook, Magic Leap, and General Motors, among others, discussed how they’re creating the next generation of virtual reality and artificial intelligence, and how advances in both technologies will change the way we live, work, and interact with one another.
Rony Abovitz, CEO of secretive VR startup Magic Leap, may have stolen the show when he announced a partnership between his company and LucasFilm to bring “Star Wars” to VR and your living room, but Abovitz had much more to say about the future of virtual reality and how it will soon serve every facet of our lives.
“Our goal is to get to all-day, everyday computing,” he said. “It’s like a full-course meal. People want to eat ice cream first, but we’ll give them salad and appetizers too — all of the nutrients that make your day.”
By this he means filling your entire day — not just when you seek to escape — with “mixed reality,” the union of virtual and physical worlds where we can interact with virtual objects and even characters. Abovitz imagines not only a digital assistant overlaying a schedule on your actual bedroom, but also being able to host a real-time conversation with a friend or loved one as if she was right in your living room with you.
“We’re dedicated to maintaining sacred spaces and avoiding the AR/VR dystopia that we’re all afraid of,” he said.
Machine learning and artificial intelligence also took center stage at the conference. From Crisis Text Line’s use of machine learning to identify and prioritize troubling language, to General Motors’ autonomous vehicles, to Facebook’s development of bots that understand common sense, a trio of speakers detailed the ground-breaking efforts to replicate human neural networks in machines.
“The next step is using data to get computers to understand how the world learns,” said Yann LeCun, Facebook’s Director of Artificial Intelligence. “This is predictive learning.”
“Once we figure out predictive learning, AI will make another big jump,” he added.
To get there, researchers are focusing their efforts on training computers to think and react like humans do, which isn’t so easy. Computers need reams of data and human correction in order to execute functions that we mindlessly perform, like altering course when a dog is in the road or detecting voice tones. That means we’re very much still in the picture, speakers such as LeCun, GM’s CEO Mary Barra, and Facebook’s Vice President of Messaging Products David Marcus reiterated, and the next stages of AI development will be a human-machine collaboration.
“We don’t need to hand-craft everything,” explained LeCun. “We build architecture, train it with data, and let it adapt.”
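LeCun's "build architecture, train it with data, and let it adapt" can be seen in miniature in a one-weight linear model fit by gradient descent. This is a toy sketch of the general idea only, and reflects nothing about Facebook's actual systems:

```python
# Fixed architecture (y_hat = w * x), weights shaped entirely by data.

def train(samples, lr=0.1, epochs=100):
    """samples: list of (x, y) pairs; learn a weight w so that w * x fits y."""
    w = 0.0                        # the "architecture": a single weight
    for _ in range(epochs):
        for x, y in samples:       # the data, not hand-crafted rules, shapes w
            error = w * x - y
            w -= lr * error * x    # adapt: step against the gradient
    return w

w = train([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])  # hidden rule: y = 2x
```

Nothing in the code encodes the rule y = 2x; the training loop recovers it from examples, which is the point of the quote.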
“Those techniques are the big hammer and now we can use them on any nail.”
Learn more about Qualcomm and its virtual reality initiatives.