Qualcomm Technologies has been working on audio technology for next-generation television broadcasts using the new MPEG-H standards. The technology is designed to help content creators, content hosts, consumer electronics manufacturers, and broadcasters create, capture, and render true-to-life 3D audio and scene-based audio, so the viewer feels immersed in sound.
At the upcoming National Association of Broadcasters (NAB) show, we’ll showcase a comprehensive live production of immersive audio for both traditional TV and VR. The production will use scene-based (Higher Order Ambisonics, or HOA) and object-based audio, and the transmission will use the MPEG-H 3D audio standard. The audio production will draw on multiple audio sources, including ambisonic and spot microphones. The video production will use traditional TV cameras as well as an Omnicast VR camera. The audio production can simultaneously feed both OTA (for linear TV) and OTT (for both linear TV and VR consumption) transmission.
The production process will show monitoring over sound bars as well as immersive loudspeaker layouts. The playback process will show flexible rendering to any number of loudspeakers, audio rotation over loudspeakers for 360 video, and live VR on head-mounted displays.
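The audio rotation mentioned above comes down to rotating the ambisonic sound field before it is decoded. As a minimal sketch (not part of the MPEG-H tooling, and assuming a first-order B-format signal with W, X, Y, Z components; sign conventions vary by toolchain), a yaw rotation only mixes the two horizontal components:

```python
import math

def rotate_foa_yaw(w, x, y, z, yaw_rad):
    """Rotate one first-order ambisonic (B-format) sample about the
    vertical axis. W (omnidirectional) and Z (height) are unchanged;
    only the horizontal components X and Y mix. This assumes a
    counter-clockwise scene rotation; conventions differ by toolchain."""
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    xr = c * x - s * y
    yr = s * x + c * y
    return w, xr, yr, z
```

For head-tracked binaural playback, the inverse of the listener's head rotation is applied to the scene on every frame before decoding, which is what makes scene-based audio such a natural fit for VR.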
Qualcomm Technologies will also showcase its high-quality HEVC cloud and server-based encoder, which is engineered for over-the-top services and 4K real-time encoding with multi-threading on a single machine. The high-quality HEVC encoder has significantly lower complexity than x265 for the same coding efficiency.
Also at our booth (#SU11013), our friends at b<>com will demonstrate a live VR feed combining scene-based audio with a multi-camera VR system. HOA scene-based audio encoded with MPEG-H will be binauralized and delivered to a VR headset, where, with head-tracking, you can experience the essential component that scene-based audio brings to VR. You can learn more about b<>com at NAB at booth #N2035-FP.
Check out the video below for a quick overview of scene-based audio, and please wear headphones for the best binaural audio experience:
Silicon Valley met Switzerland at this year’s Baselworld, the world’s premier event for the watch and jewelry industry, which celebrated its 100th anniversary this year. Several impressive smartwatches made their debut, all touting the Qualcomm Snapdragon Wear 2100 Platform and all powered by Android Wear 2.0. With this reliable platform and OS developed specifically for wearables, it’s no wonder high-end brands are looking beyond basic wearable functions, and combining style with technology to develop chic smartwatches fit for any lifestyle.
Snapdragon Wear 2100, the superior SoC for smartwatches, features an integrated, ultra-low-power sensor hub. It’s 30 percent smaller than previous-generation wearable SoCs, giving OEMs the freedom to develop thinner, sleeker product designs. And because it uses 25 percent less power than its older sibling (the Snapdragon 400), watchmakers can offer even more features and better designs.
The Snapdragon Wear 2100 comes in both tethered (Bluetooth and Wi-Fi) and connected (3G and 4G LTE) versions. The latter allows wearers to do more with their wearables, from streaming music to sending messages to calling a cab, in tandem with — or even without — having to bring their smartphones along.
Each of the touchscreen smartwatches in this roundup runs Android Wear 2.0, Google’s latest wearable operating system, and can pair with both iOS and Android phones. With Android Wear 2.0, users can personalize their watch faces with chronometer-style complications and create shortcuts to their favorite applications. In addition to the pre-installed Google Fit and calendar apps, more apps can be downloaded directly through the on-watch Google Play store, so wearers can customize their device to their lifestyle.
Android Wear 2.0 brings the Google Assistant to your wrist. Find answers and get things done even when your hands are full. Reply to a friend, set a reminder, or ask for directions. Just hold the power button or say “OK Google”.
Check out some of the Snapdragon Wear powered smartwatches that made a splash at this year’s Baselworld:
Machine learning, at its core, is a method for turning huge amounts of data into useful actions. Most of the attention around machine learning technology has involved super-fast data processing applications, server farms, and supercomputers. However, far-flung servers don’t help when you’re looking to magically perfect a photo on your smartphone, or to translate a Chinese menu on the fly. Making machine learning mobile, by putting it on the device itself, can help unlock everyday use cases for most people.
Qualcomm Technologies’ engineers have been working on the machine learning challenge for years, and the fruits of that work are evident in Qualcomm Snapdragon mobile platforms, which have become a leader in on-device mobile machine learning. It’s a core component of the Snapdragon product line, and you’ll see machine learning technologies both in our SoCs (820, 835, and some 600-tier chipsets) and in adjacent platforms like IoT and automotive.
And we aren’t pushing this technology forward by ourselves. We’re working with a whole ecosystem of tools, savvy OEMs, and software innovators to proliferate new experiences for consumers. These experiences use on-device machine learning, and we could not have conceived of them all by ourselves.
An exciting development in this field is Facebook’s stepped-up investment in Caffe2, the evolution of the open source Caffe framework. At this year’s F8 conference, Facebook and Qualcomm Technologies announced a collaboration to optimize Caffe2, Facebook’s open source deep learning framework, for the Qualcomm Snapdragon neural processing engine (NPE) framework. The NPE is designed to do the heavy lifting needed to run neural networks efficiently on Snapdragon, leaving developers with more time and resources to focus on creating their innovative user experiences. With Caffe2’s modern computation graph design, minimalist modularity, and flexibility to port to multiple platforms, developers have greater flexibility to tackle a range of deep learning tasks, including computer vision, natural language processing, augmented reality, and event prediction, among others.
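To make the computation-graph idea concrete, here is a toy graph runner in plain Python. It only illustrates the design style (named blobs flowing between operator nodes, with the graph defined up front and executed later); none of these names come from Caffe2’s actual API:

```python
# Toy computation graph in the spirit of Caffe2's design: operators are
# nodes, named "blobs" carry data between them, and the whole graph is
# declared first and executed later. Illustrative names only.
class Graph:
    def __init__(self):
        self.ops = []    # list of (op_fn, input_names, output_name)
        self.blobs = {}  # blob name -> value

    def add_op(self, fn, inputs, output):
        self.ops.append((fn, inputs, output))

    def run(self, feeds):
        self.blobs.update(feeds)
        for fn, inputs, output in self.ops:  # ops run in declaration order
            args = [self.blobs[name] for name in inputs]
            self.blobs[output] = fn(*args)
        return self.blobs

# Declare a tiny "neuron": y = relu(x * w + b)
g = Graph()
g.add_op(lambda a, b: a * b, ["x", "w"], "wx")
g.add_op(lambda a, b: a + b, ["wx", "b"], "y")
g.add_op(lambda a: max(0.0, a), ["y"], "relu_y")  # ReLU activation
blobs = g.run({"x": 2.0, "w": 3.0, "b": -10.0})
```

Because the graph is data, not control flow, a framework built this way can inspect it, optimize it, and map individual operators to whichever compute core suits them best, which is exactly the property the NPE takes advantage of.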
Caffe2 is deployed at Facebook to help developers and researchers train machine learning models and deliver artificial intelligence (AI)-powered experiences in various mobile apps. Now, developers will have access to many of the same tools, allowing them to run large-scale distributed training scenarios and build machine learning applications for mobile.
One of the benefits of Snapdragon and the NPE is that a developer can target individual heterogeneous compute cores within Snapdragon for optimal performance, depending on the power and performance demands of their applications. The Snapdragon 835 is designed to deliver up to 5x better performance when processing Caffe2 workloads on our embedded Qualcomm Adreno 540 GPU (compared to CPU). The Hexagon Vector eXtensions (HVX) in the Qualcomm Hexagon DSP are also engineered to offer even greater performance and energy efficiency. The NPE includes runtime software, libraries, APIs, offline model conversion tools, debugging and benchmarking tools, sample code, and documentation. It is expected to be available later this summer to the broader developer community.
Qualcomm Technologies continues to support developers and customers with a variety of cognitive capabilities and deep learning tools alongside the Snapdragon platform. We anticipate that developers will be able to participate in a wider and more diverse ecosystem of powerful machine learning workloads, allowing more devices to operate with greater security and efficiency.
We don’t yet know the full range of applications for the technology, but we can’t wait to see how it’s used by innovative developers around the world.
Sign up to be notified when the Snapdragon neural processing engine SDK is available later this summer.
Hardware-software convergence, the increasingly close collaboration between hardware and software systems, shows how each is empowering (and sometimes literally powering) the other. And in our current development environment, this is happening more than ever. Deep technical skills will of course be of the utmost importance in navigating this trend, but the soft skills we apply to our engineering practices are just as important in determining our success.
What skills do developers need to nurture, and how do you put them to good use? In this piece, we’ll cover three soft skills developers can use to stay ahead of the hardware-software convergence, and share resources to help you grow and maintain those skills.
First off: Creative Inspiration. While it’s easy to identify your technical shortcomings and fill those gaps with training and practice, knowing which soft skills to hone can be a lot more complicated. In fact, you could even think of these soft skills as “mindsets,” since they’re more about how you approach a problem than about being a tool you use to solve it. For this first skill, it’s important to start approaching challenges antidisciplinarily, rather than relying on existing mental frameworks. That’s what being creative is all about: finding new ways of doing things.
So where do you start? Ask yourself this question: What is the dent you want to make in the universe? Begin from a place of passion – think about what problems and projects keep you up at night, and what issues big or small you want to solve.
Then, understand that creative inspiration is a process. What seems like overnight genius is often the result of many erroneous attempts (for example, Thomas Edison’s 1,000 or so attempts at creating the lightbulb), followed by the fortitude to gain a deeper understanding of an issue and then apply your imagination. We particularly like the design thinking method, which encourages starting from a place of inspired empathy and developing knowledge through lean prototyping and iteration. The Stanford d.school has a Bootcamp Bootleg that you can download as a quick-start guide to this design framework.
As our mobile devices have matured, gaining the ability to connect to the Web, we’ve labeled them “smart.” But why settle for just smart? Harnessing the power of the Qualcomm Snapdragon 835 processor, developers and OEMs are taking our devices to the next level, creating new experiences with the aid of machine learning. From superior video and security to your own personal assistant, your Snapdragon device has the ability to operate intelligently, without a cloud or Web connection, letting you experience your smarter phone in an entirely new way.
Application developers and device manufacturers understand what their users want. They can create a feature or an application that uses machine learning (more specifically, deep neural networks) to improve the performance of a particular task, such as detecting or recognizing objects, filtering out background noise, or recognizing voices or languages. These applications usually run in the cloud, and depending on the device they’re on, that can be sub-optimal.
The Snapdragon Neural Processing Engine SDK was created to help developers determine where on the processor to run their neural network-powered applications. For example, an audio/speech detection application might run on the Qualcomm Hexagon DSP, and an object detection or style transfer application on the Qualcomm Adreno GPU. With the help of the SDK, developers have the flexibility to target the core that best matches the power and performance profile of the intended user experience. The SDK supports convolutional neural networks and LSTMs (Long Short-Term Memory networks) expressed in Caffe and TensorFlow, as well as conversion tools designed to ensure optimal performance on Snapdragon’s heterogeneous cores.
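The core-targeting idea can be sketched as a simple dispatch: the application states its power/performance priority, and a selector picks among the cores actually present on the device. This is a hypothetical illustration only; `choose_runtime` and these names are not the real SDK API:

```python
# Hypothetical sketch of compute-core selection, in the spirit of the
# SDK described above. None of these names are the actual NPE API.
RUNTIMES = ("DSP", "GPU", "CPU")  # lowest-power target first

def choose_runtime(available, prefer_low_power=True):
    """Pick a compute target from the cores actually present.
    With prefer_low_power, try the DSP first, then the GPU,
    falling back to the CPU, which is always available."""
    order = RUNTIMES if prefer_low_power else ("GPU", "DSP", "CPU")
    for runtime in order:
        if runtime in available:
            return runtime
    return "CPU"
```

The fallback matters in practice: not every Snapdragon tier exposes the same accelerators, so an application written against this pattern degrades gracefully from DSP to GPU to CPU rather than failing outright.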
The Hexagon DSP and its Hexagon Vector eXtensions (HVX) offer an impressive power and performance mix for running neural networks on device. Performance is up to 8X faster and 25X more power efficient than using the CPU, which translates to lower battery consumption overall. In addition to support via the Snapdragon Neural Processing Engine, TensorFlow is directly supported on the Hexagon DSP, giving developers multiple options for running their chosen neural network-powered apps.
Here are a few applications that could be facilitated by Snapdragon 835 on-device machine learning tech:
Photography: Machine learning can aid in scene classification, real-time noise reduction, and object tracking, making it easier to take the perfect shot, or capture video regardless of the conditions.
VR/AR: With machine learning on your device, VR/AR features can operate faster and with less lag, so everything from gestures and facial recognition to object tracking and depth perception contributes to an immersive experience.
Voice detection: Your phone’s on-device AI can listen for commands and keywords to help you navigate the data and apps on your device more efficiently, and save power doing so.
Security: With facial recognition software and iris scanning, all operating independently from the cloud, your device can learn to identify, and help protect, you.
Connections: Your Snapdragon device has the ability to filter out distracting background noise during calls for clearer conversations with friends and family.
Qualcomm Technologies’ unique machine learning platform is engineered so devices powered by the Snapdragon 835 can run trained neural networks on your devices without relying on a connection to the cloud. Pretty innovative, right?
Take a look at our previous deep dives into each of the Snapdragon 835’s key components — battery, immersive AR and VR, photos and video, connectivity, and security — all of which combine to make the Snapdragon 835 mobile platform truly groundbreaking.
And sign up to receive the latest Snapdragon news.