It’s been an exciting and productive year for artificial intelligence (AI) since Neural Information Processing Systems (NeurIPS) 2018. Intelligent devices and services have become increasingly integrated and significant in our daily lives, creating an ongoing demand for AI research and products. The NeurIPS conference, which is the largest annual gathering of AI researchers and engineers, is a time to share new discoveries, collaborate, and push the AI industry forward. Whether you’re attending this year’s NeurIPS conference or just curious about what Qualcomm AI Research has in store, read on to learn about our latest papers, demos, sessions, and other AI highlights.
At academic conferences like NeurIPS, novel papers are a primary way to contribute innovative and impactful AI research to the rest of the community. I'd like to highlight two accepted papers that advance our work in geometric convolutional neural networks (CNNs) and Bayesian deep learning, respectively: A General Theory of Equivariant CNNs on Homogeneous Spaces and Combinatorial Bayesian Optimization using the Graph Cartesian Product. These papers were written in collaboration with the University of Amsterdam, the QUVA Lab (co-funded by Qualcomm Technologies, Inc.), and the research groups CIFAR and PCSL Lab.
This year, you can find us in booth #110, where we are bringing our AI research to life through live demonstrations. A few exciting demos for you to check out include:
- AI model quantization: Neural-network models can be very large and compute intensive, which can make them challenging to run on the end device. Naive quantization of a 32-bit floating-point model to 8 bits can result in accuracy loss. Qualcomm AI Research has developed techniques that quantize a model while preserving its accuracy, without requiring datasets or re-training. Our side-by-side demo shows the benefits of our quantization techniques for real-time semantic segmentation, producing segmentation results that are more reliable and complete than those from conventional quantization techniques. In general, we see 3x to 5x faster execution of the quantized 8-bit model on the Qualcomm Hexagon DSP versus the 32-bit floating-point model on the Qualcomm Kryo CPU.
- Toolkit demo: To make AI ubiquitous, the industry needs tools that minimize complexity and allow developers to easily develop and optimize AI applications. For our research to have the most impact, we commercialize it, making it quickly available through software tools that allow apps to run more efficiently on the billions of Qualcomm-powered AI devices. With the Qualcomm AI Model Efficiency Toolkit, recently announced at the Qualcomm Snapdragon Tech Summit 2019, we are demoing live quantization on a variety of AI models. The toolkit integrates our latest quantization research so that developers can easily and automatically shrink their AI models to improve performance and power efficiency without sacrificing accuracy.
- AI model compilation: Taking advantage of AI hardware acceleration is key to achieving peak performance and power efficiency. The compiler schedules and maps a model written in a high-level programming language down to the low-level instructions that run on hardware. Hand-tuning is not feasible for many of these complex AI models, so we've developed a compiler framework that uses reinforcement learning, the TVM compiler, and Hexagon NN to automate the process and efficiently map AI models to our hardware. Two highlights are a more efficient search-space algorithm for autoTVM approaches and the use of TVM for user-defined operations. Our side-by-side demo shows the result of compiling a semantic segmentation model with our compiler versus TensorFlow. Our efficient machine-learning compiler enables easy deployment of AI models with custom operators to the Hexagon DSP, and showcases improved optimization algorithms that speed up auto-tuning of those custom operators for the Hexagon DSP.
- AI automotive: AI is fundamental to enhancing in-vehicle experiences as well as improving safety with Advanced Driver-Assistance Systems (ADAS) as cars move toward autonomy. Our autonomous driving research is focused on developing differentiated solutions in key technology areas. We are demonstrating precise positioning and localization, camera-based perception, radar-based object detection and velocity estimation, behavior prediction, and behavior planning.
- Customer demos: With our recently announced Qualcomm Snapdragon 865 Mobile Platform, our 5th-generation Qualcomm AI Engine offers more capabilities and processing performance than before, delivering 15 trillion operations per second (TOPS). Check out applications, such as real-time voice-to-text, camera filters, and 3D avatars, that run completely on the device.
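To make the quantization demo above more concrete, here is a minimal sketch of mapping 32-bit floating-point values onto 8-bit integers via a scale and zero point. This is the generic, textbook uniform affine scheme, not Qualcomm's data-free quantization method, and the weight values are made up for illustration:

```python
# Uniform affine 8-bit quantization: map a float range onto [0, 255]
# with a scale and a zero point, then reconstruct approximate floats.

def quantize_params(values, num_bits=8):
    """Compute scale and zero point covering the value range (and zero)."""
    qmin, qmax = 0, 2 ** num_bits - 1
    lo, hi = min(min(values), 0.0), max(max(values), 0.0)
    scale = (hi - lo) / (qmax - qmin)
    zero_point = round(qmin - lo / scale)
    return scale, zero_point

def quantize(values, scale, zero_point, num_bits=8):
    """Round each float to its nearest representable 8-bit code."""
    qmax = 2 ** num_bits - 1
    return [min(max(round(v / scale) + zero_point, 0), qmax) for v in values]

def dequantize(q, scale, zero_point):
    """Map 8-bit codes back to approximate float values."""
    return [(qi - zero_point) * scale for qi in q]

weights = [-1.2, -0.3, 0.0, 0.4, 1.1, 2.5]   # toy example weights
scale, zp = quantize_params(weights)
q = quantize(weights, scale, zp)
restored = dequantize(q, scale, zp)
max_err = max(abs(w - r) for w, r in zip(weights, restored))
# Rounding error is bounded by half the quantization step (scale / 2).
```

Post-training methods such as the data-free approach described above are, in essence, about choosing these scales and zero points (and adjusting the weights themselves) so that the 8-bit model tracks the 32-bit one closely.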
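The auto-tuning idea behind autoTVM-style compilation can be illustrated with a toy search over tiling configurations for a blocked matrix multiply. Everything here is a stand-in: the workload size, candidate space, and cost model are synthetic and invented for this sketch; a real tuner measures (or learns to predict) runtimes on the target hardware:

```python
import random

# Toy schedule search: score each candidate tiling with a cost model
# and keep the cheapest. The cost model is synthetic, not a real one.

N = 256  # hypothetical matrix dimension
CANDIDATES = [(ti, tj) for ti in (8, 16, 32, 64) for tj in (8, 16, 32, 64)]

def synthetic_cost(ti, tj, cache_floats=8192):
    """Stand-in cost: penalize tiles whose working set spills a
    hypothetical cache, plus loop overhead for very small tiles."""
    working_set = ti * N + N * tj + ti * tj  # one tile each of A, B, C
    spill = max(0, working_set - cache_floats) / cache_floats
    overhead = 64.0 / (ti * tj)
    return 1.0 + spill + overhead

def random_search(trials=10, seed=0):
    """Sample part of the configuration space and keep the best config."""
    rng = random.Random(seed)
    best_cfg, best_cost = None, float("inf")
    for cfg in rng.sample(CANDIDATES, trials):
        cost = synthetic_cost(*cfg)
        if cost < best_cost:
            best_cfg, best_cost = cfg, cost
    return best_cfg, best_cost

best_cfg, best_cost = random_search()
```

A more efficient search-space algorithm, in this framing, is one that finds a near-optimal configuration while evaluating far fewer candidates than exhaustive search, which matters because real spaces contain thousands of schedules and each measurement runs on hardware.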
In addition to demos, we’re participating in several workshops, talks, and poster presentations as part of the NeurIPS agenda, including:
We hope to meet you at NeurIPS or future AI conferences to share our impact on AI.
At Qualcomm Technologies, we make breakthroughs in fundamental research and scale them across devices and industries. Qualcomm AI Research works hand-in-hand with the rest of the company to integrate the latest AI developments and technology into our products — shortening the time between research in the lab and delivering advances in AI that enrich lives.
If you’re excited about solving big problems with cutting-edge AI research — and improving the lives of billions of people — we’d like to hear from you. We’re recruiting for several machine learning openings. Join us to help create what’s next in AI.