OnQ Blog

We are making AI ubiquitous

Jun 12, 2020

Qualcomm products mentioned within this post are offered by Qualcomm Technologies, Inc. and/or its subsidiaries.

We envision a world where devices, machines, automobiles, and things are much more intelligent, simplifying and enriching our daily lives. They will be able to perceive, reason, and take intelligent actions based on awareness of the situation, improving just about any experience and even solving problems considered unsolvable.

Artificial intelligence (AI) is the technology driving this revolution. You may have heard this vision before, or you may think that AI is all about big data and the cloud. Yet Qualcomm Technologies’ solutions already have the power efficiency and performance to run demanding AI algorithms on the device itself, which opens new opportunities to run AI on the device, in the cloud, or distributed between both for the best overall solution.

AI is a pervasive trend that is rapidly accelerating thanks to vast amounts of data and progress in both algorithms and processing capacity. New technology can seem to appear out of nowhere, but oftentimes researchers and engineers have been toiling over it for many years before the timing is right and key progress is made.

At Qualcomm, we have a culture of innovation. We take pride in researching and developing fundamental technologies that will change the world at scale. AI is no different. We started fundamental research more than 10 years ago, and our current products support many AI use cases, such as enhanced photography, speech recognition, and malware detection — for smartphones, automotive, and IoT. We are researching broader topics, such as power efficiency, efficient learning, personalization, and applied AI (such as, AI for wireless connectivity or AI radar).

We have a deep machine learning heritage

We have a long history of investing in machine learning. In 2007, we started exploring spiking neuron approaches to machine learning for computer vision and motion control applications. We then expanded the scope of the research to look not just at biologically inspired approaches but also at artificial neural networks — primarily deep learning, a sub-category of machine learning. Time and time again, we saw deep learning-based networks demonstrating state-of-the-art results in pattern-matching tasks. A notable example was in 2012, when AlexNet won the prestigious ImageNet Challenge using deep learning techniques rather than traditional hand-crafted computer vision. We’ve also had our own success at the ImageNet Challenge using deep learning techniques, placing as a top-3 performer in challenges for object localization, object detection, and scene classification.

We have also expanded our research, and our collaboration with the external AI community, into other promising areas and applications of machine learning, like recurrent neural networks, object tracking, natural language processing, and handwriting recognition. In September 2014, we opened Qualcomm Research Netherlands in Amsterdam, a hotbed for machine learning research, and have continued to work closely with Ph.D. students around the globe on forward-thinking ideas through our Qualcomm Innovation Fellowship program. In September 2015, we established a joint research lab with the University of Amsterdam (QUVA) focused on advancing the state of the art in machine learning techniques for mobile computer vision.

We further deepened our relationship with Amsterdam’s AI scene by acquiring Scyfer, a leading AI company in Amsterdam, in 2017. Max Welling, a Scyfer founder and a renowned professor at the University of Amsterdam, where he works on machine learning, computational statistics, and fundamental AI, joined Qualcomm Technologies Netherlands B.V. as part of the acquisition. This led to the formation of Qualcomm AI Research, a cross-functional, cooperative initiative that encompasses all of the cutting-edge AI research taking place across the company. This investment is already paying off as our AI researchers are making breakthroughs across the entire AI spectrum, from fundamental research to applied AI.

Power efficiency is essential to scale AI

To make our vision of intelligent devices possible, we know that many machine learning-based solutions will need to run on the device — whether a smartphone, car, robot, drone, machine, or other thing. Running the AI algorithms — also known as inference — on the device versus in the cloud has various benefits, such as immediate response, enhanced reliability, increased privacy, and efficient use of network bandwidth.

The cloud, of course, remains very important and will complement on-device processing. The cloud is necessary for pooling big data, training neural network models, and running certain inferences that are complex or depend on off-device data. However, running inference entirely in the cloud is often problematic for real-time applications that are latency-sensitive and mission-critical, like autonomous driving. Such applications cannot afford the roundtrip time, nor can they allow critical functions to depend on dynamic wireless coverage.

In order to run inference on the device at scale, power efficiency is critical. Making neural network models smaller without sacrificing accuracy is one key component of improved power efficiency. Qualcomm AI Research is already paying dividends with leading research and state-of-the-art results in both quantization and compression, each of which improves power efficiency.
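To make the idea concrete, here is a minimal NumPy sketch of symmetric post-training quantization, one common way to shrink a model: float32 weights are mapped to int8, cutting storage by 4x at the cost of a small, bounded rounding error. This is an illustrative example of the general technique, not Qualcomm AI Research's specific method.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric post-training quantization of a float32 tensor to int8."""
    scale = np.max(np.abs(weights)) / 127.0  # map the largest magnitude to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float32 tensor from the int8 values."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0, 0.1, size=(64, 64)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# int8 storage is 4x smaller than float32; per-weight error is at most scale/2
err = np.max(np.abs(w - w_hat))
print(q.nbytes, w.nbytes, err)
```

The rounding error per weight is bounded by half the quantization step, which is why accuracy often survives quantization; research techniques go further by choosing scales per channel or fine-tuning after quantization.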

In addition, we do not think that the device should be limited to only running inference. We are also researching on-device training for targeted use cases such as gesture recognition, continuous authentication, personalized user interfaces, and precise mapping for autonomous driving, working in synergy with the cloud. In fact, we have a unique ability to explore future architectures that benefit from both high-speed connectivity and high-performance local processing, resulting in the best overall system performance. For example, distributed learning over wireless is already moving some, or even most, of the training to the devices; think of suggested queries on the Google Keyboard as a notable example.
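One well-known form of distributed learning over wireless is federated averaging: each device trains on its own private data and only model weights travel to the server, which averages them into a new global model. The sketch below, assuming a toy linear model and synthetic per-device data, illustrates the pattern; it is not Qualcomm's implementation.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, steps=5):
    """A few gradient-descent steps on one device's private data (linear model)."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])

# Each device holds its own data; the raw data never leaves the device.
devices = []
for _ in range(4):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(0, 0.01, size=50)
    devices.append((X, y))

w = np.zeros(2)  # global model held by the server
for _ in range(20):
    # Each device trains locally; only the updated weights are uploaded.
    local_ws = [local_update(w.copy(), X, y) for X, y in devices]
    w = np.mean(local_ws, axis=0)  # the server averages the device models

print(w)  # converges toward true_w without the server ever seeing raw data
```

The privacy benefit falls out of the structure: the server aggregates weight vectors, while the training examples stay on the devices that produced them.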

Running efficient on-device AI requires heterogeneous computing

For over a decade, Qualcomm Technologies has been focused on efficient processing of diverse compute workloads within the power, thermal, and size constraints of mobile devices. Qualcomm Snapdragon Mobile Platforms have been the SoC of choice for the best-in-class mobile devices. AI workloads present new challenges in this regard. Our research into emerging neural networks has made us well positioned to evolve and extend our heterogeneous computing capabilities to address future AI workloads with a focus on maximum performance per watt. In fact, we envisioned dedicated hardware for running AI efficiently back in 2012.

By running various machine learning tasks on the appropriate compute engines already in our SoC — such as the CPU, GPU, and DSP — we offer the most efficient solution. A key example is the Qualcomm Hexagon DSP. It was originally designed for multimedia workloads that were vector math-intensive, but it has since evolved and been enhanced to efficiently run AI workloads. In fact, the Hexagon DSP with Hexagon Vector eXtensions (HVX) and the Hexagon Tensor Accelerator (HTA) on Snapdragon 865 has been shown to offer significant improvements in energy efficiency and performance when compared against running the same workloads on the CPU.

Diversity in compute engine architecture is essential, and you can’t rely on just one type of engine for all workloads. That’s why our Qualcomm AI Engine, now in its 5th generation in Snapdragon 865, consists of the CPU, GPU, and DSP. This flexibility allows the Qualcomm AI Engine to offer high performance per watt for both popular neural network architectures and novel ones.

The Qualcomm Snapdragon Ride Platform combines hardware, software, open stacks, development kits, tools, and a robust partner ecosystem to help automakers deliver on consumer demands around improved safety, convenience, and autonomous driving. It is also designed for power and thermal efficiency since battery life and heat dissipation can drastically affect the cost, design, and user experience of automobiles.

Power efficiency isn’t just important for on-device AI. Data centers that run AI training and inference consume massive amounts of energy. That’s why we have taken our expertise in the design of power efficient on-device AI inference to engineer a solution targeted for the cloud: Qualcomm Cloud AI 100.

We are democratizing AI

It is not enough just to have great hardware. Making it easy for developers to take advantage of heterogeneous computing is challenging but essential. To bridge that gap, we have introduced the Qualcomm Neural Processing Software Developer Kit (SDK). This features an accelerated runtime for on-device execution of convolutional neural networks (CNN) and recurrent neural networks (RNN) — which are great for tasks like image recognition and natural language processing, respectively — on the appropriate Snapdragon engines, like the Qualcomm Kryo CPU, Qualcomm Adreno GPU, and Hexagon DSP. The same developer API provides access to each of our engines so developers can easily switch their AI tasks from one to another.
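The "one API, many engines" idea can be sketched as a simple strategy pattern: application code targets a single interface, and the engine behind it can be swapped without changing the calling code. The names below are purely illustrative and are not the actual Qualcomm Neural Processing SDK API; the DSP runtime here merely simulates dispatch.

```python
from abc import ABC, abstractmethod

class Runtime(ABC):
    """Illustrative only: a common interface over heterogeneous engines."""
    @abstractmethod
    def execute(self, model, inputs): ...

class CpuRuntime(Runtime):
    def execute(self, model, inputs):
        return [model(x) for x in inputs]

class DspRuntime(Runtime):
    def execute(self, model, inputs):
        # A real SDK would dispatch to accelerator firmware here;
        # this sketch just performs the same computation.
        return [model(x) for x in inputs]

def run(model, inputs, runtime: Runtime):
    # Application code targets one interface and swaps engines freely.
    return runtime.execute(model, inputs)

double = lambda x: 2 * x
print(run(double, [1, 2, 3], CpuRuntime()))  # [2, 4, 6]
print(run(double, [1, 2, 3], DspRuntime()))  # [2, 4, 6]
```

Because the interface stays constant, moving a workload from the CPU to the DSP becomes a one-line change in application code rather than a rewrite.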

The SDK also supports common deep learning model frameworks, such as TensorFlow, PyTorch, and ONNX. The SDK is designed to be lightweight, flexible, and deliver optimal performance per watt by leveraging Snapdragon technology. In addition, the SDK is designed to enable developers and OEMs in a broad range of industries, from health care to automotive, to run their own proprietary neural network models. We are amazed and encouraged by the applications that the AI ecosystem has developed on our AI Engine in a short amount of time. We look forward to collaborating more with the AI ecosystem to introduce transformative experiences and enhance our everyday lives.

The 5G era with distributed intelligence

AI and 5G are synergistic, working together to revolutionize industries and enable new experiences. In a hyperconnected world in which virtually everyone and everything is connected, data needs to be processed close to the source in order to scale and meet immediacy, privacy, and security requirements. We envision AI processing being distributed across the central cloud, the edge cloud, and the device, depending on the use case requirements. The AI processing that happens on, or close to, the device is what we call the intelligent wireless edge.

Applying AI not only to the 5G network but also to the device will lead to more efficient wireless communications, longer battery life, and enhanced user experiences. For example, AI will improve the 5G end-to-end system through increased radio awareness, which will bring a variety of improvements, such as enhanced device experience, improved system performance, and better radio security.

The low latency and high capacity of 5G will also allow AI processing to be distributed amongst the device, edge cloud, and central cloud, enabling flexible system solutions for a variety of use cases. For example, low latency will enable personalized shopping experiences, the control of machinery in the factory of the future, and real-time voice translation. In essence, on-device AI processing will be augmented by the cloud and edge cloud over low latency 5G.

AI and 5G will also work hand in hand to realize distributed learning over wireless. We see fully distributed AI with lifelong on-device learning that allows for the next level of personalization with privacy.

This is not just a vision. 5G is here today and operators are deploying edge cloud with computing capabilities. Qualcomm Technologies is accelerating this roll out by offering comprehensive solutions for distributed intelligence, including power-efficient on-device AI processing with Snapdragon, leading Snapdragon 5G modem-RF systems, and the Qualcomm Cloud AI 100 for power-efficient AI inference in the cloud and edge cloud.

Continued AI research to create breakthroughs

We are in the early days of the machine learning journey, and deep learning is just one of many machine learning technologies with the potential to transform computing. To pursue even more ambitious applications, we continue to advance on multiple fronts, from specialized hardware architectures and advanced algorithms to applied AI. Our fundamental AI research, like gauge equivariant CNNs and Bayesian deep learning, along with our applied research, is making it possible to run more applications on the device, like voice UI.

The opportunities for always-on intelligence, where all or most of the processing happens on the device, are enormous, and we look forward to advancing state-of-the-art machine learning through both research and commercialization. Today, our AI solutions provide highly responsive, highly secure, and intuitive user experiences through power-efficient machine learning. The future with distributed learning over wireless holds promise to improve personalization while preserving privacy.

Stay tuned for future blog posts about the future of AI, our research breakthroughs, and applications of machine learning. If you are excited about solving big problems with cutting-edge technology and improving the lives of billions of people, we’d like to hear from you.


Qualcomm Adreno, Qualcomm Hexagon, Qualcomm Kryo, Qualcomm Snapdragon, Qualcomm Cloud AI, Qualcomm Snapdragon Ride, and Qualcomm Neural Processing SDK are products of Qualcomm Technologies, Inc. and/or its subsidiaries. Qualcomm AI Research is an initiative of Qualcomm Technologies, Inc.



Opinions expressed in the content posted here are the personal opinions of the original authors, and do not necessarily reflect those of Qualcomm Incorporated or its subsidiaries ("Qualcomm"). Qualcomm products mentioned within this post are offered by Qualcomm Technologies, Inc. and/or its subsidiaries. The content is provided for informational purposes only and is not meant to be an endorsement or representation by Qualcomm or any other party. This site may also provide links or references to non-Qualcomm sites and resources. Qualcomm makes no representations, warranties, or other commitments whatsoever about any non-Qualcomm sites or third-party resources that may be referenced, accessible from, or linked to this site.

The OnQ Team

©2021 Qualcomm Technologies, Inc. and/or its affiliated companies.

References to "Qualcomm" may mean Qualcomm Incorporated, or subsidiaries or business units within the Qualcomm corporate structure, as applicable.

Qualcomm Incorporated includes Qualcomm's licensing business, QTL, and the vast majority of its patent portfolio. Qualcomm Technologies, Inc., a wholly-owned subsidiary of Qualcomm Incorporated, operates, along with its subsidiaries, substantially all of Qualcomm's engineering, research and development functions, and substantially all of its products and services businesses. Qualcomm products referenced on this page are products of Qualcomm Technologies, Inc. and/or its subsidiaries.

Materials that are as of a specific date, including but not limited to press releases, presentations, blog posts and webcasts, may have been superseded by subsequent events or disclosures.

Nothing in these materials is an offer to sell any of the components or devices referenced herein.