Recent advancements in cognitive technologies—particularly machine learning—have propelled numerous innovations in areas such as image and facial recognition, natural language processing, and big data mining. IBM’s Watson helping fight diseases, Facebook’s automatic photo tagging, and the voice recognition you use with Apple’s Siri or Google Now are great examples of the benefits that machine learning techniques can deliver.
But for the most part, the intense compute requirements of machine learning restrict its implementation to large data centers. A typical computing environment for deep learning needs teraflops of compute power, tens of gigabytes of RAM, and hundreds of watts of electric power—nothing like the smartphone, tablet or PC you are using to read this blog.
Running a deep learning algorithm on a mobile device is not without obstacles. The implementation must contend not only with constraints on compute speed, memory and power compared to its server relatives, but also with restrictions imposed by thermal limits.
Qualcomm Research, a division of Qualcomm Technologies, Inc., is using its expertise in low-power embedded computing to solve the challenges of this new workload, and the effort is paying off. The objective is to run deep neural networks in real time, directly within a phone, car or personal drone, giving devices the ability to perceive, reason and act—and helping bring the benefits of so-called cognitive technologies to everyday life.
Executing machine learning on the device brings many benefits. First, security and privacy are better protected since sensitive data does not need to travel to the cloud to be processed. Responsiveness and resiliency increase because applications don't have to wait for data traveling back and forth through the network. Also, a reduced need for connectivity leaves more bandwidth for other applications and minimizes usage of data plans.
Modern voice recognition systems, for example, usually run fine on your phone using a cloud-assisted approach because speech requires very little data to be processed. But things get trickier when applications need to make decisions in real time, say, adjusting camera settings to focus on the faces of the people you care about, or helping your hobby drone navigate around trees using its video feed. The round trip to send photos and videos through the cloud and process them there would make this impossible. Processing the information natively, on the device, is the way to go, assuming the technology overcomes the thermal and energy constraints of the mobile environment. Qualcomm Technologies demonstrated this capability earlier this year when it showcased the Qualcomm Zeroth cognitive computing platform running on Qualcomm Snapdragon processors.
Above: Real-time scene recognition using on-device machine learning capabilities of the Qualcomm Zeroth platform.
Developing efficient execution of deep learning algorithms on a mobile SoC required major technical achievements and end-to-end optimizations. For example, the team was able to apply and further optimize cutting-edge academic research to compress a network to 1/10th of its original size, reducing on-device compute operations and memory requirements with minimal impact on classification accuracy. Furthermore, using 16-bit values instead of 32-bit floating-point representations allowed for even smaller networks with no impact on the measured performance.
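To get a feel for the 16-bit trick, here is a minimal sketch (not Qualcomm's actual implementation) showing how converting a layer's weights from 32-bit to 16-bit floats halves their memory footprint while introducing only tiny rounding error. The layer shape and random weights are hypothetical, chosen purely for illustration.

```python
import numpy as np

# Hypothetical fully connected layer: 1024x1024 weights stored as 32-bit floats.
rng = np.random.default_rng(0)
weights_fp32 = rng.standard_normal((1024, 1024)).astype(np.float32)

# Convert to 16-bit half precision: half the memory for the same network.
weights_fp16 = weights_fp32.astype(np.float16)

print("fp32 size (MB):", weights_fp32.nbytes / 1e6)
print("fp16 size (MB):", weights_fp16.nbytes / 1e6)

# The rounding error introduced is tiny relative to typical weight magnitudes,
# which is why classification accuracy is largely unaffected.
max_err = np.max(np.abs(weights_fp32 - weights_fp16.astype(np.float32)))
print("max rounding error:", max_err)
```

Half precision keeps about three significant decimal digits, which is usually more than enough for trained network weights; the bigger wins on a mobile SoC are the halved memory traffic and cache pressure.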
The team then implemented a heterogeneous computing runtime for neural networks on a Snapdragon mobile processor. The engineers had to make the best use of the available compute engines in the chip, running as many tasks as possible in parallel. Using custom-developed Qualcomm Basic Linear Algebra Subprograms (QBLAS) and the MARE Parallel Computing SDK, developed by Qualcomm Research in its Silicon Valley labs, the team was able to parallelize the neural computation workload efficiently.
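QBLAS and MARE are Qualcomm's own libraries, so the sketch below instead uses Python's standard `concurrent.futures` to illustrate the general idea: a neural layer's matrix math decomposes into independent blocks that a runtime can dispatch to different workers (or, on a real SoC, different compute engines). All names and shapes here are illustrative assumptions, not Qualcomm's API.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def layer_forward(x, w):
    """One dense layer: matrix-vector product plus ReLU,
    the kind of kernel a BLAS library accelerates."""
    return np.maximum(w @ x, 0.0)

def parallel_forward(x, w, workers=4):
    """Split the weight matrix into independent row blocks and compute
    them concurrently, mimicking how a runtime can spread one layer's
    work across multiple compute engines."""
    blocks = np.array_split(w, workers, axis=0)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = list(pool.map(lambda b: layer_forward(x, b), blocks))
    return np.concatenate(parts)

rng = np.random.default_rng(1)
w = rng.standard_normal((512, 256)).astype(np.float32)
x = rng.standard_normal(256).astype(np.float32)

# The blocked, parallel result matches the single-shot computation.
assert np.allclose(parallel_forward(x, w), layer_forward(x, w))
```

Because each output row depends only on its own block of weights, the split introduces no synchronization beyond the final concatenation, which is what makes this workload such a good fit for heterogeneous parallel hardware.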
The research group is now working to take deep learning-based visual perception to new levels, exploring classification of video feeds in real time and investigating the use of dedicated machine learning engines in mobile processors.
Advancements in real-time, on-device visual perception can help your phone search your library for pictures or videos of your ex-girlfriend or ex-boyfriend you want to delete or keep private. The technology can also help home security cameras quickly distinguish your pet from a burglar, avoiding some of the false alarms triggered by traditional motion-detection techniques while protecting your privacy.
On-device visual perception really shines in those instantaneous decision-making use cases where a cloud-assisted approach is definitely not an option. For example, allowing a car to alert you about a distracted pedestrian crossing the street, or helping your hobby drone stay away from walls and trees.
The following video gives you a taste of the things to come. It shows a research test to improve drone safety, where an on-device algorithm analyzes stock scenes captured by the flying cameras and determines in real time how safe or unsafe the course ahead is.
Above: Qualcomm Research tests real-time video perception using on-device deep learning to improve drone safety.
All these developments show that the benefits of machine learning will not be confined to the cloud. They will also extend to all kinds of devices at the edge of the network, making machines much more intuitive in the future and promising to simplify life for all of us.