OnQ Blog

The future of artificial intelligence lies on the edge: A Q&A with Gary Brotman about on-device AI

Artificial intelligence is not a new concept. What is new is that AI experiences that were once processed solely in the cloud are now running on-device, which allows for lower latency, greater privacy protection, and increased reliability. This is the future of AI, posits Gary Brotman, Director of Product Management - Artificial Intelligence and Machine Learning for Qualcomm Technologies, and it will pave the way for myriad new experiences for users.

We spoke with Brotman about how on-device AI will impact various industries, and the challenge of convincing people that AI is helpful, not harmful.

[This interview has been edited for clarity and length.]

What technology is currently having the greatest impact on AI?

Over the past two years, we’ve seen a migration of neural networks — algorithms that more accurately and efficiently help devices match patterns and detect anomalies — from the cloud to the edge. We can now take large, complex neural networks and optimize them to run in constrained environments like mobile phones, speakers, and connected cameras. We have the software, runtimes, and libraries to enable this acceleration on device.

What excites you the most about on-device AI?

What excites me is having a device in my possession that is cognizant of my surroundings and my behavior, and can even augment the world around me, making it richer and more compelling.

AI, the term and the execution, is becoming more acceptable to the general public, as opposed to some creepy sci-fi movie trope, and so much of it will be transparent to the end user. Actually, I shouldn’t say “transparent” because consumers shouldn’t need to be made aware that AI exists. AI really works when it’s seamless and provides a benefit without being noticed.

What’s the benefit of processing data locally versus in the cloud?

Traditionally, data has had to make the long roundtrip from device to cloud in order to, for example, answer a command from a digital personal assistant or work with the camera to classify an object like an animal, food, or landmark. Devices powered by Snapdragon, however, can perform some of that processing on device.

People are impatient, and we’re becoming increasingly demanding of real-time performance. By doing as much of the processing as possible at the edge — in this case, voice detection and recognition — and then leveraging connectivity to take advantage of the cloud, your digital assistant’s response will be instantaneous.

In addition to improved latency and faster responses, on-device AI helps ensure that your personal data remains more private. While people readily share information on social media all the time, they’re guarded when it comes to personal data and pictures. Those are files you do not, and should not, have to shuffle off to the cloud in order to experience AI-powered features. With on-device AI, those transactions can happen locally.

With regard to personal assistants, how will our interactions with them evolve?

If you use a smart assistant today, whether it’s in your phone or a smart speaker, the interaction is mechanical and far from being a conversation. In the relatively near future, devices will have larger vocabularies and the ability to pick up on tone and emotion. It won’t just be what you say, it’ll be how you say it. These are the nuances that will make the experience more human and real, as opposed to what it is today, which is fairly robotic.

Ultimately, for people to be comfortable enough to rely on their voice for control, they need to identify with this inanimate object as a companion, as opposed to a utility. That’s when AI starts to flourish: Pauses and gaps are replaced by real-time conversations that happen on and with the device.

How will on-device AI impact the IoT?

Take connected security cameras. Today, they can detect objects and people. If they see somebody coming up to the door, they can detect whether it’s a stranger or a known individual. There are also a lot of benign false positives that the cameras will pick up, such as a tree moving or drastic lighting changes when a cloud obscures the sun. Consumers don’t want to be distracted or bothered by non-events; they only want to know about the real threats to their property and safety. Ultimately, on-device AI can help connected cameras do a better job of distinguishing a real threat from something that’s not.

What are the challenges of convincing people that on-device AI is beneficial?

Consumers shouldn’t have to care whether processing is done locally or in the cloud. They first have to be introduced to something in order to really experience why it’s better for them. In this case, people will simply become accustomed to better performance: If intelligence is resident in the device, processing will be faster and the end user experience will be seamless.

Our job as a technology provider is to continue to push the performance of our devices, and on-device AI is going to make that happen.

[editor's note: this post was updated on April 23rd, 2018]

Opinions expressed in the content posted here are the personal opinions of the original authors, and do not necessarily reflect those of Qualcomm Incorporated or its subsidiaries ("Qualcomm"). Qualcomm products mentioned within this post are offered by Qualcomm Technologies, Inc. and/or its subsidiaries. The content is provided for informational purposes only and is not meant to be an endorsement or representation by Qualcomm or any other party. This site may also provide links or references to non-Qualcomm sites and resources. Qualcomm makes no representations, warranties, or other commitments whatsoever about any non-Qualcomm sites or third-party resources that may be referenced, accessible from, or linked to this site.