Jun 5, 2015
Qualcomm products mentioned within this post are offered by Qualcomm Technologies, Inc. and/or its subsidiaries.
The next revolution will be robotic, just not in the way you think. We’re not talking about Terminators, but rather about the robotic gadgets and companions that will one day integrate seamlessly with everyday life, handling tasks too time-consuming or onerous for us to fathom today. But how do we get there? Chad Sweet, director of software engineering for robotic applications at Qualcomm Technologies, Inc. (QTI), explains how our current mobile reality will translate into our robotic future.
When the average person thinks about a robot, they think about Rosie from the Jetsons. But that sort of human-like robot isn’t really Qualcomm’s focus in robotics, right?
Humanoids are not something that we're actively pursuing as a strategy. What we see as the real opportunities in the near term are UAVs (unmanned aerial vehicles), service robots, community development platforms, and robots for education.
[Editor’s note: Check out this perspective on our sister blog, Spark, titled Robot evolution from the lab to the living room.]
Why drones? That area seems rather far afield from Qualcomm’s mobile core.
The goal is to leverage all the development we've done in the smartphone space around Qualcomm Snapdragon processors toward UAVs. With all of the computing power—dedicated Wi-Fi, GPUs, positioning circuitry, high-powered quad-core processors, signal processors for real-time actuation—we can create a platform that is substantially smaller, with far more capabilities than what's typically found today. It's more functionality in a smaller package. The big reason that's important is that the smaller UAVs are, the lighter they can be. The smaller they can be, the safer they are—and the more consumer friendly.
Do you have particular consumer applications in mind?
The big one right now—the one everybody is buying these for—is the flying camera: the flying selfie and the “follow-me” selfie UAV. Where we want to take this, though, is to the applications that will benefit society in the long term. There's tons of information around how applicable UAVs are to agriculture and to site inspection of things like bridges. Worldwide, the opportunities vary from country to country. In fact, having a high-powered onboard processor and all the computer vision capabilities that we've built into QTI chips will really help for inspection.
There are already systems around to do those types of tasks—inspecting crops and bridges—what’s the benefit of using a UAV?
With the type of stuff we're doing on our processors, a farmer, for instance, should be able to get a lot of data in real time. They can start high and take a general look, and if something looks suspicious they can zoom in, fly lower, and actually inspect what they think are trouble areas. All that can happen in a single flight, without having to go back, process the data, look at their computer, figure out what's going on, and then go do it again the next day.
The idea of delivery drones is hot right now. How does all this fit in there?
Having this type of computing platform onboard, paired with LTE or WAN communications, will enable autonomous use cases that fly beyond line of sight. Whether that becomes more the delivery drone use case or not, these advanced capabilities need to be built in. You're going to need onboard processing, and you're going to need to be able to communicate with this thing beyond what your remote control can typically do.
That all makes perfect sense for flying robots, which have to be responsive and light, but how does mobile technology apply to larger service robots, like butlers or hotel concierges?
Generally, the low-power characteristics of our processors become more important. A lot of that is due to the fact that a service robot—a hotel concierge robot, or a security guard robot—may be doing a lot of sitting around. The hotel concierge bot could be sitting there for 20 minutes at a time before somebody comes in, for instance. Then when it moves into action, all the computer vision and the high-performance computing power become directly applicable.
One of the things we've been actively doing is applying [Qualcomm] Zeroth to robotics. What it does is allow the robot to learn and personalize itself, or adjust itself to the environment. We’ve actually run a couple demonstrations to prove out this idea: One was a toy-sorting robot, which is a very difficult computer vision task that you’d normally need to run in the cloud, but that we can run in real time on our processors. Another one is a facial learning system, which allows, say, a concierge robot to learn a face on the fly and then learn information about you—for example, that you like Diet Coke.
So there’s a real thought process involved?
The common paradigm to describe autonomous robots is "sense, think, and act." The robot senses the world; thinks based on what it senses; and then it acts. Then it circles back and senses again. It’s an infinite loop that creates the autonomous robot.
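The sense-think-act cycle described above can be sketched as a simple control loop. The sensor reading, decision rule, and actuator command below are hypothetical stand-ins for illustration, not an actual Qualcomm or Snapdragon API:

```python
import time

def sense():
    """Read the environment (in a real robot: camera frames, IMU, range sensors)."""
    # Hypothetical fixed reading for illustration.
    return {"obstacle_distance_m": 1.5}

def think(observation):
    """Decide on an action from the latest observation."""
    if observation["obstacle_distance_m"] < 1.0:
        return "stop"
    return "move_forward"

def act(command):
    """Send the command to the motors or actuators."""
    print(f"executing: {command}")

def control_loop(iterations=3, period_s=0.0):
    """Run the sense-think-act cycle; a real robot loops indefinitely."""
    for _ in range(iterations):
        observation = sense()   # sense
        command = think(observation)  # think
        act(command)            # act
        time.sleep(period_s)    # then circle back and sense again

control_loop()
```

The loop structure is the point here: each pass closes the circle from perception back to action, which is what makes the behavior autonomous rather than pre-scripted.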
How do all these technologies translate to robotics for education?
We think Snapdragon is a great platform for education, which is what led us to our collaboration with the FIRST Tech Challenge (FTC). There's a very low intimidation factor in developing for smartphones. Every high school student has got one in their pocket. They're not scared of it. It's not the same thing as some circuit board with a bunch of wires and things sticking out. They can literally pick it up, plug it into their computer with USB, and start developing robots right away. That's really pretty cool.