OnQ Blog

Has the mobile revolution sparked the robotics renaissance?

6 Mar 2015

Qualcomm products mentioned within this post are offered by Qualcomm Technologies, Inc. and/or its subsidiaries.

“Thirty seconds from history.”

In 2004, those four words signaled the start of the first DARPA Grand Challenge, a 150-mile race in which 15 of the most advanced machines ever built drove out into the Mojave Desert—and crashed. Some didn’t even make it thirty seconds.

But they did make history. It was the first time driverless cars had been tested on such a large scale, and on such a public stage. When those cars crashed, they crashed themselves—and that was a giant leap forward. The state of autonomous vehicles today is a direct result of the public interest, and private investment, generated by that first race. 

Or, Interest + Investment = Innovation.

The equation holds especially true for the mobile industry. Only, instead of Grand Challenges, mobile innovations over the past decade have been driven by consumers like you and me — by our collective desire to stay connected, to capture and share our favorite moments as they happen, and to cram all of the capabilities we can manage into a single gadget the size of a wallet (might as well cram the wallet in there too, while we’re at it). 

The “smart” part of our smartphones, the processor, is amazingly powerful and efficient. Add in stuff like cameras and imaging sensors, gyroscopes and accelerometers, altitude sensors, and long-lasting batteries—all of which are now very high quality at very low costs (because they’re manufactured for an enormous market)—and you’ve got a computing platform that’s ideally suited for autonomous robots, including self-driving cars. 

In the past, the cost of equipment was a huge barrier to entry for would-be roboticists, amateur and professional alike. Fortunately, that’s no longer the case. Robotic technology has evolved, and so have the costs of materials. Drones are a great case in point. The explosion of interest around flying robots has inspired plenty of new ideas for applications that already promise to change everyday life. To deliver on that promise, powerful processors are required to integrate a broad range of capabilities. Until recently, having multiple processors handle many different, highly distributed tasks was a pricey proposition, but now all of those tasks can be integrated into a single mobile processor chip. In just under a decade, and running parallel to the robotics industry, advances in mobile technology have set the stage for what may become something of a robotics renaissance.

Following the race in 2004, Tom Strat, deputy program manager of the DARPA Grand Challenge, was quoted by CNN as recalling that “Some of the vehicles were able to follow the GPS waypoints very accurately; but were not able to sense obstacles ahead…other vehicles were very good at sensing obstacles, but had difficulty following waypoints, or were scared of their own shadow, hallucinating obstacles when they weren’t there.”

For robots, then and now, perception is everything. The same features that make our lives more convenient—like accurate GPS for exploring a new city, or facial recognition that ensures our friends are always the focus of our photos—are indispensable to the future of robotics. Every environment a robot encounters is unfamiliar unless it has been programmed in. Even then, if you were to move an object in that space, you’d have to reprogram the robot, down to the millimeter, for each change. Those constraints become a major issue if a robot’s purpose is to identify survivors in disaster zones, or to work safely side by side with humans in a factory.

Ideally a robot—whether a self-driving car, industrial machine, or surgical assistant—would be able to look out, understand its environment, and take some sort of action that makes sense within the context it perceives. Not long ago, that required expensive laser rangefinders, additional processing capability, and a power source that could handle all of it. The costs could run into the tens of thousands of dollars. Today, for a fraction of the price, we can adapt a mobile processor, with embedded visual-processing capabilities and a host of highly accurate sensors, and give our robot this wonderful ability to see—to detect obstacles, to follow paths through an environment, and to recognize objects. Best of all: with the advances we’ve made in areas like SLAM—Simultaneous Localization And Mapping—our robot can now localize itself in relation to the world it perceives. It’s incredible, really, and it’s going to lead to all kinds of interesting new innovations.
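To make that idea a bit more concrete, here is a minimal, hypothetical sketch in Python (not Snapdragon or SLAM-library code) of the predict-and-correct loop at the heart of localization: the robot’s pose estimate drifts as it integrates wheel odometry, and a range measurement to a known landmark pulls the estimate back toward reality. A real SLAM system estimates the map and the pose together, typically with a particle filter or a factor graph; this toy only shows the flavor of the correction step.

    # Toy illustration (not production SLAM): a robot keeps an estimated pose,
    # updates it from wheel odometry, then nudges it toward what a range sensor
    # says about a landmark at a known position.
    import math

    class Pose:
        def __init__(self, x=0.0, y=0.0, theta=0.0):
            self.x, self.y, self.theta = x, y, theta

        def predict(self, distance, turn):
            """Dead-reckoning step from odometry (accumulates drift over time)."""
            self.theta += turn
            self.x += distance * math.cos(self.theta)
            self.y += distance * math.sin(self.theta)

        def correct(self, landmark, measured_range, gain=0.5):
            """Blend in a range measurement to a landmark at a known position."""
            lx, ly = landmark
            expected = math.hypot(lx - self.x, ly - self.y)
            error = measured_range - expected
            # Shift the estimate along the line to the landmark by a fraction of the error.
            bearing = math.atan2(ly - self.y, lx - self.x)
            self.x -= gain * error * math.cos(bearing)
            self.y -= gain * error * math.sin(bearing)

    robot = Pose()
    robot.predict(distance=1.0, turn=0.0)                    # odometry: drove 1 m straight
    robot.correct(landmark=(5.0, 0.0), measured_range=3.8)   # sensor: landmark is 3.8 m away
    print(f"estimated pose: x={robot.x:.2f} m, y={robot.y:.2f} m")

Swap the hand-tuned gain for a proper filter, and the known landmark for features a camera picks out on the fly, and you are headed toward the kind of onboard localization described above.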

Today, if you want to go about building a robot, you can build or buy a 3D printer. You can download code and reference guides like the ones we’ve made for the Snapdragon Micro Rover.* You can buy affordable sensors or all-in-one development boards. The bottom line is that you can build a robot yourself. If you want to create the next innovation in robotics, all you need to start is the interest—and maybe a smartphone.

*Snapdragon Micro Rover is a Qualcomm Research initiative. Qualcomm Research is a division of Qualcomm Technologies, Inc.

Anthony Lewis

Senior Director, Technology
