OnQ Blog

Meet the Snapdragon Rover and Snapdragon Micro Rover [VIDEO]

Sep 18, 2014

Qualcomm products mentioned within this post are offered by Qualcomm Technologies, Inc. and/or its subsidiaries.

In the past few years, Qualcomm Research has increased its investment in bringing innovation to robotics. Last year, we took you inside the Qualcomm Research labs and showed you a robotics project powered by Qualcomm® Zeroth™ brain-inspired computing, which drew a lot of interest. Today we are unveiling more exciting Qualcomm® Snapdragon™-powered robotics work.

This morning at Uplinq 2014, Qualcomm’s annual developer conference, Qualcomm CEO Steve Mollenkopf demonstrated advances in robotics using two new robots we call the Snapdragon Rover and the Snapdragon Micro Rover.

The Snapdragon Rover has learned to classify objects that it sees with its depth-sensing camera. The images from the camera are processed using brain-inspired machine learning techniques known as "deep learning," which Zeroth provides.

The Snapdragon Micro Rover is a DIY robot for enthusiasts—if you have access to a 3D printer and a smartphone with a Snapdragon processor, you can build it. The smartphone’s Snapdragon processor acts as the robot’s “brains.” Qualcomm Research has made the design tools available at www.qualcomm.com/robots.

Watch the video for the full demo, and read on for more details.

More about Snapdragon Rover…

The Snapdragon Rover is built around the Qualcomm Snapdragon processor to provide an integrated, low-power solution for multiple robotics applications (computer vision, sensors, navigation and wireless communication). Zeroth is embedded on the Rover, enabling deep machine learning. In the following demo, we have the Snapdragon Rover identify and classify our toys and put them away appropriately. The Rover first finds a toy, then identifies what type it is. Once the user teaches the Rover which bin that toy goes into, it remembers the bin for every toy of that type without having to be taught again. Eventually it learns all the types of toys and puts them away in the proper bins.

The Snapdragon Rover uses the Qualcomm Vuforia Mobile Vision Platform to identify the bin and navigate to it, while Zeroth provides the deep learning classifier that identifies each toy. Here's a closer look at how those tools work together (a simplified code sketch follows the list):

A depth-sensing camera is mounted underneath the base of the Snapdragon Rover. This camera, combined with computer vision algorithms, allows the Rover to find a toy.

Then, the robot’s Zeroth deep learning classifier identifies what the toy is.

When the Snapdragon Rover encounters a type of toy for the first time, it asks the user which bin it should be placed in.

The Qualcomm Vuforia Mobile Vision Platform is used to identify the bin and navigate to it.

Once the Snapdragon Rover has learned what to do with a type of toy, it can clean up similar toys autonomously.
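To make the teach-once-then-remember behavior concrete, here is a minimal Python sketch of the control loop described above. It is purely illustrative: the class and function names (ToySortingRover, FakeVision, FakeClassifier and so on) are hypothetical stand-ins and do not come from the Zeroth or Vuforia APIs. Only the flow mirrors the demo: find a toy, classify it, ask the user the first time a type is seen, remember the bin, then navigate and drop.

```python
# Hypothetical sketch of the toy-sorting workflow described above.
# None of these names reflect actual Qualcomm Zeroth or Vuforia APIs;
# they only illustrate the control flow of the demo.

class ToySortingRover:
    def __init__(self, classifier, vision, ask_user):
        self.classifier = classifier   # deep-learning classifier (Zeroth's role in the demo)
        self.vision = vision           # depth camera + navigation (Vuforia's role in the demo)
        self.ask_user = ask_user       # callback used the first time a toy type is seen
        self.bin_for_type = {}         # learned mapping: toy type -> bin

    def sort_one_toy(self):
        image = self.vision.find_toy()              # locate a toy with the depth-sensing camera
        toy_type = self.classifier.classify(image)  # identify what kind of toy it is

        if toy_type not in self.bin_for_type:
            # First encounter: ask the user, then remember the answer.
            self.bin_for_type[toy_type] = self.ask_user(toy_type)

        target_bin = self.bin_for_type[toy_type]
        self.vision.navigate_to(target_bin)         # find the bin and drive to it
        self.vision.drop_toy()
        return toy_type, target_bin


# Minimal stubs so the sketch runs end to end.
class FakeVision:
    def find_toy(self):
        return "depth-image-of-a-toy-car"
    def navigate_to(self, target_bin):
        print(f"driving to {target_bin}")
    def drop_toy(self):
        print("toy dropped")

class FakeClassifier:
    def classify(self, image):
        return "toy car"

rover = ToySortingRover(FakeClassifier(), FakeVision(),
                        ask_user=lambda toy_type: "blue bin")  # stands in for the user's answer
print(rover.sort_one_toy())  # asks (via the callback) only once; later calls reuse the learned bin
```

The key design point the demo highlights is the mapping from toy type to bin: once that mapping entry exists, the classifier alone is enough to put away every future toy of that type without further teaching.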

Learn more about Qualcomm Research and Robotics here.

PJ Jacobowitz

Senior Manager, Marketing
