May 5, 2014
Qualcomm products mentioned within this post are offered by Qualcomm Technologies, Inc. and/or its subsidiaries.
Contrary to popular belief, discoveries don’t occur spontaneously. Like Newton’s “falling apple” or Archimedes’ “Eureka,” they are the culminating “sparks” of well-trained minds, backed by profound knowledge and painstaking work, searching for answers. Qualcomm Research and Development’s participation in a recent ground-breaking discovery was no different. The finding, published in the prestigious journal Nature, explains how the mammalian brain detects movement. It was the result of hard work, collaboration with some of the best minds in the field, such as Prof. Sebastian Seung of MIT (now at Princeton), and support from Qualcomm Technologies, Inc.’s cutting-edge Zeroth suite of neuromorphic engineering tools.
The folks at Qualcomm Research pride themselves on solving big challenges. One such challenge is understanding the human brain—the most complex and multifaceted product of evolution. We are on a quest to map its inner workings and apply the same techniques to computing. We call this endeavor the Zeroth project.
Going back to the discovery: it synthesizes the conventional theories and explains how the eye detects movement. Let me explain. The eye, specifically the retina, is a giant “wall” of tiny cameras (visual sensors), each connected to the brain through a set of neurons (the basic processing elements of the brain). These neurons fire (spike) only when something changes in the “view” of their respective sensors. Otherwise they stay quiet. This seems like a simple and logical idea, but when you contrast it with how today’s computational “vision” works, you realize the profound impact it could have on the future of cameras.
Today’s visual processing involves dividing the whole view (i.e., the frame) into many small pixels (for example, a 10 MP camera has 10 million pixels); scanning all of the pixels several times per second (e.g., at a 24 frames/sec rate); analyzing all of that information; and storing it.
The video compression algorithms then run through all the frames, capturing a baseline and the per-pixel differences between consecutive frames. As you can imagine, all of this amounts to a lot of processing and data crunching. Even if there is no movement or change in the view, you still have to go through the whole process. Compare that to how the retina works: it spikes only on change, generating compact, highly relevant information about movement.
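To make the contrast concrete, here is a toy sketch in Python. The array sizes, the threshold, and all variable names are illustrative inventions for this post, not part of any real camera API; the sketch simply shows why a pipeline that touches every pixel does far more work than one that emits spikes only where something changed.

```python
import numpy as np

# Two consecutive 8x8 "frames": a single bright dot moves one pixel to the right.
prev_frame = np.zeros((8, 8))
next_frame = np.zeros((8, 8))
prev_frame[4, 2] = 1.0
next_frame[4, 3] = 1.0

# Frame-based pipeline: every pixel is read and compared, even static ones.
diff = next_frame - prev_frame
pixels_touched_frame_based = diff.size  # all 64 pixels are processed

# Event-based ("retina-like") pipeline: only pixels whose change exceeds a
# threshold emit a spike; everything else stays silent and costs nothing.
THRESHOLD = 0.5
events = [(r, c, "+" if diff[r, c] > 0 else "-")
          for r, c in zip(*np.nonzero(np.abs(diff) > THRESHOLD))]
pixels_touched_event_based = len(events)

print(pixels_touched_frame_based)  # 64
print(pixels_touched_event_based)  # 2 (one OFF spike, one ON spike)
```

Even in this tiny example, the event-based path handles 2 spikes instead of 64 pixel reads; at 10 MP and 24 frames per second, the gap becomes enormous whenever most of the scene is static.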
That means the amount of information to be processed is minimal, which makes the approach very energy efficient. This can be illustrated by the image shown below, taken from a retina-like camera built by Inilabs. Gray dots in the image indicate no information (no movement), while white and black dots indicate movement. A traditional camera would have to scan and process the whole frame many times to understand the scene, but the retina works only on the sensors (pixels) that detect movement, which is a small part of the image.
Just to clarify, the Inilabs camera emulates a type of neuron that detects luminance (brightness) changes in one location. Our discovery goes further and explains the mechanism of movement detection that starts in one location and ends in another. Nonetheless, the picture taken from the Inilabs camera vividly illustrates the superiority of the retinal approach to visual processing.
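The circuit reported in the Nature paper is beyond the scope of this post, but the general idea of detecting movement that starts at one location and ends at another has a classic textbook illustration: the Hassenstein-Reichardt correlator, which compares a delayed copy of one sensor’s signal with its neighbor’s current signal. The sketch below is that textbook model, not the newly discovered mechanism, and all names in it are illustrative.

```python
import numpy as np

def reichardt_response(signal_a, signal_b, delay=1):
    """Classic Hassenstein-Reichardt correlator for two neighboring sensors.

    Correlates a delayed copy of each sensor's signal with the other sensor's
    current signal, then subtracts the two mirror-symmetric arms. The sign of
    the result indicates the direction of motion across the sensor pair.
    """
    a_delayed = np.roll(signal_a, delay)
    b_delayed = np.roll(signal_b, delay)
    a_delayed[:delay] = 0  # zero out wrap-around from np.roll
    b_delayed[:delay] = 0
    return np.sum(a_delayed * signal_b) - np.sum(b_delayed * signal_a)

# A pulse of light passes sensor A first, then sensor B one time step later.
a = np.array([0., 1., 0., 0., 0.])
b = np.array([0., 0., 1., 0., 0.])

print(reichardt_response(a, b))  # positive: motion from A toward B
print(reichardt_response(b, a))  # negative: motion from B toward A
```

The delayed arm “catches up” with the pulse only when the stimulus moves in the preferred direction, so the detector responds with opposite signs for the two directions and stays at zero for a stationary stimulus.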
So, you might ask, what’s the big deal? Well, a retina-like camera can be orders of magnitude faster, less processing intensive, and more battery efficient than the traditional approach, enabling a new class of apps and services and improving existing ones. Moreover, mimicking the retina is one small but important step toward building computing devices that are able to “see” and “act” as human brains do.
As part of this pioneering work, we also built mathematical models and algorithms that emulate the retina and neurons for the whole scientific community to use and exploit for further research and testing—just what we envisioned for Zeroth from the beginning! Come back to OnQ to learn the latest and greatest about brain-inspired computing. In the meantime, check out the article in Nature.