OnQ Blog

Teaching cars to see with AI [video]

Feb 20, 2020

Qualcomm products mentioned within this post are offered by Qualcomm Technologies, Inc. and/or its subsidiaries.

The future looks bright for smart and safe transportation as the core technologies required for autonomous driving are developing rapidly. But what does it take for a car to see and safely navigate the world? At Qualcomm Technologies, we’ve been actively researching and developing solutions that lay the foundation for autonomous driving.  

Diverse sensors complement each other for autonomous driving

Just as humans perceive the world through their eyes, ears, and nose while driving, cars are learning to perceive the world through their own set of diverse sensors. These sensors, such as cameras, radar, lidar, ultrasonic sensors, and cellular vehicle-to-everything (C-V2X), complement each other, since each has its own strengths.

Cameras, radars, and lidars have diverse strengths and complement each other.

Cameras are affordable and excel at interpreting visual detail in the car's environment, such as reading text on a road sign. Lidar creates a high-resolution 3D representation of the surroundings and works well in all lighting conditions. Radar is affordable and responsive, has long range, measures velocity directly, and isn't compromised by lighting or weather conditions. A car sees best when it combines all of these sensors, a technique known as sensor fusion, which is discussed further below.

Machine learning research to make radar more perceptive

Each of these sensors is becoming increasingly cognitive, allowing vehicles to better understand the world, so that they can navigate autonomously. One sensor that is expected to be invaluable for self-driving vehicles is radar. We wondered if we could make radar even more useful with AI.

With radar, the receiver captures reflected radio waves. Traditional radar algorithms reduce the received signal to a sparse point cloud, and then analyze it to draw conclusions about surrounding objects. The problem with this process is that a lot of details get lost in the data reduction. So, our AI research team set out to find a way to analyze the raw radar signal directly.
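To make the data-reduction problem concrete, here is a minimal sketch of a simulated radar frame. The signal model, dimensions, and threshold are illustrative assumptions, not Qualcomm's pipeline: a 2D FFT turns the raw I/Q frame into a range-Doppler map, and a simple threshold then collapses that map into a sparse point cloud, discarding everything below it.

```python
import numpy as np

def range_doppler_map(iq_frame):
    """A 2D FFT over (fast-time, slow-time) gives a range-Doppler magnitude map."""
    return np.abs(np.fft.fftshift(np.fft.fft2(iq_frame), axes=1))

def to_point_cloud(rd_map, threshold):
    """Traditional pipeline: keep only cells above a detection threshold.
    Everything below it -- weak reflections, object extent, Doppler
    texture -- is discarded before any object-level reasoning happens."""
    rows, cols = np.nonzero(rd_map > threshold)
    return list(zip(rows.tolist(), cols.tolist()))

# Simulated frame: noise plus one strong and one weak reflector.
rng = np.random.default_rng(0)
frame = rng.normal(scale=0.1, size=(64, 32)) + 1j * rng.normal(scale=0.1, size=(64, 32))
t_fast = np.arange(64)[:, None]
t_slow = np.arange(32)[None, :]
frame += 5.0 * np.exp(2j * np.pi * (0.20 * t_fast + 0.10 * t_slow))  # strong target
frame += 0.8 * np.exp(2j * np.pi * (0.35 * t_fast - 0.05 * t_slow))  # weak target

rd = range_doppler_map(frame)
points = to_point_cloud(rd, threshold=0.5 * rd.max())
# The sparse point cloud keeps only a handful of cells, while the full
# 64 x 32 tensor `rd` remains available as input to a learned model.
```

A machine-learning model fed the full tensor can exploit the weak, sub-threshold structure that the point-cloud step throws away.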

We found our solution in machine learning. By applying machine learning directly to the radar signal, we’ve improved virtually all existing radar capabilities, enhanced overall vehicle “sight,” and taught radar to detect both objects and their size. For example, we expect AI radar to be able to draw a bounding box around difficult-to-detect classes, such as bicyclists and pedestrians.

With our AI radar research, we expect new capabilities like drawing a bounding box around difficult-to-detect classes.

With this research, AI-enhanced radar can help inform the split-second decisions made on the road. Check out the video below and read our paper, “Vehicle Detection With Automotive Radar Using Deep Learning on Range-Azimuth-Doppler Tensors,”1 to learn more.

Sensor fusion harnesses the value of diverse sensors

Beyond radar research, we’ve also explored sensor fusion for autonomous driving. Sensor fusion is not new for us: we’ve applied it to drone navigation, XR head-pose tracking, and precise automobile positioning. For autonomous driving, sensor fusion is about building a precise, real-time understanding of the world in order to make critical decisions.

AI sensor fusion of diverse sensors allows for a robust understanding of the environment.

This can be very challenging in an automotive environment where there are many fast-changing variables, ranging from weather and road conditions to varying driving rules and speed limits. In the paper “Radar and Camera Early Fusion for Vehicle Detection in Advanced Driver Assistance Systems,”2 we showcase the results of early fusion of camera and radar data using AI. These two sensors are highly complementary: for instance, current radars can give absolute distance estimates but not object height, whereas a camera is exceptionally good at estimating an object's height at a given distance.
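The complementarity can be illustrated with a simple pinhole-camera calculation. A camera alone measures an object's height in pixels, which is ambiguous in metres without knowing the distance; radar supplies that distance directly. The numeric values below are purely illustrative assumptions:

```python
def object_height_m(height_px, distance_m, focal_px):
    """Pinhole camera model: height_px = focal_px * height_m / distance_m,
    so metric height follows once radar provides the distance."""
    return height_px * distance_m / focal_px

focal_px = 1000.0        # assumed camera focal length, in pixels
height_px = 60.0         # object height measured in the image
radar_distance_m = 25.0  # absolute range reported by radar

print(object_height_m(height_px, radar_distance_m, focal_px))  # 1.5
```

The same 60-pixel silhouette at 50 m would correspond to a 3 m object, which is why neither sensor alone resolves the ambiguity.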

Traditional fusion algorithms perform late, or object-level, fusion: they detect objects in the two sensors separately, then try to match the detections across sensors and fuse their properties. The drawback is that low-level object features are generally not available during matching, leading to poor matching and fusion output. In our approach, we start with minimal feature extraction on both sensors and fuse the features early on, allowing the AI to use features from both sensors for both the matching and the final fusion output. Thus, the complementary capabilities of the two sensors are used more effectively, leading to better detection of 3D objects.
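The early-fusion idea described above can be sketched in a few lines. This is a toy illustration, not the network from the paper: the "feature extractor" is a stand-in for a learned convolutional backbone, and the key point is simply that per-sensor feature maps are concatenated into one joint volume before any detection step, rather than matching two separate object lists afterwards.

```python
import numpy as np

def extract_features(sensor_map, n_channels):
    """Stand-in for a small per-sensor feature extractor; a real network
    would learn these channels instead of scaling the input."""
    return np.stack([sensor_map * (c + 1) / n_channels for c in range(n_channels)])

def early_fusion(camera_map, radar_map):
    """Early fusion: extract low-level features per sensor, then concatenate
    along the channel axis so one detection head sees both sensors jointly."""
    cam_feat = extract_features(camera_map, n_channels=4)  # (4, H, W)
    rad_feat = extract_features(radar_map, n_channels=4)   # (4, H, W)
    return np.concatenate([cam_feat, rad_feat], axis=0)    # (8, H, W)

camera = np.ones((32, 32))   # toy camera plane
radar = np.zeros((32, 32))   # toy radar plane, same spatial grid
fused = early_fusion(camera, radar)
# fused.shape -> (8, 32, 32): a single joint feature volume, in contrast
# to late fusion, which would match two independent object lists.
```

In practice the two sensors live on different spatial grids (image plane vs. range-azimuth), so a real early-fusion network must also learn or apply a projection between them before concatenating.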

Paving the road to autonomous driving

Improved perception and fusion from diverse sensors create a more robust understanding of the environment, leading to better path and motion planning. Our AI and automotive research teams work on all aspects of the ADAS full-stack to support a smoother driving experience.

Our AI and automotive research teams work on virtually all aspects of the ADAS full-stack.

Our research is not meant to stay in the lab. At Qualcomm AI Research, we quickly commercialize and scale our breakthroughs across devices and industries, shortening the time between research in the lab and advances that enrich lives. For example, the recently introduced Qualcomm Snapdragon Ride Platform, which combines hardware, software, open stacks, development kits, tools, and a robust ecosystem to help automakers deliver on consumer demands for improved safety, convenience, and autonomous driving, already includes some of our research. We hope our AI automotive initiatives will contribute to saving lives in our intelligent, connected future.

If you’re excited about solving big problems with cutting-edge AI research — and improving the lives of billions of people — we’d like to hear from you. We’re recruiting for several machine learning and autonomy R&D openings.

1. “Vehicle Detection With Automotive Radar Using Deep Learning on Range-Azimuth-Doppler Tensors”; Bence Major, Daniel Fontijne, Amin Ansari, Ravi Teja Sukhavasi, Radhika Gowaikar, Michael Hamilton, Sean Lee, Slawek Grzechnik, Sundar Subramanian • Proceedings of ICCV 2019
2. “Radar and Camera Early Fusion for Vehicle Detection in Advanced Driver Assistance Systems”; Teck-Yian Lim, Amin Ansari, Bence Major, Daniel Fontijne, Michael Hamilton, Radhika Gowaikar, Sundar Subramanian • Proceedings of the NeurIPS 2019 Workshop on Machine Learning for Autonomous Driving
Qualcomm Snapdragon Ride is a product of Qualcomm Technologies, Inc. and/or its subsidiaries. Qualcomm AI Research is an initiative of Qualcomm Technologies, Inc.

Opinions expressed in the content posted here are the personal opinions of the original authors, and do not necessarily reflect those of Qualcomm Incorporated or its subsidiaries ("Qualcomm"). Qualcomm products mentioned within this post are offered by Qualcomm Technologies, Inc. and/or its subsidiaries. The content is provided for informational purposes only and is not meant to be an endorsement or representation by Qualcomm or any other party. This site may also provide links or references to non-Qualcomm sites and resources. Qualcomm makes no representations, warranties, or other commitments whatsoever about any non-Qualcomm sites or third-party resources that may be referenced, accessible from, or linked to this site.
