OnQ Blog

Part 2: Changing an industry. Changing how we drive. [video]

Introducing Qualcomm Snapdragon Ride

Enabling safe, comfortable, and affordable autonomous driving requires solving some of the most demanding technological problems in the industry. From centimeter-level localization to multimodal sensor perception, sensor fusion, behavior prediction, maneuver planning, and trajectory planning and control, each of these functions introduces its own unique challenges to solve, verify, test, and deploy on the road.

At Qualcomm Technologies, the idea of a scalable autonomous driving platform sparked the meticulous planning and hard work that went into building the Snapdragon Ride Autonomous Stack, a robust solution that facilitates optimized implementation of autonomous driving functions such as highway autopilot. Building the stack first meant designing the right heterogeneous architecture: an optimized compute platform that scales together with the autonomous stack to form a cohesive and robust whole. In this blog post, we provide a brief introduction to some of the technology blocks powering our autonomous functionalities.

Designing a robust system for highway autopilot

A robust autonomous driving system has to deal with countless environmental conditions that make it significantly more complex than basic driver assist systems, which never take control from the driver. For example, in a highway autopilot scenario, the system has to handle numerous functions such as lane keeping, lane change maneuvers, in-lane maneuvers, traffic jams, freeway interchanges, aggressive cut-ins from other vehicles, motorcycle lane-splitting, road profile estimation, hazard detection, and more. As the system scales to self-driving in urban conditions, the complexity increases exponentially.

Besides the algorithmic challenges, the compute requirements for autonomous driving are enormous, and meeting them within tight thermal envelopes is critical for affordable deployment. For instance, a highway autopilot system needs sufficient computational horsepower to deal with the unpredictability of freeway environments. And as the understanding of worst-case environments and hard corner cases grows, the system also needs enough performance headroom for future upgrades and scalability.

The Snapdragon Ride Highway Autopilot system is designed and optimized around two critical blocks:

  1. Snapdragon Ride Autonomous Stack
  2. Snapdragon Ride Autonomous Hardware platform

Let’s dig a bit deeper into the technology pieces…

Snapdragon Ride Autonomous Stack

The Snapdragon Ride Autonomous Stack has three key components:

  1. Perception - Sensor perception & Sensor fusion
  2. Localization - High-precision localization with Qualcomm® Vision Enhanced Precise Positioning (VEPP) & Map fusion
  3. Planning - Behavior prediction & Planning

Perception and Sensor Fusion Stack

Our perception stack relies mainly on cameras and radar to perceive the environment. The current Highway Autopilot system uses a total of eight cameras and six radars distributed around the car to provide 360° coverage. Robust object detection and classification is achieved through hybrid deep learning (DL) and signal processing algorithms applied to camera and radar sensor data. We use more than 30 DL networks performing various functions, including 2D and 3D object detection and classification (cars, trucks, vans, buses, motorcycles, pedestrians), lane-type detection and classification, blinker state recognition, free space estimation, and more.

We also apply DL to radar sensor data, transforming the noisy raw radar signal into a rich, structured signal that provides bird's-eye-view (BEV) 2D bounding boxes and size estimates.
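
To make the radar-to-BEV idea concrete, here is a toy sketch, not the production DL network, that clusters sparse radar returns in the BEV plane and wraps each cluster in a 2D box. The clustering rule, thresholds, and scenario are illustrative assumptions; in the real stack a learned network performs this transformation.

```python
# Toy sketch: turning sparse, noisy radar returns into BEV 2D bounding boxes.
# A simple proximity-based clustering stands in for the learned network.
import numpy as np

def cluster_radar_returns(points, eps=1.5):
    """Greedy single-linkage clustering of radar returns in the BEV plane.

    points: (N, 2) array of (x, y) positions in meters (ego frame).
    eps:    maximum gap (m) for two returns to join the same cluster.
    """
    clusters = []
    for p in points:
        for c in clusters:
            if np.min(np.linalg.norm(np.array(c) - p, axis=1)) < eps:
                c.append(p)
                break
        else:
            clusters.append([p])        # no nearby cluster: start a new one
    return clusters

def bev_boxes(clusters):
    """Axis-aligned BEV box (x_min, y_min, x_max, y_max) per cluster."""
    return [(*np.min(c, axis=0), *np.max(c, axis=0))
            for c in map(np.array, clusters)]

# Example: returns from two vehicles ahead, plus one spurious detection.
returns = np.array([[20.1, -1.8], [21.0, -2.0], [22.3, -1.9],   # car 1
                    [35.0,  1.9], [36.1,  2.2],                 # car 2
                    [80.0, 15.0]])                               # clutter
print(bev_boxes(cluster_radar_returns(returns)))
```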

All our DL networks are optimized to run on dedicated accelerators available on Snapdragon processors, using in-house tools for quantization-aware training, hardware-aware network pruning, and kernel optimization. We also tailor network architectures to our hardware using network architecture search (NAS) techniques.
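
As an illustration of the quantization-aware training step, the minimal sketch below uses PyTorch's public eager-mode QAT API; our in-house tools are not shown here, so this only demonstrates the general technique of simulating int8 arithmetic during training so the network learns weights that survive fixed-point deployment. `TinyDetector` and its loss are hypothetical stand-ins for a real perception network.

```python
import torch
import torch.nn as nn

class TinyDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()      # fake-quantize input
        self.body = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 4, 3, padding=1))
        self.dequant = torch.quantization.DeQuantStub()  # back to float

    def forward(self, x):
        return self.dequant(self.body(self.quant(x)))

model = TinyDetector().train()
model.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
torch.quantization.prepare_qat(model, inplace=True)      # insert fake-quant ops

opt = torch.optim.SGD(model.parameters(), lr=1e-3)
for _ in range(10):                                      # ordinary training loop
    x = torch.randn(2, 3, 64, 64)
    loss = model(x).pow(2).mean()                        # dummy loss
    opt.zero_grad(); loss.backward(); opt.step()

int8_model = torch.quantization.convert(model.eval())   # real int8 kernels
```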

We have also developed a scalable and robust sensor fusion algorithm that fuses detections and classifications from all the camera and radar sensors. The sensor fusion algorithm utilizes our positioning algorithms and can optionally use an HD map and C-V2X if available. It outputs a multi-layer perception of the road-world model, including dynamic object tracks, static objects, and occupancy and occlusion grids.

Given that sensor perception can be noisy, depending on sensor range and environmental conditions such as weather, we have designed our fusion algorithm to take these uncertainties into account and propagate them to its output in various forms. For example, well-tracked objects include uncertainty metrics for each of their estimated parameters, while objects that have just appeared in the scene (e.g., approaching from far behind, or emerging from occlusion) carry a special representation of their uncertainty. This is vital for robust behavior planning algorithms that make decisions under uncertainty to achieve safety, comfort, assertiveness, and a human-like driving experience.
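
As a minimal sketch of what propagating uncertainty to the output can look like, the constant-velocity Kalman filter below carries a covariance matrix alongside each track state: the covariance grows while no measurements arrive (e.g., under occlusion) and shrinks with each update. The motion model and noise values are illustrative assumptions, not our actual fusion algorithm.

```python
import numpy as np

dt = 0.1
F = np.array([[1, dt], [0, 1]])          # constant-velocity motion model
H = np.array([[1.0, 0.0]])               # we measure position only
Q = np.diag([0.01, 0.1])                 # process noise
R = np.array([[0.5]])                    # measurement noise

x = np.array([[20.0], [5.0]])            # state: [position, velocity]
P = np.diag([4.0, 4.0])                  # large uncertainty: "just appeared"

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    return x + K @ (z - H @ x), (np.eye(2) - K @ H) @ P

for step in range(20):
    x, P = predict(x, P)
    if step < 10:                        # measurements available, then occluded
        x, P = update(x, P, z=np.array([[20.0 + 0.5 * step]]))
    # a downstream planner consumes both the estimate and its uncertainty
    print(f"t={step*dt:.1f}s pos={x[0,0]:6.2f} sigma={np.sqrt(P[0,0]):.2f}")
```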

High-Precision Localization

Developed with extensive R&D, our third-generation Vision Enhanced Precise Positioning (VEPP 3.0) algorithm combines our multi-frequency GNSS (MF-GNSS) solution with inputs from camera, IMU, and CAN sensors to achieve lane-level accuracy virtually anytime, anywhere. The solution requires no prior information such as maps or feature matching; it relies on low-level fusion of all sensor inputs and has been tested across various countries and environmental conditions.
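
While VEPP 3.0's internals are not described here, the sketch below illustrates the general principle behind low-level sensor fusion: dead reckoning from wheel odometry (CAN) accumulates drift, and each GNSS fix is folded in by inverse-variance weighting. All rates and noise figures are assumptions for illustration only.

```python
import numpy as np

def fuse(p_a, var_a, p_b, var_b):
    """Inverse-variance weighted fusion of two position estimates."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    return (w_a * p_a + w_b * p_b) / (w_a + w_b), 1.0 / (w_a + w_b)

pos, var = 0.0, 0.01                 # along-track position (m) and variance
speed = 25.0                         # from CAN wheel odometry, 25 m/s
for t in range(1, 11):               # 10 steps at 1 Hz
    pos += speed * 1.0               # dead reckoning between fixes
    var += 0.04                      # odometry drift accumulates
    gnss = 25.0 * t + np.random.normal(0.0, 1.0)   # noisy GNSS fix, sigma=1 m
    pos, var = fuse(pos, var, gnss, 1.0)           # correction step
    print(f"t={t}s fused={pos:7.2f} m sigma={np.sqrt(var):.2f} m")
```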

Map Fusion

Our map fusion algorithm takes an innovative approach, using particle filters to perform multi-hypothesis inference on inputs from VEPP 3.0 and the front and side cameras, which identify localization features on the road. Given the high signal-to-noise ratio of the VEPP 3.0 output and the multi-hypothesis particle filter, we require only a sparse HD map to achieve centimeter-level accuracy. Our stack also supports APIs for leading HD map providers.
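
To illustrate the multi-hypothesis particle filter idea, here is a toy lateral-localization filter: each particle is a hypothesis of the car's lane offset, weighted by how well the camera-observed distance to a lane marking matches a sparse map. The scenario, noise levels, and resampling scheme are illustrative assumptions, not our map fusion algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500
particles = rng.uniform(-4.0, 4.0, N)      # lateral offset hypotheses (m)
weights = np.full(N, 1.0 / N)

marking_y = 1.75                           # marking position from sparse map
true_offset = -0.6                         # ground truth (unknown to filter)

for _ in range(15):
    particles += rng.normal(0.0, 0.05, N)  # motion noise between frames
    z = (marking_y - true_offset) + rng.normal(0.0, 0.1)  # camera measurement
    expected = marking_y - particles       # each hypothesis' predicted z
    weights *= np.exp(-0.5 * ((z - expected) / 0.1) ** 2)  # Gaussian likelihood
    weights /= weights.sum()
    # systematic resampling keeps the set focused on likely hypotheses
    idx = np.searchsorted(np.cumsum(weights), (np.arange(N) + 0.5) / N)
    particles, weights = particles[idx], np.full(N, 1.0 / N)

print(f"estimated offset: {particles.mean():.2f} m (true {true_offset} m)")
```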

Behavior Prediction and Planning

Our behavior prediction algorithm takes a hybrid rule-based and machine learning (ML)-based approach: the rules account for rules of the road, lane types, and vehicle dynamics (e.g., limits on acceleration/deceleration or angular velocity), while the ML-based component leverages data collected from human drivers sharing the road with the ego car.

By its nature, prediction must be probabilistic: it considers the likelihood of different maneuvers/intentions for each dynamic agent relevant to the ego car. Besides predicting probabilistic intentions for each dynamic agent, we also need to predict the trajectory associated with each of those intentions. This is a complex, multi-dimensional problem.
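
A minimal sketch of this hybrid scheme follows: hard-coded scores stand in for an ML model's per-maneuver output, rule-based feasibility masks out illegal options, and a simple kinematic rollout provides one trajectory hypothesis per intention. The maneuver set, lane width, and limits are assumptions for illustration.

```python
import numpy as np

MANEUVERS = ["keep_lane", "change_left", "change_right", "brake"]

def predict_intentions(ml_scores, left_lane_exists, right_lane_exists):
    """Softmax over ML scores, with rules vetoing infeasible maneuvers."""
    feasible = np.array([1.0, float(left_lane_exists),
                         float(right_lane_exists), 1.0])
    logits = np.where(feasible > 0, ml_scores, -np.inf)
    p = np.exp(logits - logits.max())
    return p / p.sum()

def trajectory(speed, maneuver, horizon=3.0, dt=0.5):
    """Simple constant-speed (x, y) rollout per intention; 3.5 m lanes."""
    t = np.arange(dt, horizon + dt, dt)
    lateral = {"keep_lane": 0.0, "change_left": 3.5,
               "change_right": -3.5, "brake": 0.0}[maneuver]
    decel = 3.0 if maneuver == "brake" else 0.0
    x = speed * t - 0.5 * decel * t**2
    y = lateral * (t / horizon)            # linear lane-change ramp
    return np.stack([x, y], axis=1)

probs = predict_intentions(np.array([2.0, 0.5, 0.1, -1.0]),
                           left_lane_exists=True, right_lane_exists=False)
for m, p in zip(MANEUVERS, probs):
    print(f"{m:12s} p={p:.2f}")
print("change_left endpoint:", trajectory(20.0, "change_left")[-1])
```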

Our planning algorithm is hierarchical in nature and can be divided into maneuver planning and motion planning. Maneuver (or behavior) planning is responsible for deciding the next set of maneuvers for the ego car. Theoretically, this is a partially observable Markov decision process (POMDP), since we can only partially observe the intentions of other agents through predictions. Behavior planning's main requirement is making decisions that achieve a safe and comfortable driving experience under uncertainty.

Several factors contribute to the uncertainty in the road world model (RWM) around the ego vehicle, and they fall into two groups: uncertainty due to measurement noise and uncertainty associated with the environment. Examples of measurement noise include inherent sensor noise, errors in the perception pipelines, limited sensor range due to weather, and latency in detecting or tracking stationary or slowly-moving objects, or objects that suddenly appear from occluded areas. Examples of environmental uncertainty include hidden RWM states due to occlusion by large obstacles or sharp turns, and parameters that cannot be physically sensed, such as the intentions of other road users, including drivers, cyclists, and pedestrians.

We utilize a hybrid of rule-based and reinforcement learning-based algorithms to solve the behavior planning problem. The policies generated are then passed to a motion planning algorithm that searches over trajectories and speed profiles to execute the chosen policy. The search needs to take acceleration and jerk profiles into account, along with collision checking at each future trajectory point against the predicted trajectories from the behavior prediction block. This block can have significant compute requirements, depending on traffic density and the look-ahead time horizon for planning.
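
The sketch below illustrates that motion planning step under stated assumptions: it enumerates candidate acceleration ramps for the ego car, rejects any that exceed comfort (acceleration/jerk) limits or that would close within a minimum gap of a predicted lead-vehicle trajectory, and keeps the lowest-cost survivor. The limits, scenario, and cost function are illustrative, not the production planner.

```python
import numpy as np

DT, HORIZON = 0.2, 4.0
steps = int(HORIZON / DT)
A_MAX, JERK_MAX, GAP_MIN = 2.5, 2.0, 8.0        # comfort and safety limits

t = np.arange(1, steps + 1) * DT
lead = 30.0 + 15.0 * t                          # predicted lead-car positions

def rollout(accels, v0=22.0):
    """Integrate an acceleration profile into ego travel distance."""
    v = np.clip(v0 + np.cumsum(accels) * DT, 0.0, None)
    return np.cumsum(v) * DT

best, best_cost = None, np.inf
for a_target in np.linspace(-3.0, 2.0, 26):     # candidate braking/accel levels
    accels = np.linspace(0.0, a_target, steps)  # ramp keeps jerk finite
    jerk = np.abs(np.diff(accels)) / DT
    if np.abs(accels).max() > A_MAX or jerk.max() > JERK_MAX:
        continue                                # violates comfort limits
    if np.any(lead - rollout(accels) < GAP_MIN):
        continue                                # collides with predicted lead
    cost = np.mean(accels ** 2)                 # prefer the gentlest maneuver
    if cost < best_cost:
        best, best_cost = a_target, cost

print("no feasible profile" if best is None
      else f"selected ramp to {best:.2f} m/s^2")
```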

Snapdragon Ride Hardware Platform

Due to the complexity of real-world environments, the comfort expected of human-like driving, and robust safety needs, the computing power required for autonomous driving is growing faster than ever.

The Snapdragon Ride hardware platform is designed to support a single safety SoC, multiple safety SoCs, or a safety SoC paired with a safety accelerator, covering the various SAE levels of autonomous driving. For example, a single Snapdragon Ride SoC can support an SAE L2-to-L3 solution, allowing a highway autopilot system to operate at scale with up to 30 tera operations per second (TOPS) in a small form factor that requires only passive cooling. Multi-SoC solutions, combining ADAS SoCs with an autonomous accelerator, can power L4-L5 autonomous driving solutions for robo-taxis, delivering 700+ TOPS of performance while consuming 130 W, low enough for mass automotive deployment.
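For context, those figures work out to roughly 700 / 130 ≈ 5.4 TOPS per watt for the multi-SoC configuration, the kind of efficiency that avoids the need for complex liquid cooling.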

However, raw performance metrics such as TOPS and DMIPS alone are not enough to describe the effectiveness of an automotive compute system, and are often deceiving. This is where Qualcomm Technologies' approach of developing a comprehensive solution, pairing a robust autonomy stack with the hardware, shines. First, the stack provides deep insight into the real-world complexities of designing a full system: managing concurrency, designing a safety OS and software middleware, sensor synchronization and calibration tools, and performance optimization tools are all integral to achieving real-world performance in a predictable, efficient way.

Moreover, all this performance needs to be delivered with scalability and efficient thermal design in mind. Qualcomm Technologies' DNA of complex power management, optimization, and designing for extreme performance within tight power budgets brings a profound advantage in enabling autonomous driving solutions that do not need complex liquid cooling. For example, in the Snapdragon Ride Highway Autopilot system described above, the application processor carries the following workloads:

  • Perception & sensor fusion for eight cameras, six radars, and multiple other sensors such as MF-GNSS; this requires 30+ DL networks processing high-resolution data streams to produce hundreds of detections and classifications for dynamic objects, static objects, lane types, traffic signs and lights, and free space
  • Localization with VEPP 3.0 & map fusion
  • Behavior prediction & planning, using deep learning/reinforcement learning algorithms to build a road world model, predict the actions and trajectories of other dynamic agents sharing the road, and make decisions on the ego car's future maneuvers

Running all these workloads concurrently would demand a significantly higher TOPS count on typical application processors. With Qualcomm Technologies' advanced neural processing engines, high-throughput data pipes, and performance optimization tools, however, this Highway Autopilot performance can be realized more efficiently within a single SoC, with best-in-class TOPS per watt and a significantly simpler thermal solution.

Our comprehensive platform, with higher performance, lower power, open software, and multi-ECU aggregation capabilities, helps Tier 1 suppliers and auto OEMs scale from active safety to comfort features to full self-driving with a lower overall cost of development. And our unique, open stack combines an optimized software offering with hardware to provide greater customization and transparency, helping automakers go to market quickly.

Automotive designers now have a smooth, clear development path from active safety into the promising market segment of convenience features. Plus, as their customers start demanding features for full self-driving, Qualcomm Technologies’ open platform offers them the competitive edge of scalability, lower development costs, and shorter time to commercialization. Follow our upcoming posts in this series for technical insights into the platform. OEMs and automotive designers will see how they can build comfort and convenience into autonomous driving without starting from scratch. Meanwhile, find out more about Qualcomm Technologies’ approach to automotive compute and wireless.

 

Qualcomm Snapdragon Ride and Qualcomm Vision Enhanced Precise Positioning are products of Qualcomm Technologies, Inc. and/or its subsidiaries.

Opinions expressed in the content posted here are the personal opinions of the original authors, and do not necessarily reflect those of Qualcomm Incorporated or its subsidiaries ("Qualcomm"). Qualcomm products mentioned within this post are offered by Qualcomm Technologies, Inc. and/or its subsidiaries. The content is provided for informational purposes only and is not meant to be an endorsement or representation by Qualcomm or any other party. This site may also provide links or references to non-Qualcomm sites and resources. Qualcomm makes no representations, warranties, or other commitments whatsoever about any non-Qualcomm sites or third-party resources that may be referenced, accessible from, or linked to this site.