Autonomous Driving


Localization & Mapping: Key challenges for autonomous vehicles.

Advanced Driver Assistance Systems (ADAS) and autonomous vehicle systems fuse a rich, diverse sensor suite of computer vision (CV), LIDAR, radar, GPS, and other sensors for perception and positioning, enabling increased safety and autonomy. Unfortunately, these sensors, particularly highly accurate GPS systems, are currently too expensive for widespread adoption, yet they are ultimately necessary for achieving sub-meter vehicular localization. At the same time, autonomous vehicles need an affordable, up-to-the-minute road mapping system to understand ever-changing, dynamic road conditions so they can navigate potential obstacles while obeying the rules of the road. Sophisticated mapping solutions exist today, but providing global, real-time, high-definition (HD) map data remains a challenge that calls for low-cost, crowdsourced solutions.

Key Research Areas:

Groundbreaking innovations in continuous localization.

Our cutting-edge advancements in Visual-Inertial Odometry (VIO) leverage smartphone-grade inertial sensors (accelerometer, gyroscope) and monocular camera systems, adapting them to the automotive world. Fusing them generates continuous localization as a 6-DOF pose, which captures both the translation and the orientation of the vehicle. VIO delivers accurate timestamping, allowing precise sensor synchronization, as well as efficient processing on the Digital Signal Processor (DSP), which delivers highly optimized performance at low power. Combining VIO with smartphone-grade GPS will enable sub-meter positioning globally and centimeter-level positioning on the map at a fraction of the cost of RTK GPS systems.
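As a loose illustration of the VIO + GPS fusion idea, here is a minimal sketch, assuming a hypothetical setup in which VIO supplies a smooth but slowly drifting local position and GPS supplies noisy but drift-free global fixes; the class, method, and parameter names are illustrative, and a production system would run a full filter (e.g., an EKF) over the complete 6-DOF state.

```python
# Minimal sketch of loosely-coupled VIO/GPS fusion (illustrative only).
# Assumption: VIO drift appears as a slowly varying offset between the
# VIO frame and the global frame, which we track with an exponential filter.
import numpy as np

class VioGpsFuser:
    def __init__(self, alpha: float = 0.02):
        self.alpha = alpha        # how quickly new GPS evidence is trusted
        self.offset = None        # estimated VIO-frame -> global-frame offset

    def on_gps(self, vio_pos: np.ndarray, gps_pos: np.ndarray) -> None:
        # Update the drift estimate whenever a GPS fix arrives.
        measured = gps_pos - vio_pos
        if self.offset is None:
            self.offset = measured    # snap to the first fix
        else:
            self.offset = (1 - self.alpha) * self.offset + self.alpha * measured

    def fused_position(self, vio_pos: np.ndarray) -> np.ndarray:
        # Between GPS fixes, VIO carries the motion; the offset anchors it globally.
        return vio_pos if self.offset is None else vio_pos + self.offset
```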

Fusing VIO with GPS enables precise positioning with lane-level accuracy.

Innovating C-V2X: A key technology enabler for ADAS and automation.

V2X (Vehicle-to-X) extends the range of local sensors by enabling inter-vehicular communication and is a critical component supporting our vision for autonomous vehicles. We already support V2X using Dedicated Short Range Communications (DSRC), based on IEEE 802.11p. C-V2X (Cellular V2X) aims to reuse the upper layers of DSRC while providing better-performing lower layers with an active evolution path via 5G NR.

C-V2X is an evolution of our device-to-device capability that enabled users to discover and interact with the world around them. Building on the LTE Direct standards developed in 3GPP Releases 12 and 13, our team evolved the technology and applied it to vehicles as part of 3GPP Release 14, supporting very high relative speeds (up to 500 km/h) and the high device densities expected on busy roads. The team has also been working on future Release 15/16 enhancements that will utilize 5G NR to provide vehicular communications with additional optimized functionality for high-throughput, very-low-latency, high-reliability use cases.

Additionally, our team formed the 5G Automotive Association (5GAA), which unites the automotive and communications sectors in a common body to promote C-V2X technology. 5GAA has also formed working groups to enhance C-V2X's design and architecture and has staged large-scale live demonstrations of the technology's viability, paving the way for widespread adoption.

Connected cars as moving sensor platforms.

Our goal is to affordably empower vehicles to detect traffic signs and road lanes, localizing them first in the image frame and then converting them into the global frame to obtain their latitude/longitude and place them on a 3D map. OEMs, HD map makers, and fleet owners will be able to leverage the crowdsourced sensor, road environment model, and mapping data to get a more accurate, real-time picture of what's happening on highways and urban/suburban streets.
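To make the image-to-global conversion concrete, the sketch below lifts a detected sign from pixel coordinates to latitude/longitude, assuming a calibrated pinhole camera, a known camera pose in a local East-North-Up (ENU) frame, and an estimated depth to the sign; all names and the small-offset geodetic approximation are assumptions for illustration, not the actual pipeline.

```python
# Illustrative pixel -> lat/long conversion for a detected road sign.
# Assumptions: calibrated intrinsics K, camera pose (R, t) in a local ENU
# frame anchored at a known reference lat/lon, and an estimated depth.
import numpy as np

EARTH_RADIUS_M = 6378137.0  # WGS-84 equatorial radius

def pixel_to_latlon(u, v, depth_m, K, R_cam_to_enu, t_cam_enu, ref_lat, ref_lon):
    # Back-project the pixel to a 3D point in the camera frame.
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    p_cam = ray * (depth_m / ray[2])

    # Transform into the local East-North-Up frame using the camera pose.
    p_enu = R_cam_to_enu @ p_cam + t_cam_enu

    # Small-offset ENU -> geodetic approximation around the reference point.
    dlat = np.degrees(p_enu[1] / EARTH_RADIUS_M)
    dlon = np.degrees(p_enu[0] / (EARTH_RADIUS_M * np.cos(np.radians(ref_lat))))
    return ref_lat + dlat, ref_lon + dlon
```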

Our team has developed state-of-the-art Deep Neural Network (DNN) methods to detect, classify, and localize lane markers and road signs, with an optimized, low-complexity on-device implementation based on the Qualcomm Snapdragon™ Neural Processing Engine and the Qualcomm Symphony™ heterogeneous computational architecture. The vehicle's positioning engine uses the camera both for precise positioning and for refreshing the map. On device, the engine uses triangulation to obtain the 6-DOF poses of landmarks from the camera's 6-DOF pose estimate. These landmark pose estimates, along with the camera's 6-DOF pose estimate, are shipped to the mapping server, which performs bundle adjustment across multiple trips and cars, thus enabling crowdsourced updates of HD map positioning features.
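As one way to picture the on-device triangulation step, the sketch below uses the standard linear (DLT) two-view method to recover a landmark's 3D position (the translational part of its pose) from pixel observations and projection matrices derived from the camera's 6-DOF pose estimates; the production engine's exact formulation is not specified here.

```python
# Standard two-view DLT triangulation of a landmark (illustrative).
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D landmark from pixel observations in two frames.

    P1, P2: 3x4 camera projection matrices (intrinsics @ [R | t]),
            built from the camera's 6-DOF pose estimates.
    x1, x2: (u, v) pixel coordinates of the same landmark in each frame.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The landmark is the right null vector of A (least squares via SVD).
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenize to metric coordinates
```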

During data upload, vehicles essentially act as moving sensor platforms, collecting and sending metadata to the cloud at just tens of kilobytes per kilometer instead of raw sensor data streams at gigabytes per second. This is possible thanks to our edge analytics and the proprietary cloud processing engine that creates the map. Vehicles then use this map data for localization, achieving 10 cm accuracy in the map frame and 1 m global accuracy.
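For a sense of scale, here is a back-of-the-envelope sketch of a compact landmark observation record; the field layout and sizes are assumptions chosen for illustration, not the actual wire format.

```python
# Illustrative packing of one landmark observation (hypothetical layout):
# type (1 B) + confidence (1 B) + x, y, z as float32 (12 B) + float64
# timestamp (8 B) = 22 bytes per observation.
import struct

OBS_FORMAT = "<BBfffd"  # little-endian, no padding

def pack_observation(kind, confidence, x, y, z, timestamp):
    return struct.pack(OBS_FORMAT, kind, confidence, x, y, z, timestamp)

# Even ~500 observations over a dense kilometer is about 500 * 22 B ≈ 11 kB,
# versus gigabytes per second for raw camera streams.
```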

The HD mapping solution detects traffic signs and road lanes and places them on a 3D map.

 

Contact Us

If you find the work we're doing in autonomous driving exciting, and you have a background in machine learning, computer vision, positioning, C-V2X, or other autonomous vehicle technologies, we'd love to hear from you. Please visit us at www.qualcomm.com/company/careers to submit your resume.

Videos

Qualcomm Drive Data Platform powers TomTom HD map Location (Apr 21, 2017, 1:12)

Qualcomm® drive data platform demo (Mar 2, 2017, 3:30)

Visual Inertial Odometry (VIO) for automotive (Jan 24, 2017, 1:00)

Vertex SSD (Jan 24, 2017, 0:58)