Powered by Tegra X1, DRIVE PX Auto-Pilot and DRIVE CX Cockpit Computers Deliver Computer Vision, Deep Learning, Unprecedented Graphics in Cars
Bringing the world a step closer to a future of auto-piloted cars that can see and interpret the world around them, NVIDIA today introduced NVIDIA DRIVE™ automotive computers – equipped with powerful capabilities for computer vision, deep learning and advanced cockpit visualization.
NVIDIA will offer two car computers: NVIDIA DRIVE PX, for developing auto-pilot capabilities, and NVIDIA DRIVE CX, for creating the most advanced digital cockpit systems. These automotive-grade in-vehicle computers are based on the same architecture used in today’s most powerful supercomputers.
“Mobile supercomputing will be central to tomorrow’s car,” said Jen-Hsun Huang, CEO and co-founder, NVIDIA. “With vast arrays of cameras and displays, cars of the future will see and increasingly understand their surroundings. Whether finding their way back to you from a parking spot or using situational awareness to keep out of harm’s way, future cars will do many amazing, seemingly intelligent things. Advances in computer vision, deep learning and graphics have finally put this dream within reach.
“NVIDIA DRIVE will accelerate the intelligent car revolution by putting the visual computing capabilities of supercomputers at the service of each driver.”
NVIDIA DRIVE PX
The NVIDIA DRIVE PX auto-pilot development platform provides the technical foundation for cars with completely new features that draw heavily on recent developments in computer vision and deep learning.
DRIVE PX leverages the new NVIDIA® Tegra® X1 mobile super chip, which is built on NVIDIA’s latest Maxwell™ GPU architecture and delivers more than a teraflop of processing power, giving it more horsepower than the world’s fastest supercomputer of 15 years ago. DRIVE PX, featuring two Tegra X1 super chips, has inputs for up to 12 high-resolution cameras and can process up to 1.3 gigapixels per second.
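To put the stated pixel budget in perspective, a quick back-of-envelope calculation shows what 1.3 gigapixels per second means spread across the 12 camera inputs. The camera resolution below is an assumption for illustration, not an NVIDIA specification:

```python
# Back-of-envelope arithmetic (the 1080p camera resolution is an assumption,
# not an NVIDIA spec): what does a 1.3 gigapixel/sec budget buy across the
# platform's 12 camera inputs?
PIXEL_BUDGET = 1.3e9            # pixels per second, from the press release
CAMERAS = 12
PIXELS_1080P = 1920 * 1080      # ~2.07 megapixels per frame

per_camera = PIXEL_BUDGET / CAMERAS         # ~108 MP/s for each camera
fps_at_1080p = per_camera / PIXELS_1080P    # ~52 frames/sec at 1080p
print(f"{per_camera / 1e6:.0f} MP/s per camera, ~{fps_at_1080p:.0f} fps at 1080p")
```

In other words, the quoted budget is roughly enough to ingest a dozen full-HD video streams at better-than-realtime frame rates.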
Its computer vision capabilities can enable Auto-Valet, allowing a car to find a parking space and park itself, without human intervention. While current systems offer assisted parallel parking in a specific spot, NVIDIA DRIVE PX can allow a car to discover open spaces in a crowded parking garage, park autonomously and then later return to pick up its driver when summoned from a smartphone.
The deep learning capabilities of DRIVE PX enable a car to learn to differentiate various types of vehicles — for example, discerning an ambulance from a delivery van, a police car from a regular sedan, or a parked car from one about to pull into traffic. As a result, a self-driving car can detect subtle details and react to the nuances of each situation, like a human driver.
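The "learn from labeled examples" idea behind that capability can be illustrated with a deliberately tiny sketch. A real system would train deep convolutional networks directly on camera pixels; this toy instead fits a single perceptron to three hypothetical hand-picked features (roof light bar, siren grille, vehicle height), purely to show weights being learned from examples rather than hand-coded:

```python
# Illustrative toy only: DRIVE PX-class systems use deep convolutional
# networks on raw camera images. This sketch trains a single perceptron on
# hypothetical hand-picked features to show the "learn from examples" idea.

def train_perceptron(samples, labels, epochs=50, lr=0.1):
    """Online perceptron: samples are feature tuples, labels are 0/1."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                      # 0 when correct, +/-1 when wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def classify(w, b, x):
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return "emergency vehicle" if score > 0 else "ordinary car"

# Hypothetical features: (roof light bar, siren grille, vehicle height in m)
training = [(1, 1, 2.5), (1, 0, 2.7), (0, 0, 1.4),
            (0, 0, 1.5), (1, 1, 2.2), (0, 1, 1.6)]
labels   = [1, 1, 0, 0, 1, 0]

w, b = train_perceptron(training, labels)
print(classify(w, b, (1, 1, 2.4)))   # ambulance-like -> "emergency vehicle"
print(classify(w, b, (0, 0, 1.45)))  # sedan-like -> "ordinary car"
```

The point of the sketch is that no rule for "ambulance" is ever written down; the decision boundary emerges from the labeled examples, which is what lets a deep network scale the same principle to subtle visual distinctions.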
NVIDIA DRIVE CX
The NVIDIA DRIVE CX cockpit computer is a complete solution with hardware and software to enable advanced graphics and computer vision for navigation, infotainment, digital instrument clusters and driver monitoring. It also enables Surround-Vision, which provides an undistorted top-down, 360-degree view of the car in real time — solving the problem of blind spots — and can completely replace a physical mirror with a digital smart mirror.
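A top-down composite like Surround-Vision is conventionally built by inverse perspective mapping: each camera's view of the ground plane is related to the bird's-eye view by a 3x3 homography, calibrated once per camera. NVIDIA's actual pipeline is not described here; the following is a generic sketch of the underlying projection math:

```python
# Sketch of the inverse-perspective-mapping math behind a top-down surround
# view (a generic technique; not a description of NVIDIA's pipeline). Each
# camera's image of the ground plane relates to the bird's-eye view by a
# 3x3 homography H, calibrated once per camera.

def apply_homography(H, x, y):
    """Map pixel (x, y) through homography H with the projective divide."""
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return u / w, v / w

# Identity homography leaves a point unchanged (sanity check).
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(apply_homography(I, 10, 20))   # -> (10.0, 20.0)

# A perspective-style H: the nonzero H[2][1] term compresses rows further
# "up" the image, as the ground plane recedes from the camera.
H = [[1, 0, 5], [0, 1, -3], [0, 0.01, 1]]
u, v = apply_homography(H, 10, 20)   # -> (12.5, ~14.17)
```

Composing one such mapping per camera and blending the warped images into a single canvas yields the undistorted 360-degree view the prose describes.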
Available with either Tegra X1 or Tegra K1 processors and complete, road-tested software, DRIVE CX can power up to 16.8 million pixels across multiple displays — more than 10 times the pixel count of current-model cars.
Positive Industry Support
Ricky Hudi, executive vice president of Electrical/Electronics Development at AUDI AG, said: “Audi and NVIDIA share a common belief that machine learning is a powerful enhancement to our zFAS Piloted Driving technology. Thus, Audi sees DRIVE PX as a crucial tool for further research and development.”
Thilo Koslowski, vice president and distinguished analyst at Gartner, said: “The realization of smart automobiles requires high-performance processing solutions that enable sophisticated sensor fusion and innovative machine learning. This will create a new class of self-aware and ultimately self-driving vehicles that can assess, sense, understand and react to the state of their surroundings and occupants.”