Chipmaker Nvidia is kicking the concept of self-driving cars into high gear with artificial intelligence technology that would let cars actually see their surroundings, promising a safer experience for passengers.
Nvidia showed off the tech at CES 2016 in Las Vegas this week in front of about 400 automakers, analysts and media. The big idea is called Drive PX 2, a supercomputing platform designed for the specific needs of the auto industry.
The supercomputer processes 24 trillion deep learning operations per second with 8 teraflops of processing power. To put that into perspective, that's 10 times the performance of the first-generation Drive PX, which more than 50 auto companies are already using. It's also equivalent to the processing power of 150 MacBook Pros -- and it's only the size of a lunchbox.
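As a back-of-the-envelope check on those figures (the numbers below come straight from the article, not from Nvidia spec sheets), the 150-MacBook comparison implies roughly 53 gigaflops per laptop, a plausible figure for a mid-2010s notebook CPU:

```python
# Figures as stated in the article (assumed, not independently verified)
drive_px2_flops = 8e12       # 8 teraflops claimed for Drive PX 2
macbook_equivalents = 150    # "equivalent to 150 MacBook Pros"

# Implied throughput per MacBook Pro, in gigaflops
per_macbook_gflops = drive_px2_flops / macbook_equivalents / 1e9
print(f"{per_macbook_gflops:.0f} GFLOPS per MacBook Pro")
```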
Volvo Demonstrates Safety
“Self-driving cars will revolutionize society,” Nvidia CEO Jen-Hsun Huang said at CES. “And Nvidia’s vision is to enable them.” Drive PX 2 carries sensors that give a vehicle a 360-degree view of its surroundings, making the rearview mirror history, he said.
As part of its broader plans to put an end to deadly crashes, Volvo will be the first manufacturer to deploy Drive PX 2. In 2017, the Swedish auto brand will lease 100 XC90 luxury SUVs equipped with the new tech. Those Volvos will drive autonomously in Gothenburg, Sweden, and semi-autonomously in other cities, Nvidia said.
The self-driving car phenomenon is about safety and productivity. Human error at the wheel causes about 93 percent of car crashes, which kill 1.3 million people annually, according to Nvidia's research. On top of that, texting while driving now kills more American teenagers than drunk driving or other causes, Nvidia said. Autonomous vehicles may be far safer than human-operated ones, according to Nvidia.
On the productivity front, Americans waste an estimated 5.5 billion hours a year sitting in traffic. That translates to approximately $121 billion a year in lost productivity, according to the Texas A&M Transportation Institute’s Urban Mobility Report.
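Those two figures, taken together as stated in the article, imply a valuation of roughly $22 for every hour spent stuck in traffic:

```python
# Figures as stated in the article (assumed, not independently verified)
hours_lost = 5.5e9      # 5.5 billion hours per year in traffic
dollars_lost = 121e9    # $121 billion per year in lost productivity

# Implied dollar value of one hour stuck in traffic
value_per_hour = dollars_lost / hours_lost
print(f"${value_per_hour:.2f} per hour")
```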
We turned to Charles King, principal analyst at Pund-IT, to get his take on the innovation. He told us the concept of self-driving cars is intriguing, but that the computational power required for autonomous vehicles is sizable.
“Nvidia's new platform shows a lot of promise both in its form factor -- a fraction of the size of some competitors' solutions -- and in the company's manufacturing partner ecosystem,” King said. “At the end of the day, even the most compelling technologies live and die on their ability to attract sizable numbers of developers and partners. Nvidia understands that dynamic fully, and has been careful in cultivating both aspects of its autonomous driving platform.”
Trio of Technologies
By handling the many hazards drivers respond to -- kids running into the street, road construction crews, heavy rain and more -- Nvidia’s Drive PX 2 goes beyond the capabilities of computer vision-based self-driving solutions. Nvidia’s advantage is in deep learning technology -- specifically, a trained deep neural network that sits on supercomputers in the cloud and taps into the experience of many tens of thousands of hours of road time.
The company said its solution for deep learning starts with Nvidia Digits, a supercomputer that trains deep neural networks on data collected during driving. Drive PX 2 taps that training to foster a safer driving experience. At the same time, a suite of software tools, modules and libraries called DriveWorks works to hasten the development and testing of self-driving cars, according to Nvidia.
DriveWorks sets the foundation for calibrating sensors that collect, synchronize, record and process streams of data through a network of algorithms running on Drive PX 2’s processors, the company said. Huang told the audience at CES that machines are already outperforming humans at tasks nobody thought possible, including recognizing images. Deep learning-trained systems can correctly identify images more than 96 percent of the time, which is better than human performance, he said.