Industry

NVIDIA End-to-End Self-Driving

In a tech talk at the University of Toronto, NVIDIA shared some updates on its self-driving car research and End-to-End Learning

On January 19th, NVIDIA hosted a tech talk at the University of Toronto, where it shared some updates on its research into self-driving cars and End-to-End Learning, as well as some experiences and fun facts from AI development.

NVIDIA Corporation is an American technology company. Its main products are graphics processing units (GPUs) for the gaming market, and system-on-a-chip (SoC) units for the mobile computing and automotive markets. However, the company’s direction recently underwent a big change as GPUs proved their power in AI research: GPUs now act as the brains of computers, robots and self-driving cars that can perceive and understand the world. Today, NVIDIA is increasingly known as “the AI computing Company”.[1]

Owning up to this newfound fame, NVIDIA is putting massive effort into AI computing research. Last year, NVIDIA teamed up with New York University’s deep learning team and started a research collaboration at its new auto-tech office in New Jersey [2]. The team includes technology experts such as Urs Muller, chief architect of autonomous driving at NVIDIA; NYU professor Yann LeCun, a deep learning pioneer and inventor of the convolutional neural network; and Larry Jackel, machine learning advisor for NVIDIA and a former DARPA program manager. Together, they will extend beyond current NVIDIA technology and engineering to create groundbreaking autonomous driving technology.

During the talk, Larry shared some experiences from his life as an AI researcher. Back in 1986, when he was working at Bell Labs, AI technology was still very primitive, and recurrent nets were blunt and inefficient. In 1988, Yann LeCun joined the lab and built the first “LeNet” OCR engine with learned features. Its performance was excellent, but the team still had no systematic understanding of what governed the learning process. “And this is really where deep learning started,” said Larry Jackel. Later, in the early 1990s, with Vladimir Vapnik’s help, the team gained a much better understanding of the learning process and its applications.

Larry also shared some fun stories from the old days, like the bets he made with Vapnik.


Figure 1. Bets between Larry and Vapnik.


Figure 2. Results of the 1995 bets.

From the pictures, we can see that those bets were a projection of the way deep learning would develop: neural networks are still quite popular decades after they were first created.

June 2016: End to End Learning for Self-Driving Cars by NVIDIA

As mentioned before, Larry is a former DARPA program manager. DARPA has run many challenges, such as the Grand Challenges of driving in the desert (2004, 2005), the DARPA Urban Challenge (2007), and the DARPA Robotics Challenge (2015). The Urban Challenge is directly connected to self-driving research; CMU won it, and its approach became the core of many self-driving efforts. It required HD mapping, obstacle detection, cost maps and path planning. Although CMU’s performance was very impressive at the time, the vehicle was following an extremely detailed pre-planned path, so it was still a long way from a real self-driving car.

Larry provided a flowchart of the usual approach to self-driving cars. Data flows from the sensors to feature extraction and object recognition (often combined in convolutional nets), then into a cost map, while detailed maps are used to localize the car’s position. Together, the localized position and the cost map provide the inputs required for path planning and actuation.


Figure 3. Flowchart of the usual approach for self-driving cars.
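The modular pipeline in Figure 3 can be sketched as a sequence of stages. The sketch below is purely illustrative: every function name and data format is a hypothetical placeholder, not NVIDIA's actual code.

```python
# Illustrative sketch of the classical modular self-driving pipeline:
# sensors -> features -> cost map (+ localization) -> path planning.
# All functions and data shapes are hypothetical placeholders.

def extract_features(sensor_frame):
    # Feature extraction / object recognition (often a convolutional net)
    return {"obstacles": sensor_frame.get("obstacles", [])}

def build_cost_map(features):
    # Assign a traversal cost to each region; obstacle cells are blocked
    return {"blocked_cells": set(features["obstacles"])}

def localize(sensor_frame, hd_map):
    # In a real system: match sensor data against a detailed prior map
    return sensor_frame.get("gps", (0, 0))

def plan_path(position, cost_map, goal):
    # Pick the next grid cell toward the goal, avoiding blocked cells
    x, y = position
    gx, gy = goal
    step = (x + (gx > x) - (gx < x), y + (gy > y) - (gy < y))
    return step if step not in cost_map["blocked_cells"] else position

def drive_one_step(sensor_frame, hd_map, goal):
    features = extract_features(sensor_frame)
    cost_map = build_cost_map(features)
    position = localize(sensor_frame, hd_map)
    return plan_path(position, cost_map, goal)
```

The point of the sketch is the hand-engineered hand-off between stages: each module's output format must be designed to match the next module's input, which is exactly the coupling end-to-end learning removes.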

While Larry was running this kind of approach at DARPA, with Yann’s help it eventually evolved into End-to-End learning. End-to-End learning is a convolutional neural network (CNN) structure that directly connects the front-facing cameras to the steering commands, training a self-driving car to drive more intelligently and autonomously. Using three cameras, the system can see what the driver sees and does, recording this view, the driver’s commands, and the surrounding environment. From this data, the system can then learn an ideal driving strategy to achieve fully autonomous self-driving.

Larry explained that of the three cameras, the main camera shares the driver’s view, while the side cameras help with training and interpolating the steering angle. While driving, the network first makes a decision, compares it to the human driver’s decision, then tunes itself and repeats the process. The research team also introduced random shifts and rotations of the side-camera images, not because they helped the training directly, but because without them the driving direction would drift undesirably: e.g. the car might drive too close to the edge of the lane.


Figure 4. Flowchart of the end-to-end learning system.
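The side-camera trick can be illustrated with a small training-data sketch: each off-center view is paired with a corrected steering label that would steer the car back toward the lane center. The correction magnitude and function name below are assumptions for illustration, not NVIDIA's published parameters.

```python
# Sketch of how the two side cameras multiply the training data.
# SIDE_CAMERA_CORRECTION is a hypothetical value, not NVIDIA's.

SIDE_CAMERA_CORRECTION = 0.25  # radians; illustrative assumption

def augment_with_side_cameras(center_img, left_img, right_img, steering):
    """Return (image, steering_label) pairs for one recorded instant.

    The left camera sees the road as if the car had drifted left, so its
    label steers back to the right (larger angle), and vice versa.
    """
    return [
        (center_img, steering),                         # as actually driven
        (left_img, steering + SIDE_CAMERA_CORRECTION),  # steer back right
        (right_img, steering - SIDE_CAMERA_CORRECTION), # steer back left
    ]
```

One recorded moment thus yields three supervised examples, including "recovery" examples the human driver never actually produced, which is what keeps the learned policy from hugging the lane edge.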

To avoid unnecessary costs, the learning system can check strategies and scenarios in a simulator before applying them on the road. A library of prerecorded test videos is fed into the network, which produces steering commands; these commands update the simulated car’s position, and the next frame is then generated from the database. NVIDIA’s GPUs play a vital role in all of these training processes.
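That closed-loop evaluation can be sketched as a simple simulation step: replay recorded frames, let the network's steering output update the simulated car's pose, and measure how far it drifts from the human trajectory. The kinematic model and parameters below are simplified assumptions, not NVIDIA's simulator.

```python
import math

# Minimal closed-loop evaluation sketch (illustrative assumptions):
# both a network-driven car and a reference car following the recorded
# human commands advance under the same toy kinematics, and we report
# the final distance between them.

def simulate(frames, policy, speed=1.0, dt=0.1):
    """frames: list of (image, human_steering_angle) pairs."""
    x, y, heading = 0.0, 0.0, 0.0      # network-driven car
    hx, hy, hheading = 0.0, 0.0, 0.0   # human-driven reference car
    for image, human_angle in frames:
        angle = policy(image)          # network decides from the view
        heading += angle * dt
        x += speed * dt * math.cos(heading)
        y += speed * dt * math.sin(heading)
        hheading += human_angle * dt   # reference follows the recording
        hx += speed * dt * math.cos(hheading)
        hy += speed * dt * math.sin(hheading)
    return math.hypot(x - hx, y - hy)  # final drift from the human path

# A policy that exactly copies the human command shows zero drift
perfect_drift = simulate([("img", 0.1)] * 50, policy=lambda img: 0.1)
```

A biased policy (say, always steering twice as hard) produces a positive drift, which is the kind of discrepancy the simulator lets the team catch before a road test.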

More detail can be found in the paper End to End Learning for Self-Driving Cars.[3]

At the end of the talk, the audience asked some interesting questions. One person asked why NVIDIA chose cameras over other types of sensors, such as lidar. Larry’s response was that while driving, a lidar can capture a million points per second (if lucky), but a camera can get 30 times more data in a single snapshot. Cameras are also much cheaper than lidar sensors. But Larry also noted that the current design is not final, and anything that can help the driving system would be considered.

At CES 2017, NVIDIA introduced Xavier, an AI supercomputer for autonomous transportation. Larry mentioned that this supercomputer would be available later this year, and NVIDIA is very confident in Xavier’s performance. In the future, not only researchers but also customers might have the opportunity to experience NVIDIA’s solution.

 

Artificial Intelligence Computing Leadership from NVIDIA

As NVIDIA continues to define itself as an AI computing company, the self-driving car is not its only machine learning project. NVIDIA’s CEO Jen-Hsun Huang announced several other AI-related products at CES 2017: GeForce Now, a cloud gaming service that lets users play high-definition games on a low-spec computer through a browser; the new Shield, a family entertainment platform; and NVIDIA Spot, a tool to make the gadgets in your home intelligent. As Jen-Hsun Huang explained at CES, NVIDIA will focus on four areas: video games, VR/AR/MR, cloud computing/data centers, and self-driving.

NVIDIA is also putting a lot of effort into AI education, with GPU teaching programs and free access to CUDA (a parallel computing platform for GPUs). As recently as February 5th, NVIDIA introduced a range of Quadro® products, all based on its Pascal™ architecture, that transform desktop workstations into supercomputers with breakthrough capabilities for professional workflows across many industries.[4]

 

References

[1] http://images.nvidia.com/content/pdf/about-nvidia/nvidia-2016.pdf

[2] https://blogs.nvidia.com/blog/2016/06/10/nyu-nvidia/

[3] End to End Learning for Self-Driving Cars, https://arxiv.org/pdf/1604.07316v1.pdf

[4] http://nvidianews.nvidia.com/news/nvidia-powers-new-class-of-supercomputing-workstations-with-breakthrough-capabilities-for-design-and-engineering
*Special thanks to Mr. Larry Jackel for helping us with writing this report.

 


Analyst: Shaoyou Lu | Localized by Synced Global Team : Xiang Chen
