Computer vision is a key technology for building the algorithms that enable self-driving cars. One of the pioneering projects in this field was an experimental system from Nvidia called PilotNet. It uses a deep neural network (DNN) that takes image frames from cameras mounted on the front of a car and determines the trajectory (steering angle) to apply to the steering wheel.
PilotNet's architecture is composed of the layers shown in Figure 1:
In a nutshell, input in the form of images from the cameras is transformed through a series of convolution layers to extract features. Fully connected layers then output a single angle: how far the model believes the car's steering wheel should be turned to navigate successfully. If you were to build this model with conventional tools (e.g., in TensorFlow), it would be difficult to visualize the architecture. This is where the PerceptiLabs visual modeling tool really shines, as it allows you to see the model as you build it.
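The post doesn't spell out PilotNet's layer dimensions, but assuming the configuration from Nvidia's original PilotNet paper (five convolutions over a 66x200 input, followed by fully connected layers of 100, 50, 10, and 1 neurons), here is a small sketch of how each convolution shrinks the feature maps down to the vector that the fully connected layers consume:

```python
def conv_out(size, kernel, stride):
    # output size of an unpadded ("valid") convolution along one dimension
    return (size - kernel) // stride + 1

# (filters, kernel size, stride) -- assumed from Nvidia's PilotNet paper
LAYERS = [(24, 5, 2), (36, 5, 2), (48, 5, 2), (64, 3, 1), (64, 3, 1)]

h, w = 66, 200  # assumed input frame height and width
for filters, k, s in LAYERS:
    h, w = conv_out(h, k, s), conv_out(w, k, s)
    print(f"{filters} feature maps of {h}x{w}")

# the last feature maps are flattened and fed to the fully
# connected layers (100 -> 50 -> 10 -> 1 steering angle)
print("flattened vector length:", 64 * h * w)
```

Running this traces the maps from 24@31x98 down to 64@1x18, which matches the progressive feature extraction described above.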
With research into self-driving cars accelerating (pun intended), we thought: why not recreate the PilotNet model in PerceptiLabs to show just how easy it is to build? Then, to prove the point, we decided to do it in front of a live audience! Here's what happened:
To train this model we used sample data from Udacity's car simulator as the input, which we pre-processed (normalized) using Google Colab. The data consists of frames taken from three cameras mounted on the front of the vehicle which are collectively used to train the model on navigation:
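The post only says the frames were normalized in Google Colab, so the exact scheme is not given. A minimal sketch of one plausible preprocessing step is below: scaling pixel values into [-1, 1], plus the common (assumed, not stated in the post) trick of reusing the left and right camera frames with a small steering-angle offset so all three cameras contribute training examples:

```python
import numpy as np

def normalize_frame(frame):
    # scale 8-bit pixel values into [-1, 1]; an assumed normalization --
    # the post does not specify which scheme was used in Colab
    return frame.astype(np.float32) / 127.5 - 1.0

# the simulator logs one steering angle per timestep; a hypothetical
# correction lets the side-camera frames be labeled as well
CORRECTION = 0.2

def steering_for(camera, center_angle):
    # adjust the logged (center) angle for the left/right camera views
    return {"center": center_angle,
            "left":   center_angle + CORRECTION,
            "right":  center_angle - CORRECTION}[camera]
```

The `CORRECTION` value and `steering_for` helper are illustrative names, not part of the Udacity simulator or PerceptiLabs APIs.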
The resulting PilotNet model looks as follows in PerceptiLabs:
As our model and the video above show, you can quickly and easily create models suitable for self-driving automotive applications in the PerceptiLabs visual modeling tool. Furthermore, you can visually inspect the resulting feature maps from the different convolution layers in the model, and watch as the model develops navigational intelligence during training.