Lane centering is a significant feature in the automotive industry and an important capability of advanced and autonomous vehicles, assisting drivers in staying within their lane. The objective of this work is to use camera images, processed by a lane-centering algorithm, to make steering decisions. A convolutional neural network (CNN) is trained and tested on simulated datasets with the goal of learning to steer the vehicle autonomously, achieving end-to-end learning of steering command generation: the CNN maps raw camera pixels directly to steering commands without any intermediate feature engineering. The network used is NVIDIA PilotNet, developed by NVIDIA researchers; it comprises five convolutional layers for feature extraction followed by three fully connected layers that predict the steering command. The model is trained on two different datasets to evaluate how well it performs with different types of data. The first comes from Udacity's Self-Driving Car Nanodegree Program, which provides an open-source vehicle simulator. The second is a dataset from the Mississippi State University Autonomous Vehicular Simulator (MAVS). Training minimizes the error between the predicted steering angles and the actual steering commands logged by the car. After training, the model is deployed in the Udacity Self-Driving Car Nanodegree simulator to test the car's autonomous capabilities, where it demonstrates the ability to effectively track and navigate along the road lanes.
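The sketch below illustrates the kind of PilotNet-style network and training objective described above. It is a minimal example, assuming a Keras implementation, an input resolution of 66x200x3, and the layer widths from NVIDIA's published PilotNet description; none of these specifics are confirmed by this work, and the actual implementation may differ.

```python
# Hypothetical sketch of a PilotNet-style network: five convolutional layers
# for feature extraction, a fully connected head that regresses a single
# steering command, and a mean-squared-error loss between predicted and
# logged steering angles. Framework, input shape, and layer widths are
# assumptions, not details taken from this work.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Lambda, Conv2D, Flatten, Dense

def build_pilotnet(input_shape=(66, 200, 3)):
    model = Sequential([
        # Normalize raw pixels so no hand-crafted features are required
        Lambda(lambda x: x / 255.0 - 0.5, input_shape=input_shape),
        # Five convolutional layers (feature extraction)
        Conv2D(24, (5, 5), strides=(2, 2), activation='relu'),
        Conv2D(36, (5, 5), strides=(2, 2), activation='relu'),
        Conv2D(48, (5, 5), strides=(2, 2), activation='relu'),
        Conv2D(64, (3, 3), activation='relu'),
        Conv2D(64, (3, 3), activation='relu'),
        Flatten(),
        # Fully connected head predicting the steering command
        Dense(100, activation='relu'),
        Dense(50, activation='relu'),
        Dense(10, activation='relu'),
        Dense(1),  # predicted steering angle
    ])
    # Training reduces the error between predicted and logged steering angles
    model.compile(optimizer='adam', loss='mse')
    return model
```

In use, such a model would be fit on batches of simulator camera frames paired with the steering angles recorded while driving, and the trained network would then output a steering command for each new frame during autonomous operation.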